Wikipedia has a new initiative called WikiProject AI Cleanup. It is a task force of volunteers currently combing through Wikipedia articles, editing or removing false information that appears to have been posted by people using generative AI.

Ilyas Lebleu, a founding member of the cleanup crew, told 404 Media that the crisis began when Wikipedia editors and users began seeing passages that were unmistakably written by a chatbot of some kind.

  • narc0tic_bird@lemm.ee · 2 months ago

    Best case is that the model used to generate this content was originally trained on data from Wikipedia, so it “just” generates a worse, hallucinated “variant” of the original information. Goes to show how stupid this idea is.

    Imagine this in a loop: AI trained on Wikipedia then alters content on Wikipedia, which in turn gets picked up by the next model being trained. It would just get worse and worse, similar to how re-encoding the same video over and over again yields continuously worse results.

    • huginn@feddit.it · 2 months ago

      See also: model collapse

      (Which is more or less just regression towards the mean with more steps)
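
      A minimal sketch of that intuition, assuming a toy setup where the “model” is just a Gaussian refit each generation on samples it generated itself (a hypothetical illustration, not the setup from any specific paper):

      ```python
      import numpy as np

      # Toy illustration of model collapse: fit a Gaussian, sample from it,
      # refit on those samples, repeat. Each generation trains only on the
      # previous generation's output, so the fitted spread drifts toward zero:
      # tails vanish first and everything regresses toward the mean.
      rng = np.random.default_rng(0)

      mu, sigma = 0.0, 1.0   # the "real" data distribution
      n = 20                 # small training set per generation

      for gen in range(301):
          samples = rng.normal(mu, sigma, n)          # generate synthetic data
          mu, sigma = samples.mean(), samples.std()   # refit on own output
          if gen % 50 == 0:
              print(f"gen {gen:3d}: mean={mu:+.3f}, std={sigma:.3f}")
      ```

      With no fresh human data in the loop, the fitted spread keeps shrinking across generations, which is the “worse and worse” behaviour described above.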

    • Wrench@lemmy.world · 2 months ago

      Yes, this is what many of us worry the internet in general will become: AI content generated from AI trained on AI garbage.

      AI bots can trivially outpace humans.

      • kboy101222@sh.itjust.works · 2 months ago

        I was just discussing with a friend of mine how we’re rapidly approaching the dead internet. At some point, many websites will likely just be chat bots talking to other chat bots, whose output then gets used to train further chat bots. Human-made content is already becoming harder and harder to find on algorithm-heavy websites like Reddit and Facebook’s suite of sites. The bots can easily outpace any algorithmic changes those sites might make to deter them: my FB-using family members all constantly block those weird Jesus accounts, and they still show up constantly.

    • 8uurg@lemmy.world · 2 months ago

      A very similar situation to the one analysed in this recently published paper: the quality of what is generated degrades significantly.

      Although they mostly investigate replacing the training data with AI-generated data at each step, so I doubt the effect will be as pronounced in practice. Human writing will still be included, and even human curation of AI-generated text can skew the distribution of the training data (as the process these editors follow inevitably would, since reasonable-looking text can slip through the cracks).
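
      A rough sketch of that distinction, under the same toy Gaussian assumption as above (the mixing fractions are made up for illustration):

      ```python
      import numpy as np

      # Compare full replacement with partially keeping "human" data: each
      # generation the model is refit on a mix of fresh samples from the real
      # distribution and samples drawn from the current model.
      rng = np.random.default_rng(1)

      def run(real_fraction, generations=300, n=20):
          """Refit a Gaussian each generation on a real/synthetic mixture."""
          true_mu, true_sigma = 0.0, 1.0
          mu, sigma = true_mu, true_sigma
          for _ in range(generations):
              n_real = int(n * real_fraction)
              real = rng.normal(true_mu, true_sigma, n_real)   # human-written data
              synthetic = rng.normal(mu, sigma, n - n_real)    # model's own output
              data = np.concatenate([real, synthetic])
              mu, sigma = data.mean(), data.std()
          return mu, sigma

      for frac in (0.0, 0.1, 0.5):
          mu, sigma = run(frac)
          print(f"real fraction {frac:.1f}: final mean={mu:+.3f}, std={sigma:.3f}")
      ```

      Full replacement (fraction 0.0) collapses, while even a modest share of real data each generation tends to keep the fit anchored, which is why the effect is probably less pronounced in practice.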

      • Blaster M@lemmy.world · 2 months ago

        AI model makers are very well aware of this, and there is a move from ingesting everything to curating datasets more aggressively. Data prep is something many upstarts don’t realize is critical, but everyone is learning it, sometimes the hard way.