• umbraroze@kbin.social · 10 months ago

    Yup. The robots.txt file isn’t only for blocking bots from a site outright; it’s also for steering them away from resources that aren’t interesting to human readers, even indirectly.

    For example, MediaWiki installations are pretty clever about this: by default, /w/ is blocked and /wiki/ is encouraged, because nobody wants technical pages and wiki histories in search results; they only want the current versions of the pages.
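
    A robots.txt for that kind of setup looks something like this (illustrative; exact paths vary per wiki, and `Allow` is a nonstandard extension that major crawlers nonetheless honor):

    ```
    User-agent: *
    Allow: /wiki/
    Disallow: /w/
    ```

    Crawlers that obey the file index the clean /wiki/Article URLs and skip the /w/index.php?title=...&action=history style pages underneath.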

    Fun tidbit: in the late 1990s, there was a real epidemic of spammers scraping the web pages for email addresses. Some people developed wpoison.cgi, a script whose sole purpose was to generate garbage web pages with bogus email addresses. Real search engines ignored these, thanks to robots.txt. Guess what the spam bots did?
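
    The original wpoison was a CGI script, but the trick is simple enough to sketch in a few lines of Python (everything here — names, paths, page layout — is illustrative, not the original implementation): emit a page full of bogus addresses, plus links that lead a misbehaving crawler deeper into the trap.

    ```python
    import random
    import string

    def fake_email(rng: random.Random) -> str:
        # A plausible-looking but bogus address on a reserved domain.
        user = "".join(rng.choices(string.ascii_lowercase, k=rng.randint(5, 10)))
        return f"{user}@spamtrap.example.invalid"

    def poison_page(seed: int, n_emails: int = 20, n_links: int = 5) -> str:
        rng = random.Random(seed)
        emails = "\n".join(f"<li>{fake_email(rng)}</li>" for _ in range(n_emails))
        # Links back into the trap keep a scraper that ignores robots.txt
        # crawling garbage indefinitely.
        links = "\n".join(
            f'<a href="/trap/{rng.randint(0, 10**9)}.html">more</a>'
            for _ in range(n_links)
        )
        return f"<html><body><ul>{emails}</ul>{links}</body></html>"

    if __name__ == "__main__":
        print(poison_page(seed=42))
    ```

    A well-behaved crawler never sees any of this, because /trap/ would be disallowed in robots.txt; only the bots that ignore the file fill their databases with junk.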

    Do the AI bros really want to go there? Are they asking for model collapse?

    • gayhitler420@lemm.ee · 10 months ago

      Of course they want the model collapse. Literally no American tech company has ever been about reliably and sustainably supplying a good or service, or stewarding some public good.

      They’re doing the VC money -> juice the stock -> gut the resources cycle. Nobody cares about the model.

    • Gamma@beehaw.org · 10 months ago

      Considering Reddit has decided to start selling user content for training, yeah, I guess they want their models to collapse. There’s so much bot-generated content nowadays.