• kubica@kbin.social · 17 points · 10 months ago

    I don’t think they are going to stop storing it somewhere, just stop delivering it.

    • rho50@lemmy.nz · 13 points · 10 months ago

      Idk… in theory they probably don’t need to store a full copy of the page for indexing, and could move to a more data-efficient format if they do. Also, not serving it means they don’t need to replicate the data to as many serving regions.

      But I’m just speculating here. Don’t know how the indexing/crawling process works at Google’s scale.
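To illustrate the speculation above: serving search results only needs an index plus small snippets, not a full copy of every page. A minimal inverted-index sketch (toy data, nothing to do with Google's actual internals):

```python
from collections import defaultdict

# Toy corpus: doc id -> page text. In a real crawler this would come
# from fetched pages; here it's hard-coded for illustration.
docs = {
    "page1": "google stops serving cached pages",
    "page2": "crawlers index pages for search",
}

# Inverted index: each term maps to the set of documents containing it.
# This is far smaller than storing full page copies.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

print(sorted(index["pages"]))
```

Looking up a term then returns matching document ids without ever touching the original HTML, which is why an index-only pipeline could drop full-page storage.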

      • evatronic@lemm.ee · 2 points · 10 months ago

        Absolutely. The crawler is doing some rudimentary processing before it ever saves anything to storage. That’s the sort of thing that’s being persisted behind the scenes, and it’s almost certainly not enough to reconstruct the web page, nor is it (realistically) human-friendly. I was going to say “readable” but it’s probably some bullshit JSON or XML document full of nonsense no one wants to read.
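A rough sketch of that kind of rudimentary processing, using only Python's stdlib HTML parser: boil a page down to a compact record (title, outgoing links, visible text) instead of keeping the raw HTML. The field names and structure here are entirely made up for illustration; Google's actual crawler format is not public.

```python
from html.parser import HTMLParser

class PageSummary(HTMLParser):
    """Reduce a page to a small record -- not enough to reconstruct it."""

    def __init__(self):
        super().__init__()
        # Hypothetical record layout, invented for this example.
        self.record = {"title": "", "links": [], "text": []}
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "a":
            # Keep only the link targets, not the surrounding markup.
            for name, value in attrs:
                if name == "href":
                    self.record["links"].append(value)

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.record["title"] += data
        elif data.strip():
            self.record["text"].append(data.strip())

parser = PageSummary()
parser.feed('<html><head><title>Example</title></head>'
            '<body><p>Hello</p><a href="/about">About</a></body></html>')
print(parser.record)
```

The resulting dict keeps what indexing needs while throwing away the layout, scripts, and styling, so the stored artifact can't round-trip back to the original page.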

    • pre@fedia.io · 1 point · 10 months ago

      Seems unlikely they’ll delete it. If they’ve started deleting data, that’s quite a change. They might save on the bandwidth costs of delivering it to people, I suppose.

      Maybe it’s something to do with users feeding AIs from the Google cache? Google wanting to ensure only they can train from the google-cache.

      @kubica@kbin.social @Powderhorn@beehaw.org @rho50@lemmy.nz