• root@precious.net · 4 months ago

    I don’t know if they’re conflating rendering with display, or just assuming those GPUs run at max TDP 24/7, but they’re way off on actual energy consumption.
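
    As a rough sanity check on that point, here’s a back-of-envelope sketch comparing the energy a card would use if it really ran at max TDP 24/7 against a more realistic average draw. All figures are assumptions for illustration, not numbers from the article:

    ```python
    # Hypothetical figures: compare annual energy at max TDP 24/7
    # versus a realistic sustained draw for a display/rendering workload.
    TDP_W = 450            # assumed max TDP for a high-end GPU
    AVG_DRAW_W = 180       # assumed typical sustained draw for this workload
    HOURS_PER_YEAR = 24 * 365

    worst_case_kwh = TDP_W * HOURS_PER_YEAR / 1000
    realistic_kwh = AVG_DRAW_W * HOURS_PER_YEAR / 1000

    print(f"max-TDP-24/7 assumption: {worst_case_kwh:.0f} kWh/yr")
    print(f"realistic average draw:  {realistic_kwh:.0f} kWh/yr")
    # The gap between the two is how much an always-at-max-TDP assumption inflates the estimate.
    ```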

    There seem to be a lot of recent articles attacking datacenters, particularly those involved in LLM “AI” work. This feels like one of those articles.

    I’m not saying we shouldn’t keep them in check, but I also don’t like being manipulated by “grass roots initiative” marketing companies, particularly on Lemmy.

    • wewbull@feddit.uk · 4 months ago

      The AI numbers are pretty solid. Papers published on Hugging Face list training times and hardware platforms and convert those into CO2 estimates. Those runs are at full load for weeks or months across arrays of GPUs.
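
      For context, the usual estimate in those papers is roughly GPU-hours times average power times datacenter overhead, converted via the grid’s carbon intensity. A minimal sketch, with every figure assumed for illustration:

      ```python
      # Hypothetical training run: energy = GPUs * hours * avg power * PUE,
      # then emissions = energy * grid carbon intensity.
      GPU_COUNT = 64
      TRAINING_HOURS = 30 * 24       # assumed month-long run at full load
      AVG_GPU_POWER_KW = 0.4         # assumed ~400 W sustained per GPU
      PUE = 1.2                      # assumed datacenter overhead factor
      GRID_KG_CO2_PER_KWH = 0.4      # assumed grid carbon intensity

      energy_kwh = GPU_COUNT * TRAINING_HOURS * AVG_GPU_POWER_KW * PUE
      co2_tonnes = energy_kwh * GRID_KG_CO2_PER_KWH / 1000

      print(f"energy: {energy_kwh:,.0f} kWh -> ~{co2_tonnes:.1f} t CO2")
      ```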

      In this case, I don’t see why you’d need that kind of hardware for this application. You might be right that it’s not running at maximum load; if so, somebody has been mis-sold the hardware. Whatever it’s doing, though, it will be at a consistent load, because it’s always doing the same thing.

      • conciselyverbose@sh.itjust.works · 4 months ago

        It’s probably that only their professional-tier cards are built to handle synchronization at that scale. There are obviously other massive displays out there, but those also use specialized and expensive hardware to handle all the signal processing.