Last week, my favoured gaming news site, VGC, asked former US PlayStation boss Shawn Layden whether he thought the pursuit of more powerful consoles was still the way to go for the video games industry. His answer was not what I expected.

“We’ve done these things this way for 30 years, every generation those costs went up and we realigned with it. We’ve reached the precipice now, where the centre can’t hold, we cannot continue to do things that we have done before … It’s time for a real hard reset on the business model, on what it is to be a video game,” he said. “We’re at the stage of hardware development that I call ‘only dogs can hear the difference’. We’re fighting over teraflops and that’s no place to be. We need to compete on content. Jacking up the specs of the box, I think we’ve reached the ceiling.”

This surprised me because it seems very obvious, but it’s still not often said by games industry executives, who rely on the enticing promise of technological advancement to drum up investment and hype. If we’re now freely admitting that we’ve gone as far as we sensibly can with console power, that does represent a major step-change in how the games industry does business.

So where should the industry go now?

  • magiccupcake@lemmy.world · 22 days ago

    I think we are entering a different era.

    Once upon a time, shrinking process nodes came with cost reductions for the same amount of compute.

    With the new bleeding-edge nodes that’s no longer true: you can still increase compute density, but the cost of the new nodes is astronomical, so prices go up too.
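
    That cost dynamic can be sketched with a toy calculation (all figures below are invented for illustration, not real foundry prices): even if a new node doubles transistor density, a wafer price that rises faster than density means the cost per transistor goes up.

```python
# Toy model of cost per transistor across process nodes.
# All numbers are hypothetical, chosen only to illustrate the trend.

def cost_per_transistor(wafer_cost_usd: float, transistors_per_wafer: float) -> float:
    """Cost (USD) of a single transistor, ignoring yield for simplicity."""
    return wafer_cost_usd / transistors_per_wafer

# Older node: cheaper wafers, lower density.
old_node = cost_per_transistor(wafer_cost_usd=6_000, transistors_per_wafer=5e12)

# Bleeding-edge node: double the density, but the wafer costs three times as much.
new_node = cost_per_transistor(wafer_cost_usd=18_000, transistors_per_wafer=1e13)

print(new_node > old_node)  # True: denser, yet more expensive per transistor
```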

    Many recent improvements are more architectural in nature, like AMD’s Zen chiplets (CCDs), which decrease costs.

    The architectural improvements will continue to scale, but node improvements are slowing; we are right on the edge of what is physically possible with silicon.

    The improvements in games have slowed a ton too.

    Each new generation of consoles has started to reach diminishing returns for graphics. Ray tracing seems more like a technology that is being pushed to sell hardware, rather than actually improving graphics efficiently.

    The next high-compute use case might need more creative solutions than simply throwing more compute at it, like eye tracking for VR (foveated rendering), which greatly reduces compute demand.
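
    The eye-tracking point can be made concrete with a back-of-the-envelope sketch (the numbers are invented assumptions, not measurements from any headset): with foveated rendering, only the small foveal region the eye is looking at is shaded at full resolution, and the periphery at a fraction of it.

```python
# Rough sketch of foveated-rendering savings. All parameters are
# hypothetical assumptions chosen for illustration.

def shading_cost(total_pixels: int, fovea_fraction: float, periphery_scale: float) -> float:
    """Relative shading cost versus shading every pixel at full resolution.

    fovea_fraction: share of the screen covered by the foveal region.
    periphery_scale: resolution scale applied to everything else (0.25 = quarter res).
    """
    fovea = total_pixels * fovea_fraction                          # full resolution
    periphery = total_pixels * (1 - fovea_fraction) * periphery_scale
    return (fovea + periphery) / total_pixels

# Example: a 10% foveal region at full res, periphery at quarter resolution.
cost = shading_cost(total_pixels=4_000_000, fovea_fraction=0.10, periphery_scale=0.25)
print(f"{cost:.1%} of the full-resolution shading work")  # roughly a third
```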

    • Coelacanth@feddit.nu · 21 days ago

      > Ray tracing seems more like a technology that is being pushed to sell hardware, rather than actually improving graphics efficiently.

      If efficiently is the key word then I agree with you. Ray Tracing is definitely still extremely expensive as far as performance goes. But I do think we’ve also seen it actually add marked improvements to the graphical impact of games.

      • wewbull@feddit.uk · 19 days ago

        …but does it add anything to the experience of playing the game?

        It certainly doesn’t affect the gameplay. You’ll still do the same things. It doesn’t enable a new game dynamic.

        All it does is push the graphical fidelity up a bit. For me, a good game can be enhanced marginally by good graphics, but a bad game is a bad game even if the graphics are stunning.

        • yonder@sh.itjust.works · 19 days ago

          In my mind, there is not much justification for raytracing outside of niche cases. Games like BOTW and CS2 get by just fine using a combination of screen-space reflections, cubemaps and pre-baked global illumination to get good-looking reflections and lighting. Raytracing seems most useful for games like Minecraft, where the world is completely dynamic and nothing can be baked ahead of time, though plenty of shaders look amazing without raytracing already.