You do realize it’s not that simple, right? That’s ARM, not x86, so it would be a different architecture from consoles and PCs. It would need some sort of translation layer, like Rosetta on the Mac, and that tanks performance. So no, in the short term that wouldn’t be neat.
M1 and M2 chips are so much faster that, even running x86 code through Rosetta, they outperform native x86 Macs: https://www.macrumors.com/2020/11/15/m1-chip-emulating-x86-benchmark/
Those are synthetic CPU tests, so they’re not a valid point of reference when discussing a CPU+GPU workload for an x86 game. Plus, you’re comparing with three-year-old Intel CPUs. The mobile kings right now are AMD’s APUs.
New ARM chips would also only need to match the speeds of current x86 chips, as opposed to future ones, to support the current crop of games. The idea is that new games would be compiled natively. Most games nowadays use a handful of engines, so it’s really a matter of porting the engine to the new platform.

There are a number of architectural differences that make chips like Apple’s M series and the new Qualcomm chips strictly superior to anything Intel or AMD are putting out. This article does a good overview. The gist is that there are two main advantages. The system-on-a-chip design eliminates the need for a bus, so the GPU, CPU, and any other cores can all share memory directly. The other big advantage is that RISC instructions have a fixed size, so you can read a batch of instructions, figure out which ones are independent, and then run those in parallel, and that approach scales to decoding many instructions at once. CISC instructions, on the other hand, are variable length, which makes that approach very hard to scale: AMD found that past parallelizing 3-4 instructions, the cost of figuring out dependencies exceeds the benefit of running them in parallel.
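To make the fixed vs. variable length point concrete, here’s a minimal C sketch (the byte stream and the length rule are made up for illustration; this is not a real decoder): with a fixed 4-byte encoding, the start of instruction i can be computed without looking at any other instruction, so many can be found at once, while with a variable-length encoding, finding instruction i means walking through every instruction before it.

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Toy sketch (not a real ISA decoder): why fixed-width instructions are
 * easy to decode in parallel and variable-width ones are not. */

/* Fixed-width (RISC-style): every instruction is 4 bytes, so the offset of
 * instruction i is just i * 4. Each offset is independent of the others,
 * so a wide decoder can fetch and decode many instructions at once. */
static size_t fixed_offset(size_t i) {
    return i * 4;
}

/* Variable-width (CISC-style): an instruction's length depends on its own
 * bytes, so finding where instruction i starts means walking instructions
 * 0..i-1 one after another. The work is inherently sequential. */
static size_t variable_offset(const uint8_t *code, size_t i) {
    size_t off = 0;
    for (size_t k = 0; k < i; k++) {
        /* Made-up length rule for this toy encoding: the first byte gives
         * a length of 1..15 bytes, loosely mirroring x86's 1-15 byte range. */
        off += (size_t)(code[off] % 15) + 1;
    }
    return off;
}

int main(void) {
    uint8_t code[64] = { 2, 0, 5, 0, 0, 0, 0, 1, 3, 0, 0 }; /* toy byte stream */

    for (size_t i = 0; i < 4; i++) {
        printf("insn %zu: fixed offset %zu, variable offset %zu\n",
               i, fixed_offset(i), variable_offset(code, i));
    }
    return 0;
}
```

That boundary-finding step is what makes very wide decode expensive on x86, which is the trade-off the 3-4 instruction figure above refers to.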
My overall argument here is that the chip simply has to run enough current games well enough, and that new games would target it natively. And I’d point out that the Steam Deck clearly shows that using an emulation layer as a bridge is a perfectly viable approach.
That’s not really true. It has to run a plethora of games well, both new and very old, not to mention emulators.
The fact that a different architecture might be a lot better than x86 doesn’t change the fact that PCs and consoles use x86, and that all of the emulators target that architecture as well. I don’t care how much better ARM or RISC can be; I care about being able to use the games and programs I want to use today. Unless new architectures are powerful enough to run x86 programs decently with a translation layer, their adoption will not be widespread.