“Jensen sir, 50 series is too hot”
“Easy fix with my massive unparalleled intellect. Just turn off the sensor”
If you needed any more proof that Nvidia is continuing to enshittify its monopoly and milk consumers, here it is. Hey, let’s remove one of the critical things that lets you diagnose a bad card and catch the bad situations that put a GPU at death’s door! Don’t need that shit, just buy new ones every 2 years, you poors!
If you buy a Nvidia GPU, you are part of the problem here.
I wonder if there was some other reason for this removal, e.g. some change in this generation that made the hotspot sensor reading redundant.
But yeah it’s far more likely to be for the reasons you outlined. Absolutely diabolical.
Yeah, NVIDIA is a bullshit company and has been for a while. AMD and Intel need to get their raytracing game up so they become real competitors for NVIDIA, especially now that more games require raytracing.
I’ve never bought Nvidia, but they’re becoming more like Apple every day. Why be consumer-friendly for niche PC builders? The average gamer already associates Nvidia with performance, so it’s time to rely on good ol’ brand loyalty!
The problem is, it’s not just an association. NVIDIA cards are the fastest cards, hands down. I wish Intel and AMD would provide competition on the high end, but they just don’t.
Even worse, the best next-gen AMD GPU won’t even beat AMD’s best last-gen GPU; they say this themselves.
To me, buying Nvidia for performance is like buying an APC as a daily driver for work because of its safety rating. The long-term cost does not seem worth it at all.
The APC has serious drawbacks besides the costs when used as a daily driver. I haven’t noticed any drawbacks with my GPU, and if I had the money I would have bought a faster one.
I do wish however that AMD and Intel would compete in the high end, because I am not an Nvidia fanboy. Real competition would result in lower prices and maybe also better products.
We’re in a similar situation with GPUs now to the one we were in with CPUs before Ryzen came out. Everything above a quad-core on Intel’s side was so expensive back then, and real progress wasn’t happening.
AMD has the right idea in targeting the price-to-performance crowd, people who want every dollar to count. You get diminishing returns the higher you reach, and that’s the biggest drawback to Nvidia: the abysmal value.
But I hate Nvidia from a moral perspective as well. Proprietary software, paying game developers to use that software, inflated claims that their hardware will play those games better because of that software, then convincing the general public they just have a superior product when all they’re really good at is what amounts to bribery.
Ryzen has been more effective than RX because Intel had zero ability to respond and still doesn’t; they even went and made GPUs because they couldn’t break the wall on CPUs. Before AMD crawled out of the grave, Nvidia and Intel each had a full-blown monopoly in their market. Inescapable.
I’m not interested in that continuing, so I’ll put up with a 7% worse product for $200 less and hope that throwing money at AMD gets them to compete with Nvidia on the bleeding edge like they do with Intel.
Surely if the card is damaged due to overheating, the customer won’t be blamed since they can’t keep track of the hottest part of the card, right? Right?
Haaaaahahahahahahaahahakxjvjfhorbgkfbdjdv
Funniest shit I’ve read all week
The drop in clocks in certain situations, which a lot of outlets are “conveniently” attributing to CPU limitations, has all the hallmarks of throttling… It’s hard to criticise the incumbent monopoly holder when they have a history of blacklisting outlets that espouse consumer advocacy.
Can AIBs add extra sensors for the OS to read, or will the Nvidia driver not provide that level of information?
Unlikely, as the hotspot sensors and detection logic are baked into the chip silicon and its microcode; AIBs can only change the PCB around the die. I’d almost guarantee the thermal sensors are still present to avoid fires, but if Nvidia has turned off external reporting outside the chip itself (beyond telling the driver that the thermal limit has been reached), I doubt AIBs are going to be able to crack it either.
Also, the way Nvidia operates, if an AIB deviates from Nvidia’s mandatory process, they’ll get blackballed and put out of business. So they won’t. Daddy Jensen knows best!
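For what it’s worth, anyone can check what the driver is actually willing to tell you. Here’s a minimal sketch using the NVML Python bindings (the nvidia-ml-py / pynvml package, assuming it and an Nvidia driver are installed); the documented API only exposes the core/edge temperature and the thermal thresholds, and as far as I know there’s no hotspot/junction query in it at all, which is kind of the whole point:

```python
# Minimal sketch: query what the Nvidia driver publicly exposes via NVML.
# Assumes the nvidia-ml-py (pynvml) package and an Nvidia driver are installed.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

# Core/edge temperature -- the only temperature sensor in the documented API.
edge_c = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)

# Driver-side thermal limits (throttle and shutdown points).
slowdown_c = pynvml.nvmlDeviceGetTemperatureThreshold(
    handle, pynvml.NVML_TEMPERATURE_THRESHOLD_SLOWDOWN)
shutdown_c = pynvml.nvmlDeviceGetTemperatureThreshold(
    handle, pynvml.NVML_TEMPERATURE_THRESHOLD_SHUTDOWN)

print(f"edge: {edge_c} C, slowdown at {slowdown_c} C, shutdown at {shutdown_c} C")
# Note: no hotspot/junction reading here -- monitoring tools like HWiNFO pulled
# that from deeper, vendor-specific interfaces instead.
pynvml.nvmlShutdown()
```

So hotspot was never part of the public NVML surface to begin with; monitoring tools got it from deeper Nvidia-controlled interfaces, and that tap is what appears to have been closed.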
We’ll find out how the 5080 is on Thursday, but I expect that the 5070 Ti should have cool temperatures.
Oh, I’m sure the lower cards will run cool and fine for average die temps. The 5090 is very much a halo product with that ridiculous 600 W TBP. But as with any physical product, things decay over time or get assembled incorrectly, and that’s what hotspot temp reporting helps diagnose.
Isn’t a GPU that pulls 600 watts in whackjob territory?
The engineers need to get the 6090 to use 400 watts. That would be a very big PR win that does not need any marketing spin to sell.
It’s not a node shrink, just a more AI-focused architecture on the same node as the 4090. To get more performance they need more powah. I’ve seen reviews stating a ~25% increase in raw performance at the cost of ~20% more powah than the 4090.
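If those rough numbers are in the right ballpark, the gen-on-gen efficiency gain is basically a rounding error. Back-of-the-envelope check (the 25%/20% figures are just the reviewer estimates quoted above, nothing more precise):

```python
# Back-of-the-envelope perf-per-watt change from the rough figures above:
# ~25% more raw performance for ~20% more board power than the 4090.
perf_gain = 1.25
power_gain = 1.20
efficiency_gain = perf_gain / power_gain - 1
print(f"perf/watt improvement: {efficiency_gain:.1%}")  # ~4.2%
```

Which tracks with it being the same node: almost all of the extra performance is being bought with extra powah.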
A little dramatic but okay.
Is it though?
The hotspot temp sensors are one of the most critical diagnostic sensors an end user can have. When the thermal interface material begins to degrade (or leak out of the rubber gasket, in the case of the 5090’s liquid metal), your package temp may only go up a few degrees C, but your hotspot may increase by 10-20 C or more. That indicates problems, and it’s almost definitely one of the leading causes of dead and crashing GPUs; it’s also the easiest failure mode to detect and fix (rough monitoring sketch below).
Removing this quite literally has zero engineering reason beyond
- hiding from reviewers the fact that the 5090 pulls too much power and runs too hot for a healthy lifespan, even with liquid metal and the special cooler
- fucking over the consumer so they can no longer diagnose their own hardware
- ensuring more 5090s die rapidly, via lack of critical monitoring, so that the Nvidia funny number can keep going up from people re-buying GPUs that cost more than some used cars every 2 years.
The sensors are still definitely there. They have to be for thermal management or else these things will turn into fireworks. They’re just being hidden from the user at a hardware level.
This isn’t even counting the fact that hotspot reporting also usually includes sensors inside the VRMs and memory chips, which are even more sensitive to a bad TIM application and to running excessively warm for long periods of time.
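To make that concrete: the useful signal isn’t the hotspot number by itself, it’s the gap between hotspot and the reported package/edge temp. Here’s a rough sketch of the check, where read_edge_c() and read_hotspot_c() are hypothetical stand-ins for whatever your monitoring tool exposes (on older cards you could get both; on the 5090 the second reading is exactly what’s gone), and the thresholds are my own ballpark assumptions, not Nvidia spec:

```python
# Rough sketch of the hotspot-vs-edge delta check described above.
# read_edge_c() and read_hotspot_c() are hypothetical stand-ins for whatever
# a monitoring tool exposes -- they are NOT a real Nvidia API.

HEALTHY_DELTA_C = 15   # assumption: a healthy card under load stays below this
BAD_DELTA_C = 25       # assumption: deltas this big usually mean degraded/pumped-out TIM

def check_tim_health(read_edge_c, read_hotspot_c) -> str:
    edge = read_edge_c()
    hotspot = read_hotspot_c()
    delta = hotspot - edge
    if delta >= BAD_DELTA_C:
        return f"BAD: hotspot is {delta} C above edge -- repaste/RMA territory"
    if delta >= HEALTHY_DELTA_C:
        return f"WATCH: {delta} C delta -- keep an eye on it under sustained load"
    return f"OK: {delta} C delta looks normal"

# Example with made-up readings: edge barely moved, hotspot ran away.
print(check_tim_health(lambda: 72, lambda: 99))  # -> BAD: hotspot is 27 C above edge ...
```

That’s the whole diagnostic: two numbers and a subtraction. Hiding one of them doesn’t make the failure mode go away, it just makes it invisible until the card starts crashing.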
It looks bad with the insane TDP they run at now. They could cut 33% of it and probably only lose like 5-10% perf depending on the SKU, maybe even less. You can already approximate that yourself by dropping the power limit (sketch below).
It also looks a lot like planned obsolescence.
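You don’t actually have to wait for Nvidia’s engineers to do it, either; the driver already lets you cap board power yourself and eat whatever small perf hit your SKU takes. A minimal sketch with pynvml, assuming you run it with admin/root rights, and treating the 450 W target as an arbitrary example rather than a recommendation:

```python
# Minimal sketch: cap the board power limit via NVML (needs root/admin).
# The 450 W target below is an arbitrary example, not a recommendation.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

# Query what the card allows and what it is currently set to (values in milliwatts).
min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
current_mw = pynvml.nvmlDeviceGetPowerManagementLimit(handle)
print(f"current limit: {current_mw / 1000:.0f} W "
      f"(allowed {min_mw / 1000:.0f}-{max_mw / 1000:.0f} W)")

# Clamp the example target into the allowed range, then apply it.
target_mw = max(min_mw, min(450_000, max_mw))
pynvml.nvmlDeviceSetPowerManagementLimit(handle, target_mw)

pynvml.nvmlShutdown()
```

The equivalent one-liner is nvidia-smi’s --power-limit option, which is how most power-cap guides do it.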