Fucking FINALLY doing lightfield displays.
AI’s Fast Advancement in Video Generation is Unlocking New Modalities of Displays
God dammit, that is not why. This is borderline magical technology. Don’t pair it with overhyped buzzwords for guessing at depth when literally any video game can just tell you what’s where.
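To be concrete about “the game can just tell you”: every GPU-rendered frame already has an exact per-pixel depth buffer sitting in memory. A minimal sketch with PyOpenGL and numpy, assuming an active GL context (window setup omitted) and a made-up 4K framebuffer size:

```python
import numpy as np
from OpenGL.GL import glReadPixels, GL_DEPTH_COMPONENT, GL_FLOAT

WIDTH, HEIGHT = 3840, 2160  # made-up 4K framebuffer size

def read_depth_buffer() -> np.ndarray:
    """Grab the exact per-pixel depth the renderer already computed."""
    raw = glReadPixels(0, 0, WIDTH, HEIGHT, GL_DEPTH_COMPONENT, GL_FLOAT)
    # GL returns rows bottom-to-top; flip so row 0 is the top of the frame.
    return np.flipud(np.asarray(raw, dtype=np.float32).reshape(HEIGHT, WIDTH))
```

No estimation, no guessing: the depth is a byproduct of rendering the frame at all.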
(4K, 60 Hz with 1 bit of monocular depth)
… the fuck do you mean, one bit? Is this just one panel in front of another? It is. Okay. That’s why they need the AI hand-waving: they’re faking it.
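If that read is right, the whole “depth” story reduces to one threshold per pixel. A toy numpy sketch of what routing a frame onto two stacked panels might look like (the threshold value is an arbitrary assumption):

```python
import numpy as np

def split_into_two_panels(rgb: np.ndarray, depth: np.ndarray,
                          threshold: float = 0.5):
    """Route each pixel to the near or far panel based on 1-bit depth.

    rgb:   (H, W, 3) image
    depth: (H, W) normalized depth in [0, 1], 0 = nearest
    """
    near_mask = depth < threshold                    # the single bit per pixel
    front = np.where(near_mask[..., None], rgb, 0)   # near content
    rear = np.where(near_mask[..., None], 0, rgb)    # far content
    return front, rear
```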
People, it’s been an entire decade since Nvidia slapped a lenticular array onto some Sharper Image headset and proved that all you need are more pixels and more rendering power. Suffice it to say: we have that. Why the fuck hasn’t anybody done it at scale? Nvidia included?
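For anyone who hasn’t seen the trick: a lenticular array just makes each pixel column under a lenslet visible from one direction, so an N-view display costs N× the horizontal resolution and N× the rendering, nothing else. A rough numpy sketch of the interleaving, assuming straight (unslanted) lenslets:

```python
import numpy as np

def interleave_views(views: np.ndarray) -> np.ndarray:
    """Interleave N views column-by-column for a straight lenticular panel.

    views: (N, H, W, 3) array -- N viewpoints, each rendered at the
    panel's full height but 1/N of its horizontal resolution.
    Returns an (H, N*W, 3) panel image where column c shows view c % N.
    """
    n, h, w, _ = views.shape
    panel = np.zeros((h, n * w, 3), dtype=views.dtype)
    for i in range(n):
        # Every n-th panel column, starting at offset i, carries view i.
        panel[:, i::n, :] = views[i]
    return panel
```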
(Removed, I was dumb)
Is this a 3D monitor? Is it just a depth-of-field thing? I have no idea what’s special about it.
It looks like a really different idea from normal monitors: multiple screens are layered so that your eyes are tricked into seeing depth. Kind of like 3D TV technologies, but different. I don’t know if it will catch on, but I’d like to see it in person.
I don’t think I understand this well. It’s not some 3D thing, right? It just adds blur in the right spots given a depth field in the input? Why do it in the monitor and not directly on the computer’s GPU?
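If that guess is right, the core operation is plain depth-dependent blur. A crude sketch using scipy’s gaussian_filter; the focal depth, blur scale, and level count are all made-up parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def defocus(rgb: np.ndarray, depth: np.ndarray, focal_depth: float = 0.4,
            max_sigma: float = 8.0, levels: int = 4) -> np.ndarray:
    """Blur each pixel more the farther its depth is from the focal plane."""
    # Pre-compute a small stack of increasingly blurred copies of the frame.
    stack = [gaussian_filter(rgb, sigma=(s, s, 0))
             for s in np.linspace(0.0, max_sigma, levels)]
    # Map |depth - focal_depth| to an index into the blur stack.
    span = max(focal_depth, 1.0 - focal_depth)
    idx = np.clip(np.abs(depth - focal_depth) / span * (levels - 1),
                  0, levels - 1).astype(int)
    out = np.empty_like(rgb)
    for i in range(levels):
        mask = idx == i
        out[mask] = stack[i][mask]
    return out
```

Which is a fair point about the GPU: something this simple could run per-frame in a shader, so doing it in the monitor only makes sense if the panel itself contributes something optical.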
If it’s 3D somehow, brilliant!
From the video in the article, it looks like there are multiple layered screens. It’s not clear how it determines what you’re focusing on, though.