NVIDIA dropped DLSS 5 this week and the reception has been something. 84% dislike ratio on the official reveal video. That's not a rough launch. That's a community actively rejecting what you just showed them.
The controversy isn't really about performance. It's about what's actually happening to the image on your screen.
DLSS has always used AI to reconstruct lower resolution frames into sharper ones. That part people were fine with. DLSS 4.5 was genuinely well regarded and most enthusiasts had made their peace with smart reconstruction. DLSS 5 is something different. Instead of reconstructing what the game engine actually rendered, the AI is now generating pixels based on what it thinks should be there. The game engine hands off partial data and the model fills in the rest from its own imagination.
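The distinction can be sketched in toy form. This is not NVIDIA's actual pipeline, just a hypothetical illustration of the conceptual difference: reconstruction upscales from pixels the engine actually rendered, while generation fills in pixels the engine never produced using the model's learned prior (crudely stood in for here by a constant guess).

```python
import numpy as np

def reconstruct(low_res):
    """Reconstruction (the old DLSS idea): upscale by reusing the
    engine's actual pixels (nearest-neighbor here for simplicity).
    Every output value traces back to something the game rendered."""
    return low_res.repeat(2, axis=0).repeat(2, axis=1)

def generate(partial, prior_value=0.5):
    """Generation (the DLSS 5 idea, caricatured): the engine hands off
    partial data (NaNs mark pixels it never rendered) and the model
    fills the gaps from its own prior -- its 'imagination'."""
    out = partial.copy()
    out[np.isnan(out)] = prior_value  # the model's guess, not the engine's output
    return out

low = np.array([[0.2, 0.8],
                [0.4, 0.6]])
rec = reconstruct(low)  # 4x4 image; every value came from `low`

sparse = np.array([[0.2, np.nan],
                   [np.nan, 0.6]])
gen = generate(sparse)  # missing pixels replaced by the prior's guess
```

The toy makes the complaint concrete: in the first function the output is fully determined by what the engine drew, while in the second the model is free to put something plausible but wrong where the artist's pixels used to be.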
In the Cyberpunk 2077 demo, viewers started noticing what they're calling neural artifacts. Brick textures shifting pattern mid-scene. Sign text turning into illegible nonsense. Subtle changes to character faces between frames. Things that shouldn't be moving or changing, changing anyway.
That's the word people keep landing on. Hallucination. And it's hard to argue against it.
The CEO was asked about this directly during a Q&A on April 16 and his response did not help things. He told the audience they were "completely wrong" and compared the shift to neural rendering to the transition from 2D to 3D graphics. He said the human eye can't tell the difference between a handcrafted pixel and a neural pixel at 144Hz and that rejecting this is "rejecting the future of the medium."
I understand what he's trying to say. I also think telling your core audience they're wrong, en masse, on a livestream, is maybe not the ideal communications strategy.
Here's what actually bothers people, I think. It's not just the artifacts. It's the question of what you're even playing when DLSS 5 is on. The developer spent time on those brick textures. An artist made decisions about that character's face. If the AI is generating its own version of those things frame by frame, you're not really seeing what was made for you. You're seeing the model's interpolated guess at it.
That might sound pretentious. But games have always had this conversation about artistic intent and compression and fidelity and what counts as "the real thing." DLSS 5 just pushed it somewhere nobody expected it to go this fast.
The hardware angle makes this worse. DLSS 5's neural rendering pipeline only runs on RTX 50 series cards. Blackwell and above. So if you're on a 4090 you are locked out entirely regardless of how you feel about it. DLSS 4.5 with smart reconstruction works on RTX 20 series and up. The jump in requirements is steep and the timing, with the RTX 5060 Ti apparently in a production pause right now, makes it feel like a very aggressive push toward forced upgrades dressed up as innovation.
There's a broader fear underneath all of this too. If AI can hit your performance targets well enough, why would NVIDIA bother optimizing silicon for raw rasterization anymore? Why invest in brute-force rendering when the model can paper over the gaps? It's a reasonable concern and I don't think NVIDIA has given anyone a satisfying answer to it yet.
The Windows 12 AI Pro stuff landing the same week, with all its NPU requirements, makes the whole thing feel like a coordinated push toward an AI-first ecosystem that nobody really asked for at this pace. Maybe it all makes sense in three years. Maybe neural rendering becomes the norm and we look back at this the same way people complained about anti-aliasing being fake. Maybe...