Without Frame Generation, the gen-on-gen performance increase is relatively modest by historical GPU standards: just a 20% improvement between a 4070 and a 5070.
It was rather obvious Nvidia was fudging performance results via frame gen. Now let's see those numbers verified independently, and we'll probably land at a typical generation-to-generation improvement. I wouldn't be surprised if the neural shaders demo was done by an actual demoscene coder, because it looked like something someone could pull off in a 64k demo competition, not like something a developer could do at scale without a horde of artists.
I hope that things like input lag, frame persistence, and frame time consistency become metrics we use to judge GPUs as soon as possible, because the current ones are no longer representative of hardware performance.
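Something like the rough sketch below (illustrative Python with made-up frame times, not any standard benchmark tool) shows why metrics beyond the average matter: the same capture can report a decent average FPS while the 1% lows and frame time variance reveal the stutter.

```python
# Rough sketch: computing frame-pacing metrics from a capture of per-frame
# render times instead of quoting average FPS alone. The data is invented.

import statistics

# Hypothetical capture: mostly ~60 FPS frames with a handful of 50 ms stutters.
frame_times_ms = [16.7] * 95 + [50.0] * 5

# Average FPS looks fine on its own.
avg_fps = 1000 / statistics.mean(frame_times_ms)

# "1% low" style metric: FPS implied by the slowest 1% of frames.
worst = sorted(frame_times_ms, reverse=True)
one_percent = worst[: max(1, len(worst) // 100)]
one_percent_low_fps = 1000 / statistics.mean(one_percent)

# Frame time consistency: standard deviation of frame times (lower = smoother).
pacing_stddev_ms = statistics.stdev(frame_times_ms)

print(f"average FPS:   {avg_fps:.1f}")
print(f"1% low FPS:    {one_percent_low_fps:.1f}")
print(f"pacing stddev: {pacing_stddev_ms:.1f} ms")
```

With this made-up data the average comes out around 54 FPS while the 1% lows sit at 20 FPS, which is the kind of gap a single FPS number hides.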
"Nvidia is faking their boosted performance by using technology that boosts performance!"
I seriously do not get the complaints about DLSS. Does it make games look better with more frames? Oh it does? And only people doing a frame-by-frame analysis to find things to complain about can tell any difference? Ok, so what's the issue? Seems like people are only complaining because it uses the scary AI words and they absolutely refuse to acknowledge that AI could be useful in any way, shape, or form.
I’m not complaining about AI upscaling or image quality; I’m complaining that real performance gets lost when you focus only on FPS the way Nvidia presents it.
Modern frame gen has a miraculously small latency cost, but what’s important is that it doesn’t fix latency or inconsistent frame pacing at all. You’ll smooth out frame pacing issues at a high enough FPS, which is nice, but that’s it. When you’re at too low a frame rate, you won’t fix latency by adding more AI-interpolated frames (because they don’t include your inputs) but by churning out real frames faster (upscaled frames still count as real in this context). Once your base frame rate is at a good enough level, frame gen sees diminishing returns, and then you wonder what the point was in the first place. See Immortals of Aveum on Xbox Series X as an example.
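To make the latency point concrete, here is a deliberately simplified back-of-the-envelope model (my own assumption, not how any specific frame-gen implementation is documented to behave): interpolation multiplies the displayed frame rate, but new input can only land on rendered frames, and holding a rendered frame back for interpolation adds delay.

```python
# Simplified, assumed model of interpolation-based frame generation:
# displayed FPS goes up, but input latency is still tied to the base
# (rendered) frame rate, plus one extra rendered frame held back so the
# interpolator has a "next" frame to blend toward.

def latency_estimate_ms(base_fps: float, generated_per_rendered: int) -> dict:
    rendered_frame_ms = 1000 / base_fps
    displayed_fps = base_fps * (generated_per_rendered + 1)
    # One rendered-frame interval of input latency, doubled when interpolating
    # because the following rendered frame must exist before display.
    input_latency_ms = rendered_frame_ms * (2 if generated_per_rendered > 0 else 1)
    return {
        "displayed_fps": displayed_fps,
        "approx_input_latency_ms": round(input_latency_ms, 1),
    }

# Low base frame rate: the FPS counter looks better, latency gets worse.
print(latency_estimate_ms(base_fps=30, generated_per_rendered=1))
# High base frame rate: latency is already low, so the added smoothness
# matters less and the gains diminish.
print(latency_estimate_ms(base_fps=120, generated_per_rendered=1))
```

In this toy model a 30 FPS base becomes "60 FPS" on the counter while the estimated input latency roughly doubles to ~67 ms, which is the mismatch the comment above is pointing at.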