this post was submitted on 15 Jan 2025
135 points (97.2% liked)

[–] [email protected] 63 points 2 days ago (5 children)

This may be a hot take downvoted to oblivion, but I think DLSS and all similar AI-dependent frame generation type stuff is a band-aid on a problem that won't (or shouldn't) exist for long, in the grand scheme of things.

If you have performance improvements, you ultimately don't need such things once that performance reaches an acceptable level.

So two things may be happening:

  1. Performance improvements are not possible anymore. That seems false, because we still see them. Costs are high, but they're there.

  2. Things like DLSS allow corps to give you less performance while still maintaining an illusion of a good experience. It reduces hardware costs, which the corpos ultimately just pocket.

I lean strongly towards 2 at the moment. Notice how nvidia also continues to push DLSS as an exclusive feature -- notably different from FSR in that regard, which is openly pitched as tech for getting better framerates out of lower-end hardware.

For nvidia, it's a selling point, and it allows them to sell you less hardware with fewer actual improvements. It is the same snake that just wants you to (eventually) stream games instead of processing them locally, because it enhances corporate control.

[–] [email protected] 27 points 2 days ago

It really is an excuse for games to have crappy performance all too often.

[–] MooseTheDog 2 points 1 day ago* (last edited 1 day ago)

A lot of games released today 'feel' like games that came out 20 years ago; there are exceptions, but I'm including them too. Most of the growth came in graphics and visuals, and people would buy almost any game if it looked cool. Now we're on the diminishing-returns side of things, and investors are trying to maintain the charade. They're pushing out half-baked products and selling out, leaving old heads and new heads alike holding the bag of expensive but useless products. Think SLI on steroids.

[–] [email protected] 12 points 2 days ago (1 children)

DLSS isn't just frame generation; the SS in the name stands for Super Sampling. It's the best solution we have to graphical issues like subpixel shimmering and moire effects, which are especially prevalent in modern titles due to contemporary graphical effects and expectations, and it might be quite a while before we invent something better.

[–] [email protected] 22 points 2 days ago (2 children)

And the other side of that coin is: personally I'll happily accept shimmering and moire effects if it means I don't lock myself into yet one more corporate ecosystem.

FSR also combats those things, but can run on any GPU.

[–] [email protected] 10 points 2 days ago

I agree, I was referring to ML supersampling and antialiasing in general.

[–] GreyCat 1 points 1 day ago

There is a way for performance improvements and DLSS-type technologies to both exist. We are not at the pinnacle of graphical quality. Studios will not suddenly stop putting better graphics in games just because new cards offer higher performance, and simply allot that capability to more frames instead.

Even with better, newer GPUs, DLSS-type stuff will always allow you to get more performance out of your rig. I am not even talking about frame generation, since I have not tested that yet -- just the upscaling.

[–] MooseTheDog 2 points 1 day ago

AI is like a sledgehammer to this walnut of a problem. It's supposed to sound badass or something, but in tech parlance it's literal insanity. Nothing about computers should be about endlessly repeating things and hoping for better results; that's the opposite of technology. DLSS sucks anyway -- who's it for? Content creators have to deal with encoding, which wipes any of that detail out, and only paid youtubers seem to mention it in passing.

[–] [email protected] 13 points 2 days ago (1 children)

Where is the model file stored once the offline (non-production) training is completed? Do we all download it with the driver update? If so, and I don't use DLSS, how can I remove that gigantic model checkpoint?

[–] brucethemoose 13 points 2 days ago (2 children)

It's probably not big if it's included in the driver download and runs in real time so quickly. Not big enough to worry about, anyway.

[–] [email protected] 24 points 2 days ago

DLSS 2.0 is "temporal anti-aliasing on steroids". TAA works by jiggling the camera a tiny amount, less than a pixel, every frame. If nothing on screen is moving and the camera's not moving, then you could blend the last dozen or so frames together, and it would appear to have high resolution and smooth edges without doing any extra work. If the camera moves, then you can blend from "where the camera used to be pointing" and get most of the same benefits. If objects in the scene are moving, then you can use the information on "where things used to be" (it's a graphics engine, we know where things used to be) and blend the same way. If everything's moving quickly then it doesn't work, but in that case you won't notice a few rough edges anyway. Good quality and basically "free" (you were rendering the old frames anyway), especially compared to other ways of doing anti-aliasing.
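A minimal sketch of that blending step, if it helps make it concrete -- nothing engine-specific here; the function name, array layout, and nearest-neighbour reprojection are all simplifying assumptions on my part:

```python
import numpy as np

def taa_resolve(history, current, motion, blend=0.1):
    """One simplified temporal anti-aliasing resolve step.

    history : (H, W, 3) float array -- the accumulated result so far
    current : (H, W, 3) float array -- this frame's jittered render
    motion  : (H, W, 2) float array -- per-pixel motion vectors in pixels,
              i.e. "where things used to be", supplied by the engine
    blend   : weight of the new frame; smaller values keep more history
    """
    h, w, _ = current.shape
    ys, xs = np.mgrid[0:h, 0:w]

    # Reproject: look up where each pixel was last frame (nearest-neighbour).
    prev_x = np.clip(np.rint(xs - motion[..., 0]).astype(int), 0, w - 1)
    prev_y = np.clip(np.rint(ys - motion[..., 1]).astype(int), 0, h - 1)
    reprojected = history[prev_y, prev_x]

    # Exponential blend: the "last dozen or so frames" accumulate in history.
    return (1.0 - blend) * reprojected + blend * current
```

In a real engine the sub-pixel jitter is applied before rendering (by nudging the projection matrix each frame) and the reprojection uses filtered sampling rather than nearest-neighbour, but the accumulation idea is the same.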

Nvidia have a honking big supercomputer that renders "perfect very-high resolution frames", and then tries out untold billions of different possibilities for "the perfect camera jiggle", "the perfect amount of blending", "the perfect motion reconstruction" to get the correct result out of lower-quality frames. It's not just an upscaler, it has a lot of extra information - historic and screen geometry - to work from, and can sometimes generate more accurate renders than rendering at native resolution would do. Getting the information on what the optimal settings are is absolute shitloads of work, but the output is pretty tiny - several thousand matrix operations - which is why it's cheap enough to apply on every frame. So yeah, not big enough to worry about.
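To make that offline/online split concrete, here's a rough PyTorch-flavoured sketch of the idea. The network, its inputs, and every size in it are invented for illustration only -- the actual DLSS model and its training data are proprietary:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A deliberately tiny stand-in for the upscaling network. Layer sizes, inputs,
# and the 2x scale factor are all made up for illustration.
class TinyUpscaler(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 2 + 3, 32, 3, padding=1),  # low-res colour + motion vectors + history
            nn.ReLU(),
            nn.Conv2d(32, 3 * 4, 3, padding=1),      # 4 output sub-pixels per input pixel
            nn.PixelShuffle(2),                      # rearrange into a 2x larger image
        )

    def forward(self, low_res, motion, history):
        return self.net(torch.cat([low_res, motion, history], dim=1))

model = TinyUpscaler()

# Offline, on the "honking big supercomputer": minimise the difference to the
# perfect high-resolution reference render, over enormous amounts of footage.
optimiser = torch.optim.Adam(model.parameters())
low       = torch.rand(1, 3, 270, 480)   # low-resolution jittered frame
motion    = torch.rand(1, 2, 270, 480)   # motion vectors from the engine
history   = torch.rand(1, 3, 270, 480)   # previous accumulated output
reference = torch.rand(1, 3, 540, 960)   # the "perfect" high-resolution render

optimiser.zero_grad()
loss = F.l1_loss(model(low, motion, history), reference)
loss.backward()
optimiser.step()

# What ships to players is only the trained weights -- a small stack of matrix
# kernels -- applied once per frame, which is why the per-frame cost stays low.
with torch.no_grad():
    upscaled = model(low, motion, history)  # shape (1, 3, 540, 960)
```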

There's a big fraction of AAA games that use Unreal engine and aim for photorealism, so if you've trained it up on that, boom, you're done in most cases. Indie games with indie game engines tend not to be so demanding, and so don't need DLSS, so you don't need to tune it up for them.

[–] paraphrand 6 points 2 days ago* (last edited 2 days ago) (1 children)

Isn't it also tuned for each game individually? So there would be different iterations for every supported game.

I swear that when it came out, they said devs would have to submit their games for training.

Reading the “article”, it doesn't seem like that’s actually the case. It sounds more generic.

[–] [email protected] 11 points 2 days ago

DLSS 1.0 is per game; 2.0+ is generic, but can still be tuned per game.

[–] [email protected] 8 points 2 days ago

For anyone looking for an ELI5 on CNN vs ViT.

[–] [email protected] 3 points 2 days ago

Imagine if and when an entire model's weights are lost?

Imagine you have a personal AI that you've been training for years, and it's learning off you, and there's a backup failure? It might be like losing a pet...