The requirements are for 7-year-old hardware. While not everyone upgrades their PC every 7 years, I don’t think it’s unreasonable to stop supporting 7-year-old hardware. Apple requires an iPhone XS (6 years old) for iOS 18, Google requires a Pixel 6 (3 years old) for Android 15, and macOS Sequoia requires laptops no more than 6 years old. Turns out Microsoft is the one giving the most support.
Now this is all extremely rough math, but the basic takeaway is that frame gen, even this faster and higher quality frame gen, which doesn't introduce input lag in the way DLSS or FSR does, is only worth it if it can generate a frame faster than you could otherwise fully render it normally.
The point of this method is that it takes fewer computations than going through the whole rendering pipeline, so it will always be able to produce a frame faster than performing all the calculations, unless we’re at extreme cases like very low resolution, very high FPS, or a very slow GPU.
I.e., if your rig is running 1080p at 240 FPS, 1440p at 120 FPS, or 4K at 60 FPS natively... this frame gen would be pointless.
Although you did mention these are only rough estimates, it is worth saying that these numbers are only relevant to this specific test and this specific GPU (RTX 4070 Ti). Remember that the time to run a model depends on GPU performance, so a faster GPU will be able to run this model faster. I doubt you will ever run into a situation where you can go through the whole rendering pipeline before this model finishes running, except for the cases I listed above.
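To put rough numbers on the break-even point, here’s a quick back-of-the-envelope sketch; the model times are made-up placeholders for illustration, not measurements from the paper:

    # Rough break-even check: extrapolating a frame only pays off when the
    # model runs faster than the renderer could produce a real frame.
    # The model_ms values are hypothetical placeholders, not numbers from
    # the paper; measure them on your own GPU.

    def frame_budget_ms(native_fps: float) -> float:
        """Time the renderer spends per frame at a given native frame rate."""
        return 1000.0 / native_fps

    scenarios = {
        "1080p @ 240 fps": (240, 3.0),  # (native fps, assumed model time in ms)
        "1440p @ 120 fps": (120, 5.0),
        "4K @ 60 fps": (60, 9.0),
    }

    for name, (fps, model_ms) in scenarios.items():
        budget = frame_budget_ms(fps)
        verdict = "worth it" if model_ms < budget else "pointless"
        print(f"{name}: render budget {budget:.1f} ms, model {model_ms:.1f} ms -> {verdict}")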
I... guess if this could actually somehow be implemented at a driver level, as an upgrade to existing hardware, that would be good
It can. This method only needs access to the frames, which can easily be accessed by the OS.
But ... this is GPU tech.
This can run on whatever you want that can do math (CPU, NPU, GPU); they simply chose a GPU. That said, CPUs are well known to be much slower than GPUs at running models, so running this on a CPU would be impractical.
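For what it’s worth, nothing ties a model like this to one kind of processor; in PyTorch the same code runs on CPU or GPU with only the device string changing. A minimal sketch (the single conv layer is a stand-in, not the paper’s network):

    import torch
    import torch.nn as nn

    # Pick whatever compute device is available; the code is identical either way.
    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Stand-in for a real frame-extrapolation network.
    model = nn.Conv2d(in_channels=3, out_channels=3, kernel_size=3, padding=1).to(device)
    fake_frame = torch.rand(1, 3, 720, 1280, device=device)  # one 720p RGB frame

    with torch.no_grad():
        out = model(fake_frame)
    print(f"Ran on {device}, output shape {tuple(out.shape)}")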
And is apparently proprietary to Intel... so it could only be rolled out on existing or new Intel GPUs (until or unless someone reverse engineers it for other GPUs) which basically everyone would have to buy new, as Intel only just started making GPUs.
Where did you get this information? This is a publicly available academic paper. You are not only allowed, but encouraged, to reproduce and iterate on the method described in the paper. Also, the experiment didn’t even use Intel hardware; it was an NVIDIA GPU and an AMD CPU.
Second page of the paper explains the shortcomings of warping and hole filling.
TRULY modern OS
What does this even mean? So iOS, macOS, Windows 11, and Linux aren’t modern?
a way better compositor than wayland (in fact, android has the best compositor in the world, compared to ANY OS)
Wayland is not a compositor, it’s a protocol. SurfaceFlinger could totally be implemented as a Wayland compositor. Saying SurfaceFlinger is better than Wayland is like saying words are better than English.
A properly modified desktop OS based on it (better than Samsung's DeX for example), that is also able to run normal Linux apps, would be a huge winner.
Nobody will ever use this on Linux unless it is implemented as a Wayland compositor. It is infinitely more likely for Android to rewrite its compositor for Wayland than for SurfaceFlinger to be adopted as Linux’s main compositor.
Do you really talk to your friends through email?
But what does that actually mean? When he actually went through the proper channels for his position? The Department of Defense is a VERY wide organisation, and allegedly he did just that.
It means exactly that. File a complaint with the DOD or IC IG’s office.
Claiming he is not a whistleblower because a VERY specific procedure needs to be followed is just a legal cop-out. It's an ambiguous law that can be used to bury shit indefinitely, and bent to be applied as they wish if people go public.
Here you are saying the same wrong information for the third time. He does not qualify as a whistleblower because he publicly leaked classified information; there’s nothing ambiguous about that.
Snowden: I raised NSA concerns internally over 10 times…
…Snowden said. "The NSA has records. They have copies of emails right now to their Office of General Counsel, to their oversight and compliance folks, from me raising concerns about the NSA's interpretations of its legal authorities."
Right. He was able to copy millions of classified documents, but forgot to get copies of the very emails that prove he raised concerns through the proper channels. This is the only email there’s a record of, and it wasn’t even submitted by Snowden, but by the NSA.
the proposal included a 5 year ban on any browsers
I guess 5 years is not too bad, but the judge would probably never agree to the ban.
Did you submit a bug report?
Given that even Mozilla (which has significantly fewer resources than Google) had the ability to create a second web engine and then abandon it, it would be naive to think that Google doesn’t already have at least 2 or 3 teams quietly working on different browsers or engines.
Unless they’re prohibited from creating another web browser ever again (which would most likely be a bad idea), they can probably come up with a working browser in less than two years.
Well, you would need a “history” of frames, so 1 wouldn’t be enough. Anyway, that’s fully possible, but then you would be generating garbage.
That’s correct, nobody said otherwise. This is meant to increase frame rate, so you need a source of frames to increase. Regular frames are still rendered as fast as the GPU can produce them.
Because that’s implementation specific. As specified in the paper, once you have a history of frames, you can use the latest frame t_n to generate up to frame t_(n+i), where i is how many frames you want to generate. The higher i is, the higher the frame rate, but also the more likely it is to be garbage.
I didn’t watch the video, but that’s completely possible. After you have a couple of frames rendered, you can start alternating between a real frame and a generated one with this method. So you can’t have 60 fps at the very beginning, but you can after a few frames.
The only difference between watching a movie and playing a video game is that the movie isn’t polling your input. This framework only cares about the previously rendered frames, and from a technical standpoint, they’re both just a bunch of pixels.
Yes, that’s because it’s the implementer’s choice. I don’t know if they say what ratio they used, but it doesn’t matter, because you don’t have to use their ratio. Anyone can implement this however they want and tune for quality/performance.
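A toy sketch of what that presentation loop could look like (the history length, the real-to-generated ratio, and the function names are all my assumptions, not taken from the paper):

    from collections import deque

    HISTORY_LEN = 4   # how many past frames the model sees (assumed)
    GEN_PER_REAL = 1  # extrapolated frames inserted per real frame (tunable)

    def render_frame(n):
        return f"real[{n}]"  # placeholder for the full render pipeline

    def extrapolate(history, i):
        # placeholder for the model: predict frame t_(n+i) from the history
        return f"gen[{history[-1]}+{i}]"

    history = deque(maxlen=HISTORY_LEN)
    presented = []

    for n in range(8):
        frame = render_frame(n)
        history.append(frame)
        presented.append(frame)
        # only start generating once there is an actual history to work from
        if len(history) == HISTORY_LEN:
            for i in range(1, GEN_PER_REAL + 1):
                presented.append(extrapolate(history, i))

    print(presented)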
Yes, that’s something they seem to have missed. It would have been nice to see how it compares to actual rendering.
Yes, that’s true for now. But remember that Windows started a trend with Copilot where manufacturers are now encouraged to include NPUs in their CPUs. Every modern laptop (M series, Qualcomm, latest Intel/AMD) now includes an NPU (underpowered for now, but these are first-generation devices, so they will inevitably get better), so in the near future this could run on the NPU that comes in almost all computers. Once NPUs are more common, this could easily become a driver.
Yes, Intel will not give out the source code, but that’s not needed to recreate this experiment. Corporate-funded academic research can be proprietary, but once it is published, anyone is free to use that knowledge. The whole point of academic journals is to share knowledge; if you wanted to keep it private, you simply wouldn’t publish it.
Yes, because the method is all you need to recreate this. Intel is a for-profit company, so they might keep their own implementation to themselves. Pages 4-7 tell you exactly what you need to do to replicate this in detail; they even give the formulas they used where needed. Remember this is meant to be a general, modular framework that can be tuned depending on your goals, so the method needs to reflect that generality to allow for experimentation.
They might publish it in the future, they might not; but if they don’t, nothing is lost, and they get a head start on implementing research that they paid for.
This patent is about the hardware configuration of a system designed to run such a model in a way that Intel considers optimal. So I guess they’re considering designing SoCs specialized for this kind of thing (maybe for handhelds?). But it is not related to the paper, since it doesn’t affect your ability to train and run this model on your RTX like they did in the paper.
This one is trickier, but it also does not affect your ability to implement your own model. What they are doing here is akin to a real-time kernel operation, but for graphics. You set a maximum time for a frame to be rendered (ideally tied to the monitor’s refresh rate); if the algorithm decides that the GPU won’t meet that deadline, you generate the frame and discard whatever the GPU was doing. It’s basically a guarantee to meet the display update frequency (or proper v-sync). Also, they aren’t likely to get this one, because they’re trying to patent the logic: if time1 is less than tmax, pick option one; else pick option two.
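As I read it, the claimed logic boils down to something like this (names and numbers are illustrative, not from the patent or the paper):

    REFRESH_HZ = 120
    T_MAX_MS = 1000.0 / REFRESH_HZ  # per-frame deadline tied to the refresh rate

    def choose_frame(predicted_render_ms):
        # If the GPU is predicted to make the deadline, wait for the real frame;
        # otherwise present an extrapolated frame and discard the in-flight render.
        if predicted_render_ms < T_MAX_MS:
            return "present the real frame"        # option one
        return "present an extrapolated frame"     # option two

    print(choose_frame(6.5))   # under budget -> real frame
    print(choose_frame(11.0))  # over budget  -> generated frame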
These patents do not affect the paper in any way, since they do not cover what is needed for this method (an RTX 4070 Ti, a Ryzen 9 5900X, PyTorch, TensorRT, and NVIDIA Falcor) or their alternatives.