This exists. In fact, Intel GPUs do not actually support HDMI natively; they convert from DisplayPort on the GPU.
The material itself cannot be magnetized, which is what they mean by not “being magnetic”. However, these do have spin currents that can be measured and controlled, so they sustain “magnetic activity”. Kind of like a hybrid between ferromagnetic and antiferromagnetic.
Good. That means you can buy them for cheap in a few years from eBay.
Hardware hasn't changed in the way you think it has for quite a while. For shits I spun up a compatibility check on my fifteen-year-old file server and it qualifies for W10.
Your 15-year-old system is Windows 10 compatible because Windows 10 was released in 2015, meaning it was effectively a 5-year-old system compared to Windows 10. You can run Windows 11 with any device from 2019.
The big wank issue with Win10/11 is Microsoft trying to enforce corporate hardware requirements on home users. Mostly so they can start trying to walled-garden their shit.
You keep saying this line about corporate hardware. What’s “corporate” about TPM 2.0?
It is not the same. The government won’t even allow you to drive a car without a seatbelt if you somehow managed to buy one. Anyway, most cars only give you support for about 5 years. A car is worth tens of thousands of dollars, yet you get less support than a computer, and nobody is complaining about that. Just like you can keep using your car with its 10-year-old software, you can keep using your 10-year-old computer with your old OS (Windows 10). This is a very simple problem.
I mean, in an academic sense, if you possess the ability to implement the method, sure you can make your own code and do this yourself on whatever hardware you want, train your own models, etc.
But from a practical standpoint of an average computer hardware user, no, I don't think you can just use this method on any hardware you want with ease; you'll be reliant on official drivers, which just do not support / are not officially released for a whole ton of hardware.
Not many average users are going to have the time or skillset required to write their own implementations, then train and tweak the AI models for every different game at every different resolution for whichever GPUs / NPUs etc. the way massive corporations do.
It'll be a ready-to-go feature of various GPUs and NPUs and SoCs and whatever, designed and manufactured by Intel and reliant on drivers released by Intel, unless a giant Proton-style open source project happens, with tens or hundreds or thousands of people dedicating themselves to making this method work on whatever hardware.
Yes, this was never intended for the average user, the average user doesn’t even understand what is being explained in the paper. This is for video game studios to include with their games, or driver and OS developers to implement this system wide. The user gets provided a working product as usual. How many users do you think go and play with the FSR code which is totally open source? Not many (I’m inclined to say zero).
I think at one point someone tried to do something like this, figuring out how to hackily implement DLSS on AMD GPUs, but it seems to require compiling your own DLLs, is based off some random person's implementation of DLSS, and is likely quite buggy and inefficient compared to an actual Nvidia GPU with official drivers.
I’m not aware of anybody trying DLSS on AMD, but I don’t think it will ever work. Anyway, this is precisely why this isn’t intended for the average user: even the average developer doesn’t know how to work these things. There are very few people who know what to do with the information that was provided, as is the case with most academic papers.
Which would mean that the practical upshot for an average end user is that if they're not using a GPU architecture designed with this method in mind, the method isn't going to work very well, which means this is not some kind of magic 'holy grail', universal software upgrade for all old hardware (I know you haven't said this, but others in this thread have speculated at it).
Yes, new technologies are never guaranteed to work with old hardware. That’s just how things are unfortunately.
And also the overhead of the calculation that predicts pipeline render times vs. extrapolated frame render times isn't figured into this paper, meaning that the article based on the paper is, at least to some extent, overstating this method's practical speed to the general public.
The real-time arbitration is not the focus of this paper so that’s expected. Here they describe the framework, and the patent is just a particular use case for it.
I think the disconnect we are having here is that I am coming at this from a 'how does this actually impact your average gamer' standpoint, and you are coming at it from a much more academic standpoint, inclusive of all the things that are technically correct and possible, whereas I am focusing on how that universe of technically possible things is likely to condense into a practical reality for the vast majority of non-experts.
I guess that makes sense.
What is a single word that means 'this method is a feature that is likely to only be officially, out of the box supported and used by specific Intel GPUs/NPUs etc until Nvidia and/or AMD decide to officially support it out of the box as well, and/or a comprehensive open source team dedicates themselves to maintaining easy to install drivers that add the same functionality to non officially supported hardware'?
Unfortunately that’s the case with any advanced technology, no matter how open it is. We depend on companies who are willing to pay somebody to figure it out.
TPM is required for Windows 11 because it is used for security purposes. The world is filled with things that aren’t “technically required” but they are actually required because they help prevent things. The web doesn’t “technically require” HTTPS, but modern websites require an HTTPS connection. A seatbelt isn’t “technically required” to drive a car, but you are required to wear one anyways.
I would disagree given that two of the most efficient computer chips are based on phone SOCs (Qualcomm and Apple). Anyways, the fact that your system is powerful doesn’t mean anything from a support standpoint. Supporting old hardware means you need different versions for devices with different capabilities and architectures, which is not feasible for a company that also wants to focus on new technologies. Again, out of all top operating systems, Windows is giving you the most support.
I feel this is a bit of an overstatement, otherwise you'd only render the first frame of a game level and then just use this method to extrapolate every single subsequent frame.
Well, you would need a “history” of frames, so 1 wouldn’t be enough. Anyways, that’s fully possible, but then you would be generating garbage.
Realistically, the model has to return to actually fully pipeline-rendered frames from time to time to re-reference itself, otherwise you'd quickly end up with a lot of hallucination/artefacts: kind of an AI version of a shitty video codec that morphs into nonsense when it's only generating partial new frames based on detected change from the previous frame.
That’s correct, nobody said otherwise. This is to help increase frame rate, so you need a source of frames to increase. Regular frames are still rendered as fast as the GPU can.
It's not clear at all, at least to me, from the paper alone, how frequently or under what conditions reference frames are referred back to... after watching the video as well, it seems they are running 24-second, 30 FPS scenes and functionally doubling this to 60 FPS by referring to some number of history frames to extrapolate half of the frames in the completed videos.
Because that’s implementation specific. As specified in the paper, once you have a history of frames, you can use the latest frame t_n to generate up to the t_(n+i) frame, where i is how many frames you want to generate. The higher i is, the higher the frame rate, but also the more likely it is to be garbage.
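To make that concrete, here's a minimal sketch of the idea in PyTorch. `model` stands in for a hypothetical trained extrapolation network; the paper's actual network also consumes engine data (motion vectors, depth, etc.), not just RGB frames like this does:

```python
import torch

# Hypothetical sketch: extrapolate i future frames from a buffer of past frames.
def extrapolate(model, history, i):
    """history: list of N past frames, each a (3, H, W) tensor; returns i predicted frames."""
    frames = list(history)
    generated = []
    for _ in range(i):
        # Stack the most recent N frames along the channel axis as the model input.
        x = torch.cat(frames[-len(history):], dim=0).unsqueeze(0)  # (1, 3*N, H, W)
        with torch.no_grad():
            next_frame = model(x).squeeze(0)                       # (3, H, W)
        generated.append(next_frame)
        frames.append(next_frame)  # predictions feed back in, so quality drops as i grows
    return generated
```

The last comment is the whole point of the garbage warning: the further you extrapolate past the last real frame, the more the model is predicting from its own predictions.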
So, that would be a 1:1 ratio of extrapolated frame to reference frame.
This doesn't appear to actually be working in a kind of real time, moderated tandem between real time pipeline rendering and frame extrapolation.
I didn’t watch the video, but that’s completely possible. After you have a couple of frames generated, you can start alternating between a real frame and a generated one with this method. So you can’t have 60 fps at the beginning, but you can after a few frames.
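As a rough sketch of what that loop could look like (the names `render_frame`, `extrapolate_one`, `display` and the history length are all made up for illustration, not taken from the paper):

```python
from collections import deque

HISTORY_LEN = 4  # made-up history length; the real choice is implementation specific

def present_loop(render_frame, extrapolate_one, display, num_frames):
    """Alternate real and extrapolated frames once enough history exists (a 1:1 ratio)."""
    history = deque(maxlen=HISTORY_LEN)
    for _ in range(num_frames):
        real = render_frame()                        # full pipeline render
        history.append(real)
        display(real)
        if len(history) == HISTORY_LEN:
            display(extrapolate_one(list(history)))  # one in-between extrapolated frame
```

So for the first few frames you only get native frame rate, and after the buffer fills you get one extrapolated frame per rendered frame.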
It seems to just be running already captured videos as input, and then rendering double FPS videos as output.
The only difference between watching a movie and playing a video game is that the movie isn’t polling your input. This framework only cares about the previously rendered frames, and from a technical standpoint, they’re both just a bunch of pixels.
I would love it if I missed this in the paper and you could point out to me where they describe in detail how they balance the ratio of, or conditions in which a reference frame is actually referred to... all I'm seeing is basically 'we look at the history buffer.'
Yes that’s because it is the implementer’s choice. I don’t know if they say what ratio they used, but it doesn’t matter because you don’t have to use their ratio. Anyone can implement this as they want and tune for quality/performance.
Unfortunately they don't actually list any baseline for frametimes generated through the normal rendering pipeline. It would have been nice to see that as a sort of 'control' column, where all the scores for the various 'visual difference/error from standard fully rendered frames' metrics are 0 or 100 or whatever; then we could compare some numbers on how much quality you lose for faster frames, at least on a 4070 Ti.
Yes that’s a thing they seem to have missed. Would have been nice to see how it compared to actual rendering.
Yes, this is why I said this is GPU tech. I did not figure it needed to be stated that, oh well, ok, yes, technically you can run it locally on a CPU or NPU or APU, but it's only going to actually run well on something resembling a GPU.
I was aiming at practical upshot for average computer user not comprehensive breakdown for hardware/software developers and extreme enthusiasts.
Yes, that’s true for now. But remember that Windows started a trend with Copilot where manufacturers are now encouraged to include NPUs in their CPUs. Every modern laptop chip (M series, Qualcomm, latest Intel/AMD) now includes an NPU (underpowered ones for now, but these are first-generation devices, so it will inevitably get better), so in the near future this could run on the NPU that comes in almost all computers. Once NPUs are more common, this could easily become a driver.
To be fair, when I wrote it originally, I used 'apparently' as a qualifier, indicating lack of 100% certainty.
But uh, why did I assume this?
Because most of the names on the paper list the company they are employed by, there is no freely available source code, and just generally corporate funded research is always made proprietary unless explicitly indicated otherwise.
Much research done by Universities also ends up proprietary as well.
Yes Intel will not give the source code, but that’s not needed to recreate this experiment. Corporate funded academic research can be proprietary, but if it is published to the public then anyone is free to use that knowledge. The whole point of academic journals is to share the knowledge, if you wanted to keep it private you simply don’t publish it.
This paper only describes the actual method being used for frame gen in relatively broad strokes; the meat of the paper is devoted to analyzing its comparative utility, not thoroughly discussing and outlining exact opcodes or w/e.
Yes, because the method is all you need to recreate this. Intel is a for-profit company, so they might keep their own implementation to themselves. Pages 4-7 tell you exactly what you need to do to replicate this, in detail; they even give the formulas they used where needed. Remember this is supposed to be a general and modular framework that can be tuned depending on your goals, so the method needs to reflect that generality to allow for experimentation.
Sure, you could try to implement this method based off of reading this paper, but that's a far cry from 'here's our MIT-licensed alpha driver, go nuts.'
They might publish it in the future, they might not, but if they don’t nothing is lost and they get a head start on implementing research that they paid for.
Intel filed what seem to me to be two different patent applications, almost 9 months before the paper we are discussing came out, with 2 out of 3 of the credited inventors on the patents also having their names on this paper, which are directly related to this academic publication.
This one appears to be focused on the machine learning / frame gen method, the software:
https://patents.justia.com/patent/20240311950
This patent is about hardware configuration of a system designed to run such a model in a way that Intel considers optimal. So I guess they’re considering designing SOCs specialized on these things (maybe for handhelds?). But this is not related to the paper, since this doesn’t affect your ability to train and run this model on your RTX like they did on the paper.
And this one appears to be focused on the physical design of a GPU, the hardware made to leverage the software.
https://patents.justia.com/patent/20240311951
So yeah, looks to me like Intel is certainly aiming at this being proprietary.
I suppose it's technically possible they do not actually get these patents awarded.
This one is more tricky, but it also does not affect your ability to implement your own model. What they are doing here is akin to a real-time kernel operation, but for graphics. You set a maximum time for a frame to be rendered (ideally the monitor refresh interval); if the algorithm decides that the GPU won’t meet that deadline, then you generate the frame and discard whatever the GPU was doing. It’s basically a guarantee to meet the display update frequency (or proper v-sync). Also, they aren’t likely to get this one, because they’re essentially trying to patent the logic: if time1 is less than tmax, pick option one; else pick option two.
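My reading of that arbitration logic, as a sketch only (all names are made up, and the actual hard part, estimating the render time, is left out):

```python
def next_frame(predicted_render_ms, frame_budget_ms, render, extrapolate, history):
    """Per-frame choice between real rendering and extrapolation, based on a deadline."""
    if predicted_render_ms <= frame_budget_ms:
        frame = render()              # the GPU is expected to make the deadline
    else:
        frame = extrapolate(history)  # deadline would be missed, so extrapolate instead
    history.append(frame)
    return frame
```

Which is why I say the claim boils down to a comparison and a branch.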
These patents do not affect the paper in any way, since they do not cover what is needed for this method (RTX 4070 Ti, Ryzen 9 5900X, Pytorch, TensorRT, and NVIDIA Falcor) or their alternatives.
The requirements are 7-year-old hardware. While not everyone upgrades their PC every 7 years, I don’t think it’s unreasonable to stop supporting 7-year-old hardware. Apple requires an iPhone XS (6 years old) for iOS 18, Google requires a Pixel 6 (3 years old) for Android 15, and macOS Sequoia requires 6-year-old laptops. Turns out Microsoft is the one giving the most support.
Now this is all extremely rough math, but the basic take away is that frame gen, even this faster and higher quality frame gen, which doesn't introduce input lag in the way DLSS or FSR does, is only worth it if it can generate a frame faster than you could otherwise fully render it normally.
The point of this method is that it takes fewer computations than going through the whole rendering pipeline, so it will always be able to produce a frame faster than performing all the calculations, unless we’re at extreme cases like very low resolution, very high fps, or a very slow GPU.
IE, if your rig is running 1080p at 240 FPS, 1440p at 120 FPS, or 4K at 60 FPS natively... this frame gen would be pointless.
Although you did mention these are only rough estimates, it is worth saying that these numbers are only relevant to this specific test and this specific GPU (RTX 4070 Ti). Remember that the time to run a model depends on GPU performance, so a faster GPU will run this model faster. I doubt you will ever run into a situation where you can go through the whole rendering pipeline before this model finishes running, except for the cases I listed above.
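A quick back-of-the-envelope version of that break-even point, with made-up numbers:

```python
# Rough break-even check: extrapolation only pays off when the model runs
# faster than the full rendering pipeline on that same GPU.
native_fps = 240
render_time_ms = 1000 / native_fps   # ~4.17 ms per fully rendered frame
model_time_ms = 5.0                  # hypothetical inference time for the extrapolation model

if model_time_ms < render_time_ms:
    print("extrapolated frames arrive faster than rendered ones: net FPS gain")
else:
    print("the model is slower than the pipeline: frame gen is pointless here")
```

With those invented numbers frame gen loses at 240 fps native, but a faster GPU (or a heavier render load) flips the comparison.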
I... guess if this could actually somehow be implemented at a driver level, as an upgrade to existing hardware, that would be good.
It can. This method only needs access to the frames, which can easily be accessed by the OS.
But ... this is GPU tech.
This can run on whatever you want that can do math (CPU, NPU, GPU), they simply chose a GPU. Plus it is widely known that CPUs are not as good as GPUs at running models, so it would be useless to run this on a CPU.
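In PyTorch terms it's literally just a device-selection question, something like this (a sketch; which backends are available depends on your install and hardware):

```python
import torch

# The same model runs on whichever compute device is available; it's just
# dramatically faster on a GPU (or NPU) than on a CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():   # Apple-silicon GPU backend
    device = torch.device("mps")
else:
    device = torch.device("cpu")

# model = model.to(device)  # hypothetical extrapolation model from the earlier sketch
```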
And is apparently proprietary to Intel... so it could only be rolled out on existing or new Intel GPUs (until or unless someone reverse engineers it for other GPUs), which basically everyone would have to buy new, as Intel only just started making GPUs.
Where did you get this information? This is an academic paper in the public domain. You are not only allowed, but encouraged to reproduce and iterate on the method that is described in the paper. Also, the experiment didn’t even use Intel hardware, it was NVIDIA GPU and AMD CPU.
Mining is easy, but most countries don’t want to deal with the problems that come with refining.