
Intel shares first details about ExtraSS which is its answer to NVIDIA DLSS 3 and AMD FSR 3.0

Intel has just shared its first details about ExtraSS, its answer to NVIDIA DLSS 3 Frame Generation and AMD FSR 3.0 FG. According to the blue team, ExtraSS is a novel framework that combines spatial super sampling and frame extrapolation to enhance real-time rendering performance.

Now, the big difference between ExtraSS and DLSS 3/FSR 3.0 is the frame-generation technique. DLSS 3 and FSR 3.0 use interpolation, whereas ExtraSS will be using extrapolation.

The benefit of extrapolation is that it does not increase input latency. The downside is that this technique can introduce more artifacts than interpolation.
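The latency difference comes down to when a generated frame can be displayed. Here is a toy timeline sketch of that idea (the frame time and helper functions are illustrative assumptions, not Intel's or NVIDIA's implementation):

```python
# Toy timeline (in ms) showing why interpolation adds latency while
# extrapolation does not. Purely illustrative numbers.

FRAME_TIME_MS = 33.3  # assumed native render time per frame (~30 fps)

def interpolated_display_time(n):
    """An interpolated frame sits between real frames n and n+1, so real
    frame n can only be shown after frame n+1 has also finished rendering.
    Holding frame n back is what adds input latency."""
    frame_n1_done = (n + 1) * FRAME_TIME_MS
    return frame_n1_done

def extrapolated_display_time(n):
    """An extrapolated frame is predicted from frames up to n, so real
    frame n can be shown as soon as it is rendered: no added latency."""
    return n * FRAME_TIME_MS

added_latency = interpolated_display_time(1) - extrapolated_display_time(1)
print(f"extra latency from interpolation: {added_latency:.1f} ms")
```

In this simplified model the interpolated path holds each real frame back by one full frame time; the trade-off is that extrapolation has to guess at content it has not seen yet, which is where the extra artifacts come from.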

As Intel explained:

“Frame interpolation and extrapolation are two key methods of Temporal Super Sampling. Usually frame interpolation generates better results but also brings latency when generating the frames. Note that there are some existing methods such as NVIDIA Reflex [NVIDIA 2020] decreasing the latency by using a better scheduler for the inputs, but they cannot avoid the latency introduced from the frame interpolation and is orthogonal to the interpolation and extrapolation methods.

The interpolation methods still have larger latency even with those techniques. Frame extrapolation has less latency but has difficulty handling the disoccluded areas because of lacking information from the input frames. Our method proposes a new warping method with a lightweight flow model to extrapolate frames with better qualities to the previous frame generation methods and less latency comparing to interpolation based methods.”
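Intel has not detailed how its "lightweight flow model" works, but the disocclusion problem the quote mentions is easy to see in any forward-warping scheme: pixels are scattered along their motion vectors, and areas that nothing maps onto become holes. A minimal sketch, assuming a simple per-pixel integer flow field (not Intel's actual method):

```python
import numpy as np

def forward_warp(frame, flow):
    """Scatter each pixel along its flow vector to predict the next frame.
    Pixels that no source pixel maps onto remain NaN: these are the
    disoccluded areas that extrapolation struggles to fill."""
    h, w = frame.shape
    warped = np.full((h, w), np.nan)
    for y in range(h):
        for x in range(w):
            dy, dx = flow[y, x]
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                warped[ny, nx] = frame[y, x]
    return warped

frame = np.arange(16, dtype=float).reshape(4, 4)
flow = np.zeros((4, 4, 2), dtype=int)
flow[:, :, 1] = 1  # every pixel predicted to move one pixel to the right
warped = forward_warp(frame, flow)
holes = int(np.isnan(warped).sum())
print(holes)  # the left column is never written to -> 4 disocclusion holes
```

An interpolation method can fill such holes by borrowing pixels from the *next* real frame; an extrapolation method does not have that frame yet, which is presumably what Intel's flow model is designed to mitigate.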

So, in theory, ExtraSS should provide a better gaming experience. Whether Intel can match the image quality of DLSS 3 and FSR 3.0, though, remains to be seen.

Intel has also shared some comparison screenshots between ExtraSS and other anti-aliasing and upscaling techniques. These screenshots look promising. However, we’ll have to test ExtraSS ourselves to see whether it can compete with DLSS 3 and FSR 3.0.

[Images: Intel ExtraSS comparisons 1 and 2]

Let’s not forget that in its initial tests, AMD FSR 3.0 also looked great. When it launched, though, it had major issues. Things improved over time, but FSR 3.0 is still inferior to DLSS 3.

Finally, ExtraSS will work on GPUs from all vendors. Contrary to DLSS 3, ExtraSS won’t be locked to the company’s own GPUs. So, it will be interesting to see whether developers will prefer ExtraSS over FSR 3.0.

Stay tuned for more!

38 thoughts on “Intel shares first details about ExtraSS which is its answer to NVIDIA DLSS 3 and AMD FSR 3.0”

  1. Wow, ExtraSS. Then it must be better than DLSS 3 and FSR 3 since it’s Extra. Surely Intel wouldn’t just try to blow smoke up our a*s holes.

    /s

  1. Reviewed my recent comments and I think you’re right. I have been bringing a more negative attitude in my comments. I will be aware of that going forward.

    1. I don’t think that’s going to happen. Nvidia wants us buying 4xxx cards. The Supers will be coming out next month along with reviews. I’m interested if the price is right.

      1. Exactly, you can’t add circuitry to an existing chip …… It’s basically the same reason you can’t do DLSS on a 10 series card, it simply lacks the proper circuitry

    2. Never going to happen; FSR 3 or this is what you’ll get. But if you have those card gens, you’ll just update your card or laptop in a few years anyway, and it’ll be included in there by default. 2060m here; next up in a couple of years, probably a 5050 Ti or something. Def trying to ramp DOWN gaming to minimal by then. More productive hobby like woodworking or model building etc.

  2. Yay, another feature to generate fake frames and performance increases, instead of doing a proper optimization and performance pass.

    1. As long as I get to 145 fps in AW 2, I don’t really mind. Although I would prefer pure rasterization performance, that’s for sure.

    2. Also if you get high enough for people not to notice it you can put subliminals in there easily, ads could be put in there and you wouldn’t notice at high enough added frames, mind control. They also have phones with cameras UNDER the screen just like 1984 had, chinese company had them but we’ve heard nothing for a few years now. notebookcheck had an article about it couple years ago. Can’t put tape over that. https://www.notebookcheck.net/Xiaomi-third-gen-under-screen-selfie-camera-tech-to-debut-in-2021.490252.0.html


    4. All frames are fake… They are just a series of 1’s and 0’s.

      It’s just like ALL video and graphics are rasterized, because all monitors raster-scan one line at a time… I don’t care if it’s ray tracing or video, it’s ALL rasterized and sent to your monitor one line at a time. That’s what raster means.

  3. No additional latency for more artifacts? I’ll take it!

    I don’t think I’d notice or care about some sparse bent shapes here or there or weird wires/fences/vegetation, but I’d definitely take the clean input latency… unless it creates very noticeable ghosting in moving things like FSR, for which, I’d still take the added fps. Hopefully it’s not that bad.

          1. Yes genius, but artifact doesn’t imply or equal ghosting. I should have used a different sign especially for you.

  4. DLSS 3.0 and FSR 3.0 both suck because they introduce significant input lag. What they do is delay the latest rendered frame, and first interpolate it with the previous one to create an interpolated frame. This causes your input to be delayed by half a frame.

    People who argue this input lag is manageable at higher framerates, are idiots, because if you have high framerates you don’t need DLSS or FSR to begin with. At lower framerates, where DLSS and FSR are used, that input lag is significant.

    So the only way I see Intel could do this without lag, is if it guesses what will be on screen next based on previous frames, player movement and previous gameplay sessions, just like net code guesses which direction players will likely be in to smooth movement. This will of course introduce artefacts since those guesses will often be wrong.

    Both techniques suck imo. But Intel’s technique sucks less.
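The "guess what will be on screen next" idea this comment describes is essentially dead reckoning, the same trick netcode uses for player movement. A toy sketch of constant-velocity extrapolation (illustrative only, not Intel's actual method):

```python
# Predict the next position of an object from its last two observed
# positions, assuming constant velocity (dead reckoning).
def extrapolate(pos_prev, pos_curr):
    """Linear extrapolation: next = current + (current - previous)."""
    velocity = pos_curr - pos_prev
    return pos_curr + velocity

# Object moved from 10 px to 14 px; predict 18 px for the generated frame.
pred = extrapolate(10, 14)
print(pred)  # 18
```

The prediction is wrong whenever the object accelerates or changes direction between frames, which is exactly why extrapolation trades interpolation's latency for a higher risk of artifacts.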


  5. I don’t pretend to be an expert here, but this is mostly for 4K+ players, right? Otherwise most cards from the 1070 forward can handle most games at 1080p at high settings at or near 60, and at 2K some games at lower/mixed settings at 60; it depends on the paired CPU too, of course. I get that there are 120Hz+ screens now, and of course big AAA games are released to a marketing/holiday sales schedule no matter what state they’re in; the game addicts just want their “new stuff” hit.
    If the latest game can’t be played at NATIVE 4K/60, it’s just going to have to wait, or be locked at 45 or 30 or something, or turned down to very high instead of ultra, etc. Then NVIDIA, ever coming up with gimmicks to sell cards, comes out with DLSS, which is the main benefit for anyone on 2xxx-level cards and lower-end 3xxx cards; RT is still too much of a penalty there to be worth it. And now AI guessing at the next frame, using what I think are custom cores in the GPU, at so high a frequency that you don’t notice it.
    I’m thinking at some point this will be like PhysX, and the game itself will incorporate software-based stuff by itself and will work with both the GPU and CPU to do this, not just the GPU. But how much fps do people need? I don’t know that screens are going to keep going +++Hz; it’s not worth it. For most games 200fps is a total waste; only online PvP like CS:GO really makes sense. Then again, 8K+ would just start another round of new shiny for little benefit.

