NVIDIA officially collaborates with Microsoft for the DirectX Raytracing (DXR) API, first screenshots

As promised, NVIDIA today announced its collaboration with Microsoft on a brand new DirectX API that will allow real-time ray tracing. According to the teams, the DirectX Raytracing (DXR) API will enable straightforward access to real-time ray tracing running on development graphics hardware, and below you can find the first screenshot, from Remedy’s Northlight Engine.

Game developers that use this DXR API will be able to integrate real-time ray tracing into their game engines and create a seamless hybrid between the rasterized and the ray-traced image. Ray tracing can be used to add quality and accuracy to rendering effects that are difficult to achieve with traditional rasterization techniques.
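
A minimal host-side sketch of what such a hybrid frame can look like, written against the D3D12 interfaces DXR later shipped with; the pipeline object and the shader-table addresses and sizes are assumed to already exist and are purely illustrative:

```cpp
// Sketch of a hybrid frame: rasterize as usual, then launch a ray-traced
// pass (e.g. reflections) over the same targets. Root signature, resource
// bindings and the composite pass are omitted for brevity.
#include <d3d12.h>

void RenderHybridFrame(ID3D12GraphicsCommandList4* cmdList,
                       ID3D12StateObject* rtPipeline,  // ray tracing pipeline
                       D3D12_GPU_VIRTUAL_ADDRESS rayGenRecord, UINT64 rayGenSize,
                       D3D12_GPU_VIRTUAL_ADDRESS missTable,    UINT64 missSize,
                       D3D12_GPU_VIRTUAL_ADDRESS hitTable,     UINT64 hitSize,
                       UINT width, UINT height)
{
    // 1) Normal rasterization passes go here (G-buffer, direct lighting).

    // 2) Ray-traced pass: bind the ray tracing pipeline and launch one ray
    //    per pixel; the ray generation shader decides what gets traced.
    cmdList->SetPipelineState1(rtPipeline);

    D3D12_DISPATCH_RAYS_DESC desc = {};
    desc.RayGenerationShaderRecord.StartAddress = rayGenRecord;
    desc.RayGenerationShaderRecord.SizeInBytes  = rayGenSize;
    desc.MissShaderTable.StartAddress  = missTable;
    desc.MissShaderTable.SizeInBytes   = missSize;
    desc.MissShaderTable.StrideInBytes = missSize;  // assumes one miss shader
    desc.HitGroupTable.StartAddress    = hitTable;
    desc.HitGroupTable.SizeInBytes     = hitSize;
    desc.HitGroupTable.StrideInBytes   = hitSize;   // assumes one hit group
    desc.Width  = width;                            // one ray per pixel
    desc.Height = height;
    desc.Depth  = 1;
    cmdList->DispatchRays(&desc);

    // 3) Composite the ray-traced output with the rasterized image.
}
```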

As noted, some of the improvements include accurate soft shadows, ambient occlusion, reflections and global illumination.

Remedy, one of the few developers with access to this API, has shared the following screenshot, showing this new DXR API in full effect. Epic Games has also shared a screenshot from Unreal Engine 4 using the DXR API.

Mikko Orrenmaa, the Technology Team Manager of Remedy Entertainment, said:

“Integrating NVIDIA RTX technology into our Northlight engine was a relatively straightforward exercise. Developing exclusively on NVIDIA GPUs, we were surprised just how quickly we were able to prototype new lighting, reflection, and ambient occlusion techniques, with significantly better visual fidelity than traditional rasterization techniques. We are really excited about what we can achieve in the future with NVIDIA® RTX. Gamers are in for something special.”

Tony Tamasi, senior vice president of content and technology at NVIDIA, added:

“Remedy has accomplished something very exciting in a short amount of time using NVIDIA RTX on Microsoft’s DXR. The combination of today’s powerful GPUs, industry standard API support and game engine integration finally puts real-time ray tracing within reach for the graphics industry and game developers.”

Among the game engines that currently support the DXR API are Unreal Engine 4, Unity, Frostbite and Northlight.

Last but not least, the teams noted that this technology remains at the research level, and as such it is still far from being implemented in an actual video game!

Video: Experiments with DirectX Raytracing in Northlight

Video: SEED - Project PICA PICA - Real-time Raytracing Experiment using DXR (DirectX Raytracing)

64 thoughts on “NVIDIA officially collaborates with Microsoft for the DirectX Raytracing (DXR) API, first screenshots”

    1. I won’t be surprised if it cripples all of Nvidia’s current and future GPU lineups, tbh.

      Nvidia always seems to boast about new and better tech, but it always ends up being used very little, usually by maybe 5% of the high-end PC gaming group 5+ years down the road, rather than by 50-60% on day one.

      1. 50~60% would be kinda impossible, as the number of people with a good GPU is always small. New tech will always be targeted at future hardware. In the case of ray tracing, it’s an old technique that was always very demanding, so it’s unrealistic to expect that even the new GPUs will be able to get really good performance.

        1. Ray tracing is very demanding, but a form of “ray tracing” is already used in games; voxel-based effects, for instance, attack the same problem with a slightly different approach, comparable to tracing light rays in a simpler way.
          I do not see this becoming mainstream, but it is certainly a good thing that there is work in that area.

        2. The problem with aiming for future tech is that the future tech eventually becomes old tech, and the next future tech is already outdated by the time it is viable.

          By trying to “future proof” tech, you’re making it next to impossible for it to gain massive widespread use, as well as advancements in multiple fields. The way Nvidia goes about this every few years just shows that they want as few people using it as possible, while still being able to maintain the tech hype.

    2. As I understand it, this technology is based on the Volta arch., so currently no mainstream gaming GPU supports it. Vega could actually support similar techniques via Rapid Packed Math.

      1. The first GPU-accelerated bump-mapping tech was showcased in late 1999 / early 2000, with only the GeForce 256 supporting it through a very pricey implementation. Yet already by 2003, NOT having bump mapping was making graphics look dated. And it’s like that with every major visual advance. By the time games start implementing it, midrange GPUs will boast the feature.

  1. It may be a talking point that Microsoft is collaborating with Nvidia when its consoles feature AMD hardware. A potential nod to them ditching AMD for the next Xbox console.

    1. Nvidia wants to monopolize the market with MS, so things can only get worse with this API announcement… meh

      1. You saw the word “API” and think it’s a DX12 replacement.

        Intel SGX API will replace DX12/Vulkan hurr durr..
        “The SDK is a collection of APIs, libraries, documentation, sample source code, and tools that allows software developers to create and debug applications enabled for Intel SGX in C and C++.”

      2. NVIDIA can’t really monopolize the market while there is competition, but it’s a common thing to happen; even small businesses do it, like on high streets, putting their prices up because there is no competition. There really is no competition when it comes to Microsoft, like with Windows and DirectX, but developers, like customers, want a good thing; they don’t care if it might become a monopoly later, they want what’s good for them now.

        1. Customers always get what they “want”, and deserve it for that matter. So if they “want” a monopoly, they can and most certainly will get it.

  2. This makes little sense now, because this so-called RTX API will only work on Volta GPUs and the upcoming next-gen GPU lineup (if that ever sees the light of day).

    The TITAN V is currently the only card that supports this feature, and that’s surely a high-end card.

    The Tensor cores in the Volta GPU arch might help accelerate this ray tracing, like the same ones used to power, say, the AI denoiser, so we might see this implemented in games later on as well, if implemented properly.

    It’s not THAT far from being implemented in a video game IMO, but this still remains to be seen.
    —————-
    Btw, on a side note, this is what AMD has to say regarding this whole Microsoft/Nvidia announcement:

    “AMD is collaborating with Microsoft to help define, refine and support the future of DirectX12 and ray tracing.

    AMD remains at the forefront of new programming model and application programming interface (API) innovation based on a forward-looking, system-level foundation for graphics programming. We’re looking forward to discussing with game developers their ideas and feedback related to PC-based ray tracing techniques for image quality, effects opportunities, and performance.”

    1. Lol, the Titan V is a $2,999 card. Pretty sure mostly high-end upcoming gaming GPUs will take advantage of this tech.

      1. FYI, the Titan V is actually NOT a gaming card, though.

        I don’t find this GPU practical for gaming purposes, even at 4K resolution, based on the card’s specs/architecture as well as its pricing.

      2. It’s that price because of the double-precision performance and the Tensor cores for AI learning. The Titan Xp doesn’t have those; that’s a gaming card.

          1. Yeah indeed, that’s true, but VXGI is a different feature.

            VXGI actually works on most DX11-enabled cards, including AMD ones (DirectX 11 hardware).

            Mike Skolones from the Unreal Engine forums has given some insight into this before as well.

            The NVIDIA 900 series and all the later cards actually just provide better hardware features that allow us to get more performance out of VXGI.

      1. RTX really is Volta-exclusive, but DXR, as mentioned above, can use a direct compute path; in other words, older GPUs can only benefit from DXR!
        RTX is another story: it uses hardware accelerators in Volta GPUs, like the Tensor cores and ray-tracing engines.
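
For what it’s worth, once DXR shipped in D3D12 that hardware/fallback split became something an engine can query at runtime. A minimal sketch, assuming only an already-created device; the feature-support call is real D3D12 API, while the handling around it is illustrative:

```cpp
// Sketch: ask the D3D12 runtime whether this GPU/driver pair exposes
// native DXR support. GPUs without it report TIER_NOT_SUPPORTED; at
// announcement time Microsoft also offered a separate compute-based
// fallback layer for such hardware.
#include <d3d12.h>

bool SupportsNativeDXR(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false;  // older runtime: no DXR at all

    // D3D12_RAYTRACING_TIER_1_0 or higher means native DXR support.
    return options5.RaytracingTier != D3D12_RAYTRACING_TIER_NOT_SUPPORTED;
}
```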

    2. Actually, this is DirectCompute-based and uses HW async. AMD can now use this, but how well? No one knows…

    3. This will probably start working from Volta, and NVIDIA will probably announce Volta by the end of the year, but I think this is a long-term thing with Microsoft, so we might be able to see some of it next year. I’m not sure about the games 4A and Remedy are working on now; for example, I’m pretty sure we won’t see this in the new Metro or the new game from Remedy. Well, you never know, but that would mean they have been collaborating for much longer than one would think.
      Anyway, I’m not sure about AMD having the same tech; maybe similar, but not the same, like the whole FreeSync/G-Sync thing: at the end of the story they’re basically the same, but made in a different way. Also, if the collaboration is Microsoft-NVIDIA, why would someone complain if this only works on upcoming and future NVIDIA hardware? It seems pretty reasonable.

    4. How does it not make any sense? It’s hardware-level. They have been working on it with MS for years. Pascal came out because the Volta arch was not ready. And it still is not ready for gamers.

      And Tensor cores will be useless for this. Besides, I don’t see developers using this in games for a while anyway.

        1. If you look at the specs the Titan V has and the specs that actual ray tracing in video games would need… it really ain’t that far off.

    5. Since RTX is compute-based and Microsoft themselves prefer using general CUDA/stream processors to accelerate it, AMD is definitely not lagging behind. AMD is actually better ATM in that regard, thanks to their async implementation.
      There is a reason why NVIDIA uses tessellation or, for instance, their SMP in otherwise compute-based techniques (HairWorks, VXAO, etc.): they know they lack general compute performance via CUDA cores.

      In any case, having a general API for ray tracing is not a bad thing at all.

      1. Only AMD is a whole lot better at tessellation now with Polaris/Vega than they were in previous archs. If anything, based off TessMark, AMD has pretty much caught up.

  3. GTC is very close. My guess is (a) they won’t announce any release date for Volta cards, or (b) they’ll push the release date further. Regardless of what happens, one thing is for certain: miners are going to empty the shelves in a day or two, everything gets even more expensive, and we’re screwed again, except this time we get hit from both sides by miners and NVIDIA, or rather the companies that supply NVIDIA. Dark times to be a PC gamer, especially if you’re a teenager. I don’t really see how, or if, things are ever going to be normal again any time soon.

  4. About time we get some real-time ray tracing. This needs hardware-level support, though; unless next-gen GPUs have hardware ray-tracing ASICs inside their pipeline, this is not going to take off any time soon.

  5. I don’t see this working on any GPU, at least not in the near future, but if they manage to offload all the required compute tasks onto a second, less powerful GPU, that could be something to look forward to.

    1. We’re going to hit a wall with how far we can continue to shrink process nodes on silicon, and this is forcing investments and R&D into other materials. One that I’ve been trying to keep up with is research into making transistors out of carbon nanotubes. There are some problems to be worked out, but it holds the promise of chips up to 5 times faster while using 1/5th the wattage of a silicon chip.

      One of these days in the not-too-distant future there is going to be a breakthrough with some new material, and we’re going to take a huge leap forward in performance. Imagine a GPU running at 10 GHz with 5 times as many cores as a GTX 1080 Ti in a 250-watt envelope. The day is coming, because it will have to.

      1. Totally agree with you. At some point a new material will be the key to unlocking a huge leap in performance, but we need to be realistic about it: if such a material is developed, or discovered, I know chip makers are going to ask good money for it. A healthy market-share balance between NVIDIA and AMD is crucial here.

  6. A: Microsoft made this, not Nvidia

    B: It is compute based

    C: It requires HW async

    D: AMD could use this tech today ( with abysmal performance because it is IN THE RESEARCH PHASE)

    Cya in 2020 when this is actually usable

    1. The naysayers always say this sort of thing, yet cutting-edge effects like Turf Effects and VXAO are in mainstream games now thanks to NVIDIA. At least we PC users get to try these cutting-edge effects rather than having to wait two generations of consoles, or 10 years, for them to become the mainstream standard.

      1. All of it is true, btw. This is a Microsoft API that NVIDIA repackaged in the GW SDK; check the official specs, it’s all there… and since it is in the research phase, it will take some time to see it in games.

        1. Didn’t say it was untrue. Just saying that, whereas the naysayers claim tech like this won’t come into mainstream games, it will; it just takes years, maybe 5.

        2. NVIDIA said that this tech was gonna be in games later this year… probably the new Remedy game and the new Metro.

          1. Nope. What they said was that the engine can be upgraded to use it; Metro will not have this. Read the papers again: NVIDIA stated that this is only in the research state and that it will take a while for games to use it… so no, not this year.

    2. I agree with you, but we will see this in games before 2020. By 2020, though, it may hit mainstream, so you’re sort of right on the ball actually.

      We’ll see. Regardless, these are very exciting times for PC gamers and the industry.

      1. Dude, normal GameWorks did not hit mainstream, let alone this… and by mainstream I mean that every game has it.

    3. This will probably be big by the time next-gen consoles hit, so I expect them (and game engines made for them) to take advantage.

  7. This is really impressive stuff.

    The two-level hierarchy traversal structure described in the MSDN article is surprisingly efficient on the GPU. I’m really excited to see the performance on mid-range GPUs.
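
The two levels in question are bottom-level acceleration structures (built over raw triangle geometry) and a top-level structure (built over instances of those), which is the hierarchy each ray traverses. A rough sketch of describing and building the bottom level, using the D3D12 interfaces DXR shipped with; all buffer addresses and counts are placeholder assumptions:

```cpp
// Sketch: build a bottom-level acceleration structure (BLAS) over one
// triangle mesh. A TLAS would then be built over instances of BLASes.
// Allocation of the scratch/result buffers from the prebuild info is
// not shown.
#include <d3d12.h>

void BuildBottomLevel(ID3D12Device5* device,
                      ID3D12GraphicsCommandList4* cmdList,
                      D3D12_GPU_VIRTUAL_ADDRESS vertexBuffer, UINT vertexCount,
                      D3D12_GPU_VIRTUAL_ADDRESS indexBuffer,  UINT indexCount,
                      D3D12_GPU_VIRTUAL_ADDRESS scratch,
                      D3D12_GPU_VIRTUAL_ADDRESS result)
{
    // One opaque triangle mesh; a BLAS may contain many geometry descs.
    D3D12_RAYTRACING_GEOMETRY_DESC geom = {};
    geom.Type  = D3D12_RAYTRACING_GEOMETRY_TYPE_TRIANGLES;
    geom.Flags = D3D12_RAYTRACING_GEOMETRY_FLAG_OPAQUE;
    geom.Triangles.VertexBuffer.StartAddress  = vertexBuffer;
    geom.Triangles.VertexBuffer.StrideInBytes = 3 * sizeof(float); // xyz position
    geom.Triangles.VertexFormat = DXGI_FORMAT_R32G32B32_FLOAT;
    geom.Triangles.VertexCount  = vertexCount;
    geom.Triangles.IndexBuffer  = indexBuffer;
    geom.Triangles.IndexFormat  = DXGI_FORMAT_R32_UINT;
    geom.Triangles.IndexCount   = indexCount;

    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_INPUTS inputs = {};
    inputs.Type           = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_TYPE_BOTTOM_LEVEL;
    inputs.DescsLayout    = D3D12_ELEMENTS_LAYOUT_ARRAY;
    inputs.NumDescs       = 1;
    inputs.pGeometryDescs = &geom;

    // The driver reports how big the scratch and result buffers must be.
    D3D12_RAYTRACING_ACCELERATION_STRUCTURE_PREBUILD_INFO prebuild = {};
    device->GetRaytracingAccelerationStructurePrebuildInfo(&inputs, &prebuild);

    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_DESC build = {};
    build.Inputs                           = inputs;
    build.ScratchAccelerationStructureData = scratch;
    build.DestAccelerationStructureData    = result;
    cmdList->BuildRaytracingAccelerationStructure(&build, 0, nullptr);
}
```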

  8. The fact that this works in real time is quite incredible. The only logical next step for game engines in terms of lighting is essentially ray tracing, because it seems all the current capabilities have been reached. I’m glad it’s in the works now. Hardware should always be playing catch-up to software, to push both manufacturers and devs.
