NVIDIA has announced that it will be bringing real-time ray tracing to its Pascal GPUs. According to the green team, a new driver will enable all GTX 10 series graphics cards, from the GTX 1060 all the way up to the GTX 1080 Ti, to run real-time ray tracing via the DXR API.
As NVIDIA noted, GeForce GTX gamers will be able to use ray tracing at lower RT quality settings and resolutions, while GeForce RTX users will see up to 2-3x faster performance. Naturally, that performance advantage comes from the dedicated RT Cores that Turing GPUs feature.
Realistically speaking, we don’t expect many GTX gamers to be able to enjoy the real-time ray tracing effects at an acceptable framerate-to-visuals ratio. Still, this is great news, as it will allow more developers to start implementing real-time ray tracing effects in their games.
NVIDIA recommends that Pascal owners use basic real-time ray tracing effects with a low ray count. The driver that will enable RT on the Pascal GPUs will come out in April, and the supported GeForce GTX graphics cards will work without requiring any new game updates.
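For those curious how games will pick this up without patches: titles don’t check for an RTX card by name, they ask Direct3D whether the driver exposes DXR at all. Here is a minimal sketch of that capability query (our own illustration, not NVIDIA’s code; assumes a Windows SDK with DXR support, i.e. 10.0.17763 or newer):

#include <cstdio>
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    // Create a device on the default adapter.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12 device available.\n");
        return 1;
    }

    // OPTIONS5 carries the ray tracing tier; a DXR-capable driver reports
    // at least D3D12_RAYTRACING_TIER_1_0 here. After the April driver,
    // Pascal cards should presumably report this tier too, just without
    // RT Core acceleration behind it.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &options5, sizeof(options5))) &&
        options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0) {
        std::printf("Driver exposes DXR (tier %d).\n",
                    (int)options5.RaytracingTier);
    } else {
        std::printf("No DXR support on this driver/GPU combo.\n");
    }
    return 0;
}

Since existing DXR titles already gate their RT options on this query, the new driver flipping the reported tier is plausibly all it takes for those options to light up on GTX cards.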

John is the founder and Editor-in-Chief at DSOGaming. He is a PC gaming fan and highly supports the modding and indie communities. Before creating DSOGaming, John worked on numerous gaming websites. While he is a die-hard PC gamer, his gaming roots can be found on consoles. John loved – and still does – the 16-bit consoles, and considers SNES to be one of the best consoles. Still, the PC platform won him over consoles. That was mainly due to 3DFX and its iconic dedicated 3D accelerator graphics card, Voodoo 2. John has also written a higher degree thesis on “The Evolution of PC Graphics Cards.”

First!
After how Crytek exposed NVIDIA, now RT is possible even on 10xx cards 😀
Ray tracing is even possible on ancient GPUs.
No one ever said RT was a hardware-exclusive feature of RTX GPUs.
Also, what S0ldier said.
NVIDIA has shown slides from the beginning contrasting the performance of Pascal cards against Turing cards with the RT/Tensor cores in place.
So, no, not exposed at all in reality.
If they could match RT Core ray tracing performance without those cores, then they would have exposed NVIDIA, but that won’t happen.
Where’s the 1050 Ti there, you racists! I wanna play ray-traced Quake 2!
What, you want a slideshow?! Might as well aggregate a bunch of screenshots and flip through them in PowerPoint! 😀
I have a 1060 6GB Xtreme Gaming for sale, want it?
I hear what you’re saying, as I too have a 1050 Ti. But I don’t want to play RT Quake 2 at 1024×768 like I did back when it launched. I’m totally pulling the trigger on a 2080 when I get my tax money.
Looks like RTX did not sell enough, lmfao.
Contrary to what some people have been thinking, RT cores are actually not some gimmick though. Just saying.
https://uploads.disquscdn.com/images/61ad9b38a9054810509e3152b119c0ea72c5d40566f992e48e5eda4844960756.png
It was only a matter of time before DXR support came to older GPUs, since it’s an API extension of DX12. “That it is technically supported was never in question; the question was how fast they can do it.”
Also, DXR has never been an NVIDIA-exclusive thing as many think. Every DX12-capable card can access and use it, it’s just slow (depending on the HW).
When it comes to AMD, I think they are still free to provide DXR support through their D3D12 drivers though.
Any D3D12 GPU is capable of running this DXR code, since it is just an extension of DirectCompute. Slow performance remains a totally different issue though (imagine running the same operations on a GPU without the specialized cores).
AMD had actually stated before that all of its DX12 GPUs support real-time ray tracing via Microsoft’s DXR fallback layer, though I think its development was kind of halted, as it was deemed unnecessary…
Also, NVIDIA used Metro Exodus as an example to show how the work is distributed across the cores on Turing and other cards.
For example, the 1080 Ti, which relies only on its FP32 cores to render the frame, took a lot of time, whereas the Turing 2080 with its INT32 cores halved that time even without RT cores, as is evident from the graph imo.
Moving to Turing with all cores active (FP32, INT32, RT, Tensor), it seems the time is almost 1/4 of what the Pascal card took to render the same frame.
Overall, it looks like FP32 and INT32 are responsible for 12 TFLOPs, while the RT cores enable 23 TFLOPs, and the Tensor cores give 9 TFLOPs of rendering performance. Slides from the conference below, with a rough reading of the numbers after them.
https://uploads.disquscdn.com/images/2259ba2d12c12cc0179a4ecf644364930d80cb7715300e4524b6a13e0f087bb0.png
https://uploads.disquscdn.com/images/770f0946a3689e58e58ccf57c3a802ce34ca26afbc71f6b4c12669ff1f0c8341.png
https://uploads.disquscdn.com/images/3e2a213e34ada3dfee3db299add46fdf8750ea0c86b3c45c5e23dd72f6ccb25a.png
https://uploads.disquscdn.com/images/4215cfb2a492505129de7ed7282a2f2e5ffef7e73157d56ae94f9702b5113dc7.png
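For what it’s worth, reading that graph as rough frame-time ratios (our own back-of-the-envelope numbers from the slide, not official figures), with T being the 1080 Ti’s FP32-only frame time:

\begin{align*}
T_{\text{2080, FP32+INT32}} &\approx \tfrac{1}{2}\,T \\
T_{\text{2080, FP32+INT32+RT+Tensor}} &\approx \tfrac{1}{4}\,T
\end{align*}

which would put the RTX card in the 2-4x range overall, roughly in line with the “up to 2-3x faster” figure NVIDIA quotes.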
Let’s not call it a gimmick, but in this day and age, when we are trying to get 4K at 144Hz or so, limiting our GPU’s die space by filling it with ray tracing hardware that takes up 40% of the die seems dumb.
7nm or 5nm will probably be the last major upgrade in terms of fabrication, and spending 40% of our die space on something 10 games or fewer use isn’t smart. Instead, ray tracing should be its own dedicated card.
4K 144Hz is a long way off yet, so the logical step for them to offer something truly different and a visual leap was ray tracing.
It’s not for everyone, clearly, but it was the only truly viable leap forward.
Thing is, who in the hell would want to use it? Very little benefit for the hit to frame rates.
Dumbest thing Nvidia ever did
You can’t have it all.
You can’t even have 4k Ultra settings 60 fps in all games currently.
I do see the benefits of ray tracing, but I agree; I’m personally happy to live with approximated effects for now, as I would rather wait until the technology can support it more easily. But it’s clearly of interest to people: it has sparked massive interest among developers and the community, and even AMD is talking about maybe supporting it with its next cards in some form.
In my honest opinion, they took a gamble on how best to use technology they had already developed as part of their AI business, and this was almost a free hit in a way. If it paid off and sold well, they could really push it going forward.
If it bombed and the interest wasn’t there, they would at least retain their position as having the most powerful GPUs out there, as AMD still isn’t offering any competition.
Nvidia had to start somewhere, because ray tracing really needed an entry into the gaming/consumer market as well, since it’s an industry standard in CGI.
They delivered us these new Turing GPUs, and this had to come up sooner or later. I appreciate NVIDIA for their continued innovation in technology though.
IMO, apart from all the above factors, it seems we are basically paying an “early adopter” price for this new Turing tech/hardware, hence the premium.
We all know that Nvidia has totally changed the GPU arch as well, with the addition of new RT and Tensor Cores, and other design/pipeline improvements (memory/cache) etc.
But very few games and applications that take proper advantage of this hardware are currently out in the market. So, basically, the hardware won’t get fully utilized (if we look at it from this perspective).
Also, how well some of the upcoming games will actually perform on a Turing GPU, with real-time ray tracing and DLSS, still remains to be seen. I think it will take at least another 2-3 years for this whole RTX technology to become mainstream.
As of now, only some PC titles are going to take full advantage of this new RTX feature set, provided game developers also adopt and implement ray tracing and DLSS deep-learning AA in their games.
Still, it’s good to see new Tech being released. With time things might settle down a bit, and the performance gain might be there when DLSS and Ray Tracing features are enabled.
Why not make it a separate card? Instead of wasting 40% of the die on something only a few games will support, maximize performance and then create a dedicated ray tracing card.
4K 144Hz is already here…
Turn everything to LOW…
Or in the future you’ll be chasing 144hz some more.
RT cores are not, but ray tracing at this point is. I already bought a 2080 Ti for it though.
OK, with DX12 coming to Windows 7 + DXR-enabled RT effects via driver… let’s see how my GTX 1080 holds up!
Not that exciting, to be honest. I mean, every GPU can pull off ray tracing. The only notable feature of the whole RTX thing is the specialized real-time ray tracing cores.
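To that point, the core of any ray tracer is just a tight loop of ray-primitive intersection math, which ordinary FP32 ALUs (or even a CPU) can grind through; RT cores merely hardwire it. A toy sketch of that kernel (our own illustration, unrelated to NVIDIA’s implementation):

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Ray-sphere intersection: solve |o + t*d - c|^2 = r^2 for the nearest t > 0,
// assuming direction d is normalized. Run per ray, per primitive, this is the
// kind of arithmetic that RT cores (plus BVH traversal) accelerate in hardware.
static bool hitSphere(Vec3 o, Vec3 d, Vec3 c, float r, float& t) {
    Vec3  oc   = sub(o, c);
    float b    = dot(oc, d);
    float disc = b * b - (dot(oc, oc) - r * r);
    if (disc < 0.0f) return false;   // ray misses the sphere entirely
    t = -b - std::sqrt(disc);        // nearer of the two intersection points
    return t > 0.0f;                 // reject hits behind the ray origin
}

int main() {
    float t = 0.0f;
    // Ray from the origin straight down -z toward a unit sphere at z = -5.
    if (hitSphere({0, 0, 0}, {0, 0, -1}, {0, 0, -5}, 1.0f, t))
        std::printf("hit at t = %.2f\n", t);   // expect t = 4.00
    else
        std::printf("miss\n");
    return 0;
}

Doing this naively for every pixel against every primitive is exactly why it’s slow without dedicated hardware.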
Project Sol Part 3: A Real-Time Ray-Tracing Cinematic Scene Powered by NVIDIA RTX.
https://www.youtube.com/watch?time_continue=1&v=b2WOjo0C-xE
I’d rather get DLSS.
Yes, obviously they are supported as well, because these are Turing GPUs.
CryEngine is able to render ray tracing on non-RTX cards. Guess what? A Vega 56.
https://youtu.be/1nqhkDm2_Tw
“We will enable RTX for 10xx users so that when they see the horrible performance they get, they will maybe consider buying a 20xx series card.”
We will be trolled though. It’s an April Fools’ joke from NVIDIA… 😛