Back when the Turing graphics cards were not yet available to the public or to developers, DICE and other studios were experimenting with real-time ray tracing on the NVIDIA Titan V GPU. As such, it should not really surprise anyone that Battlefield 5’s RTX effects can actually run on this particular card, even though it does not feature any RT cores.
NVIDIA has claimed that the RT cores accelerate the calculations behind the real-time ray tracing effects, providing a significant performance increase. 3DCenter recently shared some figures, and as we can see, there is a big performance difference between the RT and non-RT graphics cards that were tested.
Let’s start with the most promising comparison that 3DCenter shared. In an unspecified map, the Titan V was able to run Battlefield 5 at 80fps on Ultra details with RTX set to High, whereas the Titan RTX pushed 87fps on the very same settings. However, we should note that there are scenes in Battlefield 5 with minimal RTX effects. Battlefield 5 uses RTX for its reflections, and in scenes that do not feature a lot of them, performance should – theoretically – be very close. But what happens in maps with more reflections?
Now here is the interesting comparison. While the Titan RTX was able to keep an average of more than 80fps in the Rotterdam map (81fps with High RTX and 97fps with Medium RTX), the Titan V was simply unable to offer a smooth gaming experience on High RTX. The Titan V ran the Rotterdam map with an average of 56fps on Ultra settings with High RTX, and 67fps with Medium RTX. So basically, we are looking at a 25-30fps performance difference between the Titan RTX and the Titan V.
The same behaviour can be witnessed in the Star Wars elevator real-time ray tracing demo. That demo originally ran on the Volta architecture via the DirectX Raytracing API; however, the very same demo can also run on Turing GPUs.
In conclusion, while the RT cores do not provide a 10X performance boost, there is a very big performance difference between a GPU with RT cores and one without. Let’s also not forget that the Titan V comes with 5120 shader units, whereas the Titan RTX comes with 4608 shader units.
So, in case you were wondering, the RT cores are not a gimmick. Still, I have to say that it would be really cool if DICE provided a software mode for its real-time ray tracing effects, so that all players could test them and see whether their GPUs are powerful enough to run them that way.
Kudos to our reader “Metal Messiah” for spotting it.

But still, the Titan V has DL (Tensor) cores, which are required for fast denoising.
Denoising is the last stage of RTX. That test just shows that Deep Learning is equally as important as RT cores for ray tracing.
OK, you're right, Nvidia really did stop talking about AI denoising after the SW demo. Still, why don't they use compute for RT?
“We need something different than the classic approach for games.”
Claybook (2015) already uses Sphere Marching, a.k.a. Sphere Tracing, a.k.a. Signed Distance Function ray tracing, in 4K, minus RT reflections.
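For anyone who hasn't seen the technique: sphere tracing just marches a ray forward by whatever distance the signed distance function reports to the nearest surface. A rough CPU-side sketch (purely illustrative, not Claybook's actual code; the SDF here is just a unit sphere):

```cpp
#include <cmath>

// Minimal sphere-tracing sketch: step along the ray by the distance to the
// nearest surface, as reported by a signed distance function (SDF).
struct Vec3 { float x, y, z; };

static Vec3  add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3  scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
static float length(Vec3 v)         { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Example SDF: a sphere of radius 1 centered at the origin.
static float sceneSDF(Vec3 p) { return length(p) - 1.0f; }

// Returns the distance along the ray to the first hit, or -1 on a miss.
float sphereTrace(Vec3 origin, Vec3 dir, float maxDist = 100.0f)
{
    float t = 0.0f;
    for (int i = 0; i < 128 && t < maxDist; ++i) {
        float d = sceneSDF(add(origin, scale(dir, t)));
        if (d < 1e-4f)   // close enough to the surface: count it as a hit
            return t;
        t += d;          // safe step: nothing can be closer than d
    }
    return -1.0f;        // miss
}
```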
Also, the only big deal with RT(X) is real-time reflections; nobody will notice correct direct occlusion, global illumination and soft shadows until they compare screenshots of rasterized shadows and RT shadows side by side. Some would point out that Crytek's implementation of SSDO with bounce light gives better global illumination, because it takes light color into its calculations.
I still don't like Nvidia's ideas 'cos they are always closed, with minimal possibility to modify.
Only real path tracing will be a game changer.
I like how we agree on some things:)
It's more like $1600-$1900 here (depending on the version), lol. I wish it was near $1000... lol. I can't believe this. I'm wishing for a $1000 card haha and think it's cheap lol. When will AMD wake up or Intel join the GPU market… Nvidia are being super ballsy.
I miss the times when the best card was like $300-500, when the entire build was around 1k. Now CPU, GPU and RAM push it WELL over $2000 big time.
Nvidia started us on the path to ever-higher prices because they got away with selling the midrange Kepler GTX 680 for $500 seven years ago. It was clearly a midrange GPU just looking at the 256-bit memory bus alone, but some people were talking it up as the Kepler flagship. I remember even some so-called GPU experts on Tom's Hardware at the time making that claim.
So a GTX 1080 Ti can match an RTX 2070 😀
No, that is Pascal. Titan V is Volta and has 5120 CUDA cores.
Right, but Tensor cores alone are just not enough for how DXR is being handled in games.
BFV does not use Tensor cores.
Tensor cores are not exposed to game APIs like DX12 or VK.
To use them, you need to license GameWorks, AFAIK.
“because RTX cores aren't needed to run DXR with decent results”
This is right only for simple cases with a few simple objects, not for games with current graphics. In current games, you need RT cores to run ray tracing in real time. It is not possible without them.
I am pretty sure it is just a hoax. I could be wrong, but without seeing real evidence of this, I cannot believe that software ray tracing comes close to hardware acceleration.
LOL.
“Battlefield 5 uses RTX for its reflections and in scenes that do not feature a lot of them, performance should – theoretically – be very close.”
I do not think this is true. DICE developers mentioned that they also use RTX to power their PBR materials, so that would mean they ray trace EVERY frame regardless of how many reflections are apparent.
This is performance after the patch for the game. Imagine how bad it was before the performance patch. Let's face it: you need a $3000 GPU to get that performance next to hardware that uses RT cores, such as a 2080 Ti priced at $1200.
To make the test even fairer in terms of RT cores, it should honestly be Titan V vs. 2080 Ti, as well as pre-/post-patch performance.
Just remember, the actual RT cores are there to help with performance, versus having 768 more CUDA cores like the Titan V does. In fact, a guy over at WCCFtech showed off BFV on his Titan V prior to the actual performance patch, and it was terrible, while people who posted benchmarks with their 2080 Tis were smoking the Titan V in overall performance.
Actually, if we go back to NVIDIA's roadmap, there was a hint given before which claimed Volta GPUs would have support for RTX. It seems Nvidia originally planned to support ray tracing on Volta as well, powering this feature on those cards.
If you check the old Star Wars demo, you will find this text, which hints that Volta cards can power this feature: “The demonstration is powered by NVIDIA’s RTX technology for Volta GPUs, available via Microsoft’s DirectX Ray Tracing API (DXR)”.
TBH, “Volta was highly marketed for its ray tracing capabilities”! No doubt it's a $3000 professional card. But, as many think, it's not only about “instructions” or hardware. MS's DirectX Raytracing is simply an API, which means it doesn't matter how IHVs implement it; the library only requires driver support. If the driver can correctly handle it, then it can be used in production code.
But most importantly, we can't always trick DXR into working on a GPU that doesn't natively support it. I think the Titan V was already capable of doing DXR when DXR was first announced, even without the full compatibility layer.
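Just to illustrate what I mean about driver support: at the API level, an application simply asks D3D12 whether ray tracing is available at all, roughly like this (a minimal sketch, not code from any actual game; device creation is omitted). Whether that support comes from RT cores or from a driver/compute path, as on Volta, is invisible at this level:

```cpp
#include <d3d12.h>

// Minimal sketch: ask the D3D12 driver whether it exposes DXR at all.
// How the IHV implements it (fixed-function RT cores, driver/compute, etc.)
// is not visible through this query.
bool SupportsDXR(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false;
    return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```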
We actually don't always need RT cores to do ray tracing; that was never true to begin with. RT cores only increase performance when ray tracing. To put it another way, without the compatibility layer we can't ray trace in DXR with anything that is NOT a Titan V or a Turing card.
At the time of the DXR launch, the Titan V was advertised as the only card capable of DXR. My guess is that the “FP16” mathematical instructions required by DXR are also making the Titan V work faster than the 2080 Ti. Ray tracing is also very demanding on memory, so the “HBM2” used could be a major contributor to performance as well, along with the large caches and registers.
Also, in BFV, ray-traced reflections require a lot more work from the rest of the graphics pipeline, which favors the Titan V. It was mentioned earlier that this RT process essentially has two parts: the BVH part (where Turing cards gain their advantage) and the denoise part, though the Titan V ends up spending more time bottlenecked by the BVH than the 2080 Ti does.
With recent patches DICE has made improvements to how the BVH representation works, but if we check the above-linked article, performance is nearly 35% slower on a Titan V in RT-heavy maps, despite identical theoretical perf (FP32 TFLOPS) and the Titan V having 50% more FP16!
Yes. You do not even need GPUs to run ray tracing. You do not need them for graphics at all; everything can run on CPUs alone. But the performance would be horrible. 🙂
If there was a possibility to run RT in real time without the need for special cores like RT cores, AMD would already have support for it. For now, they do not even bother to provide at least SW support for DXR in their drivers.
The only thing is... Volta has Tensor cores but not RT cores. So using only the Tensor cores is like a brute-force way of handling things, much like when Nvidia first showed off RT and had to use four Titan Vs.
So it makes sense that a Titan RTX / 2080 Ti would get much better overall performance. Just a shame a 2080 Ti is $1200.
Just to clear up a misconception: contrary to what some people may think, the RT cores do not actually process ray tracing directly. These cores process the geometry information that the rays (which are being computed on the shaders) need in order to sample data more efficiently.
They do this by reading arrayed geometry data stored in Bounding Volume Hierarchies, which reduce the complexity of handling raw geometry data. These are not required for ray tracing, unless Nvidia is hiding some details concerning Volta/RTX.
As far as I know, RT cores also compute intersections between rays and scene geometry. So they are used to traverse the BVH to find the closest intersection from the ray origin along the ray direction.
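To give a rough idea of the kind of work that gets offloaded: at every BVH node the traversal has to test the ray against an axis-aligned bounding box. Here is a tiny CPU-side sketch of that classic “slab” test (just an illustration of the operation, not how the hardware actually implements it):

```cpp
#include <algorithm>
#include <cfloat>

// Rough illustration of the per-node work in BVH traversal: the "slab" test
// of a ray against an axis-aligned bounding box. RT cores accelerate huge
// numbers of these tests (plus ray-triangle intersections) in hardware.
struct Ray  { float origin[3]; float invDir[3]; };   // invDir = 1 / direction
struct AABB { float min[3];    float max[3];    };

bool intersectAABB(const Ray& ray, const AABB& box, float& tNear, float& tFar)
{
    tNear = 0.0f;
    tFar  = FLT_MAX;
    for (int axis = 0; axis < 3; ++axis) {
        float t0 = (box.min[axis] - ray.origin[axis]) * ray.invDir[axis];
        float t1 = (box.max[axis] - ray.origin[axis]) * ray.invDir[axis];
        if (t0 > t1) std::swap(t0, t1);   // handle negative ray directions
        tNear = std::max(tNear, t0);
        tFar  = std::min(tFar, t1);
        if (tNear > tFar) return false;   // the ray misses this box
    }
    return true;   // the ray enters the box; descend into its children
}
```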
So basically the actual “magic” of RTX is a mostly CPU-bound operation (building the BVH from the scene graph) and could theoretically be made available on older generations or AMD cards.
In other words, yet another scam.
YES, I obviously know how this all works... There is more to it, but I didn't explain it in detail because it's beyond the scope of this forum/comment section…
Well, NV claims this functionality on their official websites: that they are using RT cores for BVH traversal and to compute intersections between rays and triangles.
Yes, that makes sense. 🙂 I was just surprised that the Titan V, without RT cores, delivered pretty good performance results.
By the way, ray tracing has already been working on AMD cards for quite some time. The GCN arch has tons of compute power to handle ray tracing, though not in real time with Radeon Rays.
This is what AMD has to say (mostly for content creation):
Dubbed as “Radeon Rays“, the company’s ray tracing developer suite will now support real-time ray tracing in Radeon Rays 2.0. The new engine is compatible with OpenCL 1.2. Built on Vulkan, Radeon Rays 2.0 leverages the API’s advanced support for asynchronous compute to make real-time ray tracing a reality.
Now, if AMD could work with somebody like id Tech on, say, the next Doom game, I am sure they could use RT-type effects, since they are a lot more talented with Vulkan than DICE is with DX12.
Indeed.
“Damn, can’t even properly format/edit the text here on Disqus.” There is no preview button either.
However, I would have to look more into Radeon Rays 2.0 and see how that tech could carry over into actual games. But I am not a software dev, so I would have zero clue about that.
Oh wow, for AMD it's all under GPUOpen… This is making RTX look like ShameWorks 2.0.
“Oh wow, for AMD it's all under GPUOpen.”
Yup…
Well, hopefully we see some Radeon Rays demos soon, because I would like to test it out just to see what happens. I only have an RX 590, but I am still interested.
No, it is not making RTX look like ShameWorks 2.0. 🙂 RTX is an implementation of the DirectX extension for ray tracing, which is called DXR. AMD could (and I think will, in the future) do it too; NVIDIA is just first here. What you said in your comment is like criticizing AMD or NVIDIA for releasing GPUs with support for some DirectX version before the competition.
You can't even compare RT in Radeon Rays with RTX. They are completely different kinds of RT support: software RT vs. hardware RT.
So where was all the love for AMD when they always supported versions of DirectX before Nvidia? And if AMD could pull off RT with async in games and get decent performance, that would be hilarious.
You don't have to ask me. I am not an AMD hater. I never criticized them for being first at something or for supporting or developing new technologies.
What is this RT with async? Async can be used to utilize GPUs more efficiently, but it will not give you enough performance for RT. Where does this statement come from?
Well, it seems that the work of the RT cores is being done on the Titan V in a SW way. RT cores in Turing are still a good advantage, but the Titan V is achieving good results without them. From that point of view, good results should be achievable by Vega GPUs or high-end Pascals too. But that is not happening. NVIDIA would not add RT cores to their GPUs for nothing.
This ray tracing is software ray tracing, not hardware like in Turing GPUs. For example, the Star Wars demo in spring 2018 ran real-time ray tracing on four Titan Vs (software ray tracing). At the end of summer it ran on one Turing card using RTX (hardware ray tracing support). That's the difference between SW and HW ray tracing. AMD is currently at the SW stage. To me it means (comparing Titan V and Vega 64 performance) that AMD would need at least four Vega 64 GPUs to challenge one RTX 2080 Ti in RT.
“The GCN arch has tons of compute power to handle ray tracing”
What compute power do you mean? I think only FP32 performance is important for RT, and in this case NV and AMD GPUs are pretty close.
Yeah, obviously I know it's software ray tracing.
But hasn't basically everything supported it for a long time? What about the real-time ray-traced rendering view found in 3D computer graphics programs?
No, they don't. Metal Messiah did not write that. He just wrote that AMD GPUs can run ray tracing, and that is true. But it is only software ray tracing, which can't run in real time for now. And async is just the capability to run rendering and compute operations concurrently on GPUs. That can gain you some performance in some situations, but it will not help GPUs run software ray tracing in real time. It has nothing to do with that.
Async compute is about efficient usage of the GPU, but it is not connected to the ray tracing algorithm at all. You can achieve better ray tracing performance with it, but not real-time performance. I would like to see AMD's performance results. There is no possibility to match, or even come close to, the results achieved with RTX, because of the hardware acceleration.
“AMD’s way of doing raytracing is totally different compared to how Nv is doing it”
NVIDIA has been using the same approach for a decade. Software ray tracing in Radeon Rays is not new. Radeon Rays 2.0 is just using the capabilities of modern AMD GPUs, but it still uses software for rendering acceleration, not hardware.
There is no way for the PS5 to accelerate RT without HW support. 🙂
Real-time software RT is possible, that's true, with a few simple objects and a nearly empty environment. But not for a modern game at 2K and 60 FPS (RTX in BFV with an RTX 2080 Ti).
This is how it looks with a Pascal GPU and DXR:
https://www.reddit.com/r/nvidia/comments/9omn95/rtxdemo_reflections_runs_on_pascalhardware/
However, that's different than what is used for DXR… which is why you see no software under D3D working on Radeon GPUs under Windows 10.
https://gpuopen.com/gaming-product/radeon-rays/
Obviously YES, I'm fully aware of that. I was just pointing out AMD's take on ray tracing… Jeez!
RT Cores are FAKE!! Created only to push the prices into the stratosphere!
GeForce video cards are fake, created only to push the prices into the stratosphere! Hell, I can run Quake 1 with just my CPU!
Re-tards Ready
Touring To
Xtreme Xplode
$3,000 for a GPU with an 8% improvement over last gen's GTX 1080 Ti… that's a big no from me, dawg.