
AMD is working on next-gen software/hardware hybrid ray tracing technology, shares first details

AMD is about to release its new Navi graphics cards and, as we’ve already reported, these GPUs will not support ray tracing. However, it appears that the red team is already working on a software/hardware hybrid model for ray tracing, and has shared the first details about it.

According to AMD, this hybrid approach will address some issues that can be found with solely hardware-based ray tracing solutions, and will bring major performance improvements to games taking advantage of it.

As AMD detailed:

“The hybrid approach (doing fixed function acceleration for a single node of the BVH tree and using a shader unit to schedule the processing) addresses the issues with solely hardware based and/or solely software based solutions. Flexibility is preserved since the shader unit can still control the overall calculation and can bypass the fixed function hardware where needed and still get the performance advantage of the fixed function hardware. In addition, by utilizing the texture processor infrastructure, large buffers for ray storage and BVH caching are eliminated that are typically required in a hardware raytracing solution as the existing VGPRs and texture cache can be used in its place, which substantially saves area and complexity of the hardware solution.”
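The key idea in that quote (fixed-function hardware testing a single BVH node per request, while a programmable shader schedules the work and decides traversal) can be illustrated with a minimal Python sketch. Everything here, from the function names to the 1-D interval "intersection" test, is invented for illustration and is not from the patent:

```python
def fixed_function_test(node, ray):
    """Stand-in for the fixed-function unit: tests exactly ONE BVH node."""
    lo, hi = node["bounds"]
    return lo <= ray <= hi  # toy 1-D interval test in place of real ray-box math

def shader_traverse(nodes, ray, use_fixed_function=True):
    """Shader-side loop: keeps overall control and schedules node tests."""
    hits, stack = [], [0]  # traversal stack starts at the root node
    while stack:
        idx = stack.pop()
        node = nodes[idx]
        if use_fixed_function:
            hit = fixed_function_test(node, ray)  # offload one node test
        else:
            lo, hi = node["bounds"]               # bypass: pure shader math
            hit = lo <= ray <= hi
        if hit:
            hits.append(idx)
            stack.extend(node["children"])        # shader decides what's next
    return hits
```

The hybrid split is visible in the loop: the fixed-function call can be swapped out per node without changing the traversal logic that the shader owns, which is the flexibility AMD's quote emphasizes.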

It’s worth noting that AMD has patented this hybrid model, meaning we will most likely see it used in next-gen consoles. Both the PS5 and Project Scarlett support ray tracing, and since they will be using GPUs based on Navi, it is very likely that this method will be heavily used.

AMD has patented this hybrid approach as “Texture processor based ray tracing accelerator method and system.”

“The system includes a shader, texture processor (TP) and cache, which are interconnected. The TP includes a texture address unit (TA), a texture cache processor (TCP), a filter pipeline unit and a ray intersection engine. The shader sends a texture instruction which contains ray data and a pointer to a bounded volume hierarchy (BVH) node to the TA. The TCP uses an address provided by the TA to fetch BVH node data from the cache. The ray intersection engine performs ray-BVH node type intersection testing using the ray data and the BVH node data. The intersection testing results and indications for BVH traversal are returned to the shader via a texture data return path. The shader reviews the intersection results and the indications to decide how to traverse to the next BVH node.”
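As a rough mental model of that round trip (the shader issues a texture instruction with ray data and a node pointer, the TA/TCP fetch the node, the intersection engine tests it, and results come back over the texture data return path), here is a toy Python sketch. The class, method, and field names are all invented for illustration, and a 1-D interval stands in for real ray/box math:

```python
class TextureProcessor:
    """Toy model of the TP from the patent abstract: fetch a node, test it."""

    def __init__(self, cache):
        self.cache = cache  # stands in for the cache the TCP fetches from

    def texture_instruction(self, ray, node_ptr):
        # TA turns the pointer into an address; TCP fetches the BVH node data.
        node = self.cache[node_ptr]
        # Ray intersection engine: ray vs. BVH node test (toy 1-D version).
        lo, hi = node["bounds"]
        hit = lo <= ray <= hi
        # The intersection result plus traversal indications go back to the
        # shader via the texture data return path.
        return {"hit": hit, "next_nodes": node["children"] if hit else []}
```

The shader would then inspect `next_nodes` to decide which node pointer to send in its next texture instruction, exactly the review step the abstract describes.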

AMD has also shared some more details about this hybrid model, which you can read below (or you can visit this link, which features the entire patent).

“A texture processor based ray tracing acceleration method and system are described herein. A fixed function BVH intersection testing and traversal (a common and expensive operation in ray tracers) logic is implemented on texture processors. This enables the performance and power efficiency of the ray tracing to be substantially improved without expanding high area and effort costs. High bandwidth paths within the texture processor and shader units that are used for texture processing are reused for BVH intersection testing and traversal. In general, a texture processor receives an instruction from the shader unit that includes ray data and BVH node pointer information. The texture processor fetches the BVH node data from memory using, for example, 16 double word (DW) block loads. The texture processor performs four ray-box intersections and children sorting for box nodes and 1 ray-triangle intersection for triangle nodes. The intersection results are returned to the shader unit.

In particular, a fixed function ray intersection engine is added in parallel to a texture filter pipeline in a texture processor. This enables the shader unit to issue a texture instruction which contains the ray data (ray origin and ray direction) and a pointer to the BVH node in the BVH tree. The texture processor can fetch the BVH node data from memory and supply both the data from the BVH node and the ray data to the fixed function ray intersection engine. The ray intersection engine looks at the data for the BVH node and determines whether it needs to do ray-box intersection or ray-triangle intersection testing. The ray intersection engine configures its ALUs or compute units accordingly and passes the ray data and BVH node data through the configured internal ALUs or compute units to calculate the intersection results. Based on the results of the intersection testing, a state machine determines how the shader unit should advance its internal stack (traversal stack) and traverse the BVH tree. The state machine can be fixed function or programmable. The intersection testing results and/or a list of node pointers which need to be traversed next (in the order they need to be traversed) are returned to the shader unit using the texture data return path. The shader unit reviews the results of the intersection and the indications received to decide how to traverse to the next node in the BVH tree.”
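The per-node dispatch described above (up to four ray-box tests with child sorting for box nodes, one ray-triangle test for leaf nodes, and a shader-side traversal stack advanced by the returned indications) can be sketched as follows. This is an illustration, not AMD's implementation: the node layout, names, and 1-D stand-in tests are all invented:

```python
def intersect_node(node, ray):
    """Toy ray intersection engine: configures itself per node type."""
    if node["type"] == "box":
        # Box node: test each child's bounds (up to four in the patent) and
        # sort hits by entry distance so nearer children are visited first.
        hits = []
        for child_idx, (lo, hi) in node["children"]:
            if lo <= ray <= hi:
                hits.append((lo, child_idx))
        hits.sort()  # children sorting: nearest entry first
        return {"kind": "box", "next": [idx for _, idx in hits]}
    # Triangle (leaf) node: a single ray-triangle test, reduced here to an
    # equality check on a 1-D coordinate.
    return {"kind": "triangle", "hit": node["value"] == ray}

def shader_loop(nodes, ray):
    """Shader side: advances its traversal stack from the engine's results."""
    stack, tri_hits = [0], []
    while stack:
        result = intersect_node(nodes[stack.pop()], ray)
        if result["kind"] == "box":
            stack.extend(reversed(result["next"]))  # keep nearest child on top
        elif result["hit"]:
            tri_hits.append(True)
    return tri_hits
```

Note how the ordered `next` list plays the role of the patent's "list of node pointers which need to be traversed next": the engine decides the order, but the shader still owns the stack.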

103 thoughts on “AMD is working on next-gen software/hardware hybrid ray tracing technology, shares first details”

  1. Speaking of Stranger Things, you don’t even need to have seen the last season which airs July 4th. Just copy the information from other news outlets. Easy money I tell ya.

  2. My avatar spooks me the more I comment on this thread. They keep spreading… Blaukovitch is the monster of my nightmares.

  3. The ONLY thing I like about this is the fact that AMD is calling it what it is, “hybrid ray tracing”, versus Nvidia’s BS marketing of “real-time ray tracing”.

        1. Nope, not that much. It’s pure BS; they charge you $100 more because they didn’t want a price war at all!

      1. Well, AMD has almost the same prices for its GPUs even without RT or Tensor cores (so no ray tracing or DLSS).

    1. Not sure why you think that NVIDIA’s ray tracing is BS. Ray tracing is ray tracing. Yes, we’re not getting fully ray traced games (was anyone expecting that?), but it’s still completely 100% real time ray tracing, for whatever the developers decide to ray trace.

      1. Because Nvidia can’t fully do what they claimed they can do without a huge performance hit. What’s the point of doing something if you have to go from 4K/1440p down to 1080p just to play at a smooth framerate?

        1. I will fully agree that NVIDIA didn’t do a very good job at tempering expectations – they did oversell it a bit for people who didn’t really know anything about ray tracing and why we haven’t seen it before now.
          Yes, it’s a performance hit – did you expect it not to be? We’ve gone from impossible to doing it in real time – that’s a major step, and of course it comes at a cost. Is the cost more than expected? That’s debatable.
          However, it doesn’t change the fact that they are doing real time ray tracing. There are no degrees of ray tracing: you’re either ray tracing something or you’re not. The only adjustable parameters are the quality aspects, with the number of rays and bounces being the primary ones.

          1. The thing is… it’s way too early for them to be doing it. $1200 for the flagship, and all those people who paid top dollar for a 1440p 144 Hz G-Sync monitor or a 4K one now have to turn their $1200 GPU into a 1080p GPU…

            Nvidia should have just waited, focused on rasterization, and sold the 2080 Ti for around $749…

          2. At one point they have to do it, trust me. Even if Nvidia postponed a hardware-based ray tracing solution to 2025, people would still say it is too early to implement and that they should wait for GPUs to become more powerful. The thing is, game complexity is a moving target: every year games get more and more demanding even without ray tracing. Tell me, what would you think if Nvidia had pushed for RT in games in 2014? Did you know that one GPU maker introduced an “RT core” inside their GPU at that time?

            Nvidia could go all out on rasterization performance with the 2080 Ti, but those who have been following GPU progress for years should be aware that CPU progress has been very slow compared to GPUs. With the 2080 Ti, for the first time even one of the fastest CPUs (overclocked, no less) struggles to keep up. It is about time GPU makers started introducing more demanding features for 3D rendering. In a sense this is not so different from what happened back in 2009.

          3. This is the first iteration. Being an early adopter is costly and has limited use. NVIDIA opened the door, which is an important first step. It’s then up to people themselves to decide if they want to invest in it now.
            This technology will get better and cheaper as we move forward, just like any other technology, but for now, yes, it’s very expensive and comes with some drawbacks.
            But the most important part that overshadows everything is that we actually have real time ray tracing in games now. That door is now open and it’ll only get better from here on out. AMD stepping into the game, with an interesting approach, is only going to help things move forward.

          4. Only the thing is, Nvidia took full advantage of it… and people gave them a pass. Now they will use that price point as the new baseline for future cards…

          5. People gave them a pass? There was plenty of uproar…
            You also have to consider that they need to recoup 10 years (as they claimed they have spent) of R&D costs.
            I don’t disagree that there should probably have been some 1080 Ti-beating cards without ray tracing at a more normal price, but then they’d complicate their own marketing, since the main aspect of the new cards was to push ray tracing and make it a focus point. So from a marketing point of view it makes sense that they didn’t launch something like that.
            Will we see this being a new price point? I have no idea. It’s certainly possible, but we could also see next generation of RT cards be a bit cheaper as the technology has matured. Time will tell.

          6. Yeah, a ton of people. The sales numbers prove it. Everybody and their dog is running a 2080 Ti on YouTube, that much is known… So the next Ti is going to be around $1200 as well. Only this time they might give 12 GB of VRAM instead of 11 😀

          7. Don’t get fooled by the vocal minority effect. These cards are far from commonplace. Have they sold plenty of them? For sure, there are always people willing to spend money on the latest and greatest, and be thankful there are, because it drives things forward so these things end up costing less and becoming the norm.
            You can compare it to car technology. New tech sometimes starts as high up as Formula One, or at least in the expensive cars, and then over time basically even the cheapest new car will have it. It’s just how these things work.
            I’m not going to say that the next round of cards will be the same price, because I have no idea. I can see reasons for both things happening, so yeah, we have to wait and see. I wouldn’t be surprised if the price largely stays the same though. 😛

          8. “…now have to make their $1200 GPU into a 1080P GPU”

            Why are some people still saying this nonsense? The RTX 2080 Ti handles 1440p at 60 FPS in RT games and, for example, around 40 FPS at 4K in Battlefield V. For 1080p at 60 FPS you need an RTX 2070, not a 2080 Ti. That is good for a first generation. Definitely not too early.

        2. Still doesn’t change the fact it is done in real time. Performance hit is another discussion altogether.

        1. Learn the difference between ray tracing and path tracing. The whole point of ray tracing is that it can be mixed with rasterization; what you’re thinking of is path tracing.

        1. No, it’s hybrid, as both NVIDIA and AMD call it. Ray tracing isn’t an effect, it’s a rendering method that has existed for decades. It’s true ray tracing whenever ray tracing is happening. It is not path tracing. It is not a fully ray traced game, but it is ray tracing and rasterization working together – hence hybrid. There’s no BS to be found here.

          1. He means the “whole scene” being rendered in RT. Still, RT is RT whatever you’re using it for.

          2. I know. But Ray tracing isn’t defined by whether it’s applied to the whole scene or not. Ray tracing is defined as traced rays.

          3. Yes it is. Both the rays are being traced and the intersection test is happening in real time.

          4. Sure, rays are traced. But, it’s not the holy grail implementation of tracing a ray per every pixel, which is what the end goal is. Until then, it’s not full, true ray tracing. It’s just ray tracing on a very low level.

          5. Sure, but nobody ever claimed we had full scene ray tracing in real time. NVIDIA just claimed they had achieved real time ray tracing, and provided examples using ray traced effects (reflections, GI, etc). And they were correct to do so.

            Essentially, it’s a hybrid of rasterization and ray tracing. And there’s nothing wrong with that, especially when nobody claimed otherwise.

          1. That’s not true at all. Rasterization is the common method by which the image is actually drawn. That’s not an effect; that’s the generation of the frame buffer itself.

          2. You clearly missed the point. 😉
            Rasterization is a rendering method. Ray tracing is a rendering method. They are the same. Neither of them are effects.

          3. The ray tracing here isn’t a screen rendering method, that’s the difference. It’s just filling in world information, like shadows. Sure, it’s rendering a thing, but it’s not rendering the image.
            It’s not per pixel like true ray tracing is, like rasterization is currently.

          4. Sure, it’s not a fully ray traced scene, but that doesn’t mean that it’s an effect. It is, and has always been, a rendering method. It handles rendering of certain things in the scene until we reach the point where we can do it all ray traced (that’ll be quite a while). This does not mean that it’s an effect; it does not constitute an effect. It is a rendering method, and it is handling the rendering of parts of the scene instead of rasterization.
            Again, this is why it’s hybrid. Ray tracing is doing some things and rasterization some other things.

          5. Ray tracing as it is used now is basically a shader. It’s not filling pixels, it’s not even drawing them.
            It’s why they called it a “path traced shader”. And, shaders are effects.
            Just look up raytracing/path tracing shaders, and you’ll see RTX everywhere.
            It’s not per pixel tracing, which is what true raytracing is. Every pixel is still rasterized, entirely. Everything that happens before that is just wedged into the pipeline as a shader or other geometry/shader adjustment.

          6. I think we have different ideas of what constitutes an effect.
            You’re quite right that this is not full blown ray tracing and that rasterization is handling the final image. But there’s still ray traced rendering happening, no matter where in the pipeline it is and no matter what produces the final image. This is true ray tracing. There’s no degrees of ray tracing. You’re either doing ray traced rendering or you’re doing something else. Using an entirely different rendering method (than rasterization) in part of the pipeline is not an effect in my book.
            But at this point I think it’s just splitting hairs. If you want to call it an effect, then so be it. I won’t, because it is so much more than just something tacked on.

      2. Exactly, once again comes the brain-dead AMD brigade. Pathetic really, and so f*ckn dumb. Thanks to Nvidia we now HAVE raytracing. And AMD are good at words, but POS at delivering. I won’t hold my breath, as always their tech is inferior. Nvidia always does it so much better 😉

        1. This doesn’t have to have anything to do with AMD. There’s plenty of people bashing ray tracing simply because they don’t understand what it is, how it works and why it matters. If you’re a gamer you should be excited that the door to real time ray tracing has been opened.

      3. Nvidia’s ray tracing does not cover all elements (only 3, AFAIR); it is limited to reverse RT (calculated from the camera’s point of view, not from the light source’s point of view).

        1. Doesn’t make it any less ray tracing. Just because the rays are going “the opposite direction” doesn’t mean it’s not ray tracing. It’s just a different method of doing it.

    2. There is no BS. Since the very beginning Nvidia said they were doing hybrid rendering (this is what RTX is all about) so they can run real-time ray tracing effects with faster performance on Turing. Go back to Nvidia’s first presentation at GDC when they first unveiled Turing last year.

    3. What’s BS about real time raytracing? That’s literally what they are doing. Tracing rays…. In real time

        1. My comment from below: “But Ray tracing isn’t defined by whether it’s applied to the whole scene or not. Ray tracing is defined as traced rays.”

          Nobody ever claimed that they were doing full scene Ray tracing you walnut. Not even NVIDIA themselves.

  4. DXR raytracing was invented jointly by MS and Nvidia in 2018. All patents to RT cores belong to Nvidia and MS. AMD can’t use those solutions without some kind of agreement with Nvidia or MS.

    MS confirmed that their console will use hardware raytracing, but I think it will be hardware designed by MS, not by AMD. AMD already confirmed that the GPU for Scarlett was created together with MS.

    1. The sheer stupidity of this comment is astonishing! Tell me, how much green d*cks do you suck per day?

  5. Well… Nvidia enabled RT on its Pascal cards (they stink at it, but they did) and put out drivers that enable RT, so let’s see how good this approach is.

    Less talk, more action.

    1. Right. That’s one of AMD’s big downfalls. A lot of Talk and slides. But gamers only care about results.

      1. Exactly. It sounds good; at one point in time we had vertex shaders and pixel shaders… and then they became stream processors that could do everything.

        But I wanna see if it actually works.

      2. Except this is not from a slide, it’s from a patent publication. The problem here is that the article makes no effort to explain what all the technical stuff means. In short, nVidia has a 4:1 cores to RT cores ratio in their cards while this approach theoretically gives AMD a 1:1 ratio by repurposing all unused resources from the texture processors and very little to no need to increase the die area, unlike RT cores.

        1. However, their total ray-tracing performance is strictly inferior to Nvidia’s small set of dedicated hardware, and AMD admits this. In addition, it comes at a fluctuating performance cost, since ray tracing shares hardware with everything else the GPU must do; Nvidia doesn’t suffer from this. AMD’s entire agenda here is to wait for ray-tracing adoption to increase. In the meantime, they know their GPUs are already behind Nvidia’s, so to avoid sacrificing performance in scenes and games that aren’t ray traced, they keep full performance; in scenes that do use ray tracing, you either have a real beast of a GPU relative to a less demanding game, or you must tone your settings down. At the high-end segment, their GPUs will never compete with ray tracing enabled due to this approach. It is rather clever, tbh, since even at 7 nm they struggle just to keep up with Nvidia’s 12 nm, which even sacrificed die space for dedicated RT-only hardware.

          1. You clearly have no clue what you are talking about, nor do you understand what’s going on. Maybe you should learn GPU architecture so you can understand why they are doing this. Hint: no GPU fully uses its texture processors these days, and they will also be customized for this.

          2. Thankfully you made this simple to the extreme. You showed you don’t really comprehend either the architecture or the rendering techniques, and you can’t cover up and fix your comment given how badly you screwed up. It is pretty clear you didn’t even read the statement from AMD in full, which was bad enough.

          3. I actually read the whole patent and, unlike you, I’m not a basement engineer. Now, enlighten me with your knowledge.

          4. Yeah, you clearly read it. Apparently you know better than AMD, who state in that patent that the only advantages over a hardware solution are avoiding the large buffers (which Nvidia offers solutions for in their documentation) and reduced complexity, which Nvidia is not significantly impacted by at this point while AMD is. BVH node traversal also has some real drawbacks. It is a fact that AMD’s implementation will be competing for more resources than Nvidia’s, regardless of how you try to downplay it or claim otherwise. I’m not even going to entertain another reply from you. Argue with someone else.

          5. Actually, they’re pointing out the drawbacks of both software and hardware solutions and how their approach could fix those. But of course, your stupidity doesn’t let you understand how they’re implementing their hardware solution in their approach and how it goes beyond just reducing large buffers. Oh, and their approach can also reduce or eliminate BVH caches, besides many other things. Good try though.

      3. “But gamers only care about results”
        What should we care about? It’s you being weird caring about something else actually.

      4. Yup. Still waiting on TressFX 2.0, which Deus Ex: MD never got, and which was supposed to be superior to HairWorks and do more than just hair.

  6. So when Microsoft claimed at E3 that the next Xbox will feature “hardware accelerated ray tracing,” what they actually meant is that it’ll feature hybrid ray tracing?

    I suppose Phil ‘True 4K’ Spencer will be calling this True Ray Tracing.

    1. We’ll see how it pans out, but I suspect that ray tracing in the next-gen consoles won’t be something to write home about and have very limited use, simply because, well, it’s a console – it’s far from as beefy as a PC.
      But yeah, no doubt they’ll spin BS. I remember when the Xbox One X was being revealed, they plastered “true 4K” all over the games, except you can count on one hand the games that are actually true 4K (meaning they actually render in 4K and aren’t checkerboarded or whatever).

    2. Ray-tracing is ray-tracing. Whether you do it in software or hardware-accelerate it doesn’t change the resulting image, it only affects performance.

      1. That’s not really true is it?

        It will most likely (especially as will be found in the consoles) run with lower quality results.

      2. The first sentence is not true. Ray tracing may be more or less accurate and used in fewer or more facets of graphics (for example, only for ambient occlusion). Results may vary greatly.

  7. AMD is so far behind: the 16 nm 1080 Ti has 20% better performance per watt than the 7 nm Radeon VII.

      1. Nvidia did not sell the GTX 1080 Ti to servers; they have Tesla for that. And market-share-wise, Nvidia has held over 80% in that segment for years. If you think what happened in the gaming market was terrible for AMD, then what happened in the professional market is even more brutal.

        1. Your comment is plain wrong. AMD is doing very well in the server/workstation segment. Just look at the new Frontier supercomputer/server: all AMD.

          1. The AMD turnaround has really taken place around the last 3 years or so with them now going full steam. They will pick up more.

    1. Correct, but we shouldn’t forget AMD is fighting on two frontlines (CPU+GPU), unlike Nvidia pushing out marginal increases. AMD horribly lagging on GPUs kind of makes sense, because the colossal investment leans more toward CPU development rather than perfecting GPUs.

    2. The 1080 Ti was a gaming part. The Radeon VII is both a gaming and a professional card, so the GPU itself needed to be more complex.

  8. So basically an inferior version of what Nvidia has done, and an admission that Nvidia took the correct approach with dedicated hardware, as software options are simply nowhere near capable enough to suffice.

    1. Inferior? Whatever you say, when my RTX 2080 dips to 30FPS @1440p with ray tracing enabled in Quake 2 then I call anyone’s attempt at ray tracing inferior.

      1. And what would it dip to on an AMD card? “Inferior” in this context means “worse than AMD’s solution,” not “running at a lower framerate than I’d prefer.”

        1. I don’t care for hypothetical comparisons, the fact still remains that there are no worthy ray tracing cards out there unless you consider the 2080Ti a good deal. Using the hybrid approach AMD is taking as a means to shill for nVidia and conclude that their approach was the de facto one is what I disagree with.

          1. This is an incorrect assumption. It’s the equivalent of complaining that literally the best PC setup couldn’t handle Crysis at its release: if a game or technique is designed for the future, current hardware may not support it at the highest fidelity levels. In the case of Quake 2’s ray tracing, not only is ray tracing incredibly intensive, but that game has a large number of light sources and possibly excessive ray bounces compared to your average game, among a few other performance-impacting points. Your 2080 not always maintaining 60 FPS at 1440p isn’t odd at all.

            Also, you likely wouldn’t drop to 30 FPS if you disabled v-sync. When v-sync is enabled and you drop below your refresh rate, it cuts FPS in exactly half due to the timing, so what would actually be ~59 FPS turns into ~30 FPS. This is why FreeSync and G-Sync are a big deal: without them, disabling v-sync risks screen tearing.

            The 2060 can handle some games with ray tracing, too. It just depends on how demanding a particular game’s scenes are and what settings you are willing to sacrifice for it (resolution? shadows? a lower ray tracing setting?). Of course, people go “OMG, my GPU can’t handle a game with everything maxed out at 4K and cutting-edge graphics tech like ray tracing, and I’m not willing to drop a few settings to high or medium, or even lower the resolution, to enable ray tracing at a good performance level.” Are you guys possibly new to PC gaming? This is how it has been for two decades, and this is how devs make trade-offs on console games (the current gen can totally hit 60 FPS, but they decide to make more graphically demanding games at 30 FPS for better visuals).

            As for the 2000 series, their ray tracing is undoubtedly still immature compared to what we truly need; I wouldn’t bet too much on their performance for at least another two generations. Nvidia’s and Microsoft’s current ray tracing methods are basically cut-down versions of true ray tracing, with only certain ray-traced features enabled on a per-game basis, because we can’t handle true high-quality fully ray-traced scenes yet. We need even better hardware and wider market adoption before that becomes plausible.

  9. Ray Tracing is a solution without a problem. No developers are fully supporting it for a very simple reason; they have problems enough writing bug free code for cheap games and they do not want any more complications. Developers can’t even write game code to take full advantage of multi-core CPU assets let alone Ray Tracing Cores that are proprietary to nVidia.

    25% of Turing’s very expensive GPU real estate is reserved for ray tracing cores. That is a lot of money to spend on hardware that nobody even uses. That is money wasted.

    https://cdn.wccftech.com/wp-content/uploads/2018/08/NVIDIA-RTX-Turing-GPU_19.png

    1. Ray tracing solves the issues behind many baked effects we are using right now in games. You mention the bug problem; did you know that RT can actually help game developers in this regard? Multi-core CPU utilization is a different problem altogether; you can’t compare it with RT implementation in games.
