
Starfield overperforms on AMD’s GPUs due to an architectural edge over NVIDIA’s GPUs

When Starfield became available, we were among the first to report AMD’s superiority over NVIDIA’s hardware. Back then, as we specifically said in our PC Performance Analysis, Starfield simply favored AMD’s architecture, which is why the AMD Radeon RX 7900 XTX was faster than the NVIDIA GeForce RTX 4090. And… well… we were right.

ChipsAndCheese has decided to take a deep dive into Starfield’s performance and examine why the game performs the way it does. In order to get some solid results, ChipsAndCheese analyzed a scene using NVIDIA’s Nsight Graphics and AMD’s Radeon GPU Profiler.

Now, while I won’t go into a lot of detail (you should read their article, though), ChipsAndCheese has basically confirmed everything that we’ve reported. At lower resolutions, the AMD Radeon RX 7900 XTX is faster than the NVIDIA GeForce RTX 4090. However, at 4K, NVIDIA’s high-end GPU manages to close the gap.

This is mainly due to an architectural edge that AMD has – in this particular game – over NVIDIA’s GPUs. It has nothing to do with drivers, the power that the NVIDIA GPUs draw, or anything like that. NVIDIA’s GPUs perform exactly the way they should, and there is nothing wrong with them.

To put it simply, as Reddit’s mikereysalo explained, AMD chose fewer cores with bigger register files for its GPUs, which means the GPU can keep track of more threads per core. In Starfield, this translates into higher occupancy.

NVIDIA, on the other hand, chose more cores with smaller register files to take advantage of parallelism. However, this results in lower occupancy, which in this particular case can mean lower performance.
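To put some rough numbers on this, here is a minimal Python sketch of how per-thread register usage caps the number of waves/warps a scheduler can keep in flight. The register-file sizes, lane counts, wave/warp limits and the 96-registers-per-thread shader below are illustrative assumptions for the sake of the example, not figures taken from ChipsAndCheese’s measurements:

```python
# Minimal occupancy sketch: how per-thread register usage limits the number
# of waves/warps that can stay resident (and thus hide memory latency).
# All hardware numbers below are illustrative assumptions, not measurements.

def waves_in_flight(regfile_bytes: int, lanes: int,
                    regs_per_thread: int, max_waves: int) -> int:
    """Waves/warps one scheduler partition can keep resident at once."""
    regs_per_lane = regfile_bytes // (lanes * 4)   # 32-bit registers per lane
    return min(max_waves, regs_per_lane // regs_per_thread)

REGS_PER_THREAD = 96  # hypothetical register-hungry Starfield-style shader

# Assumed RDNA 3 SIMD: 192 KB vector register file, 32 lanes, up to 16 waves.
amd_waves = waves_in_flight(192 * 1024, 32, REGS_PER_THREAD, max_waves=16)

# Assumed Ada SM sub-partition: 64 KB register file, 32 lanes, up to 12 warps.
nv_warps = waves_in_flight(64 * 1024, 32, REGS_PER_THREAD, max_waves=12)

print(f"AMD SIMD:             {amd_waves} waves in flight")   # 16
print(f"NVIDIA sub-partition: {nv_warps} warps in flight")    # 5
```

This deliberately ignores allocation granularity, shared-memory limits and other occupancy caps; the only point is that, for the same register-hungry shader, a bigger register file leaves more waves resident to hide cache and execution latency.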

AMD also benefits from higher L2 bandwidth because it has a smaller L2 cache. NVIDIA, by contrast, has a bigger L2 cache, but with lower bandwidth.

As ChipsAndCheese concluded:

“Starfield is a complex workload that sees different demands throughout a frame. In the two longest duration shaders we looked at, AMD was able to leverage its larger vector register file to keep more work in flight per SIMD. That in turn gave it a better chance of hiding cache and execution latency.

However, quantity has a quality all of its own, and it’s hard to argue with 128 SMs sitting on a gigantic 608 mm2 die. AMD may be better at feeding its execution units, but Nvidia doesn’t do a bad job. 128 moderately well fed SMs still end up ahead of 48 very well fed WGPs, letting Nvidia keep the 4K performance crown. AMD’s 7900 XTX uses just 522 mm2 of die area across all its chiplets. To no one’s surprise, it can’t match the throughput of Nvidia’s monster even if we consider wave64 or wave32 dual issue.”

In short, there is little NVIDIA can do to improve performance in this game. The only thing the green team could do was enable reBAR, which brought a 5% performance boost. Beyond that, you should not expect any major performance improvements.

And now you know why Starfield performs better on AMD’s hardware!

64 thoughts on “Starfield overperforms on AMD’s GPUs due to an architectural edge over NVIDIA’s GPUs”

  1. What’s new in this? Who said it’s a drivers problem? The problem is that the developers optimized the game only for AMD because it’s a sponsored game. They chose to leverage the advantage RDNA has, and they did. The 4090 is more capable than the 7900 XTX in raw power and brute force; even AMD admits it.

    1. You don’t have a clue. It’s obvious that the 7900 XTX is very capable if you target its architecture. A lot of games do the opposite. The correct approach is somewhere in the middle.

      1. I didn’t say the opposite. It’s very capable, but if you target something in the middle, the 4090 will be the winner for sure; even AMD said they don’t have an equivalent for the 4090.

    1. Because Nvidia has over 80% of the market on PC, devs have to optimize more for Nvidia. The rare cases where AMD comes out ahead are console ports put on PC that are actually optimized for AMD, COD for example. IW’s latest engine is optimized for AMD GPUs big time. The devs said that themselves.

      1. This makes absolutely no sense. Many indie games use a stock, off-the-shelf engine and don’t know the first thing about “optimizations”. And Nvidia hardware is still faster on all of them.

        1. One of those stock engines is Unreal Engine, and Unreal Engine has a close connection with Nvidia. Tim Sweeney, CEO of Epic, has always leaned toward Nvidia; I don’t know why. Probably the reason is business-wise, Nvidia providing them with their GPUs, or friendship with Nvidia’s CEO. Search Nvidia and Tim Sweeney on YouTube.

        2. Who does these benchmarks? And what engines… Most engines are designed to be optimized for Nvidia. Unreal for the longest time has been purely optimized for Nvidia.

          If a developer, for example, makes a game right now designed for a current-gen console with their own in-house engine and just ports it over to PC, it will by default be in favor of AMD hardware.

        1. That is ONE example. I think of other open-world rasterized games like RDR2, The Witcher 3, even Assassin’s Creed Valhalla, which is AMD-sponsored. They all behave similarly under equivalent AMD or Nvidia GPUs.

        1. Valhalla also favored AMD’s hardware, especially when using older CPUs. The performance gap is nowhere close to what we saw in COD: MW2 and Starfield, but it did run better on AMD.

            1. Trying in this comment since all my replies got flagged: So is there nothing you can do on your end and I am effectively banned? Seems like it…

          2. Perhaps Disqus sees that you write multiple posts in a really short time and flags you. Disqus has a “Spam” section (it’s different to the “Pending” section which we use) but that section is full of spam comments from numerous bots/fake accounts. So we have to examine over 200/300 spam comments to find yours (which is a difficult task to be honest). I suggest getting in touch with Disqus to resolve this and ask them why their system automatically flags your comments as “Spam”. This will be an easier way to resolve this.

            3. Diqsux would rather ban my account than help me, let’s be real here. You could help me solve this since it is affecting real users; wouldn’t it just be about finding one single comment of mine flagged as spam and saying it’s not? Seems like you don’t want to help.

    2. I’ve read a detailed article about the architectural differences and how Starfield makes use of them. Compute shader utilization seems to be RDNA’s advantage over Lovelace.

      https://www.hardwaretimes.com/amd-radeon-rx-7900-xtx-is-faster-than-the-rtx-4090-in-starfield-due-to-better-hw-utilization-and-cache/

      I guess when a game is targeting a feature leaning on an architectural advantage, the GPU that isn’t as good at that thing is going to suffer. Like AMD cards when it comes to ray tracing. It’s no different here with Starfield.

    3. It doesn’t matter if the title is rasterized or raytraced. This is about shader complexity. Starfield’s shaders are really complex, and they need a lot of data, so the GPU needs a lot of registers to get good utilization.

  2. “AMD chose fewer cores with bigger register files”

    Bethesda being unable to use more than a few cores as usual.

  3. That still doesn’t change the fact that Starfield’s DX12 renderer code is coded by people who are not very proficient, as uncovered by Valve’s Linux & Vulkan experts who actually are very proficient, else the Steam Deck wouldn’t be the success it already has become.

    Here’s a quote from the Norwegian lead developer of VKD3D-Proton, which is the part that translates DX12 API calls to Vulkan:

    Add special game perf workaround for Starfield and other DGC junkies:

    The goal of this refactor is to optimize for cases where games (Starfield in particular) uses advanced ExecuteIndirect in very inefficient ways.

    Indirect count, but indirect count ends up being 0.

    Non-indirect count, but none of the active draws inside the multi-draw indirect actually result in a draw.

    Multiple back-to-back ExecuteIndirects are called like this, causing big bubbles on the GPU.
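    To illustrate what those bullet points mean in practice, here is a purely conceptual Python sketch (hypothetical illustration code, not the VKD3D-Proton source and not the real D3D12 API): each batch stands in for one ExecuteIndirect call, and submitting a batch that cannot produce any draws models one of the GPU bubbles described above.

    ```python
    # Conceptual sketch of the pattern described above. Hypothetical code:
    # each IndirectBatch stands in for one ExecuteIndirect call; it is not
    # the VKD3D-Proton implementation and not the real D3D12 API.
    from dataclasses import dataclass

    @dataclass
    class IndirectBatch:
        max_commands: int   # upper bound passed with the call
        actual_count: int   # count fetched from the indirect count buffer
        live_draws: int     # draws in the batch that actually emit work

    def naive_submit(batches):
        """Submit every batch as issued: empty ones still cost a dispatch."""
        bubbles = sum(1 for b in batches
                      if b.actual_count == 0 or b.live_draws == 0)
        return len(batches), bubbles

    def filtered_submit(batches):
        """Skip batches that cannot produce work (the workaround's idea)."""
        useful = [b for b in batches
                  if b.actual_count > 0 and b.live_draws > 0]
        return len(useful), 0

    # Hypothetical frame: back-to-back indirect batches, most of them empty.
    frame = [IndirectBatch(64, 0, 0)] * 6 + [IndirectBatch(64, 8, 8),
                                             IndirectBatch(64, 3, 0)]

    print("naive:   ", naive_submit(frame))     # (8, 7): 8 submits, 7 bubbles
    print("filtered:", filtered_submit(frame))  # (1, 0): only useful work
    ```

    The real workaround obviously operates at the Vulkan/driver level and is far more involved; the sketch only shows why a string of ExecuteIndirect calls that end up drawing nothing still wastes GPU time.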

    Really tells you all you need to know when an external developer can pinpoint the root cause of inefficient programming without source-code access, while the people who wrote the d@mn thing themselves cannot.

    BTW, all the talk of id Software having helped with Starfield is obviously bull$hit, otherwise such a gross oversight in the renderer code simply wouldn’t have happened…

      1. Even without Carmack, id is a very capable studio given how well the past two Doom games run. Bethesda could’ve leveraged their talents properly had they had competent project managers, but Starfield clearly proves that they don’t.

        1. nuDoom performance comes down to baking all the lighting, using fake 6th gen style shadows (or the modern equivalent to that if you want to be generous), and multithreading the CPU code. That’s it. More isn’t needed to get a static, linear shooter running at acceptable framerates on PC. And that’s all they are. Relative to the graphics on display it’s just serviceable performance, nothing special. Doom 2016 ran really badly when it came out. And Doom Eternal looks about the same so of course by now it can deliver high framerates on most PCs. When id do attempt an ambitious feature you get the raytraced reflections which run terribly.
          Also people rave about the Switch port but id didn’t even make that, did they?

          1. Who gives a flying fudge if it’s baked-in or not? As long as it looks good and runs good that’s all that matters.
            And as for Doom 2016 running terribly, only the crappy multiplayer beta was bad. The launch version ran okay, and then it got the Vulkan update 2 months later that made it run great on all GPUs (especially on AMD’s).

    1. I was under the impression that id’s contribution was assistance with tuning the gunplay to make it work/feel better as a shooter?

      1. Nope, id was actually not involved in the gunplay; Bethesda did that part themselves. It was just a few pieces of graphics tech, mainly motion blur. There are also some config file values that suggest some new shadowing techniques, but those don’t appear to have made it into the final game.

    2. Little Todd is a great man; Conman Spencer should learn a thing or two from him.
      Now, the people who think he is “great” as a developer, or the ones who eat his lies, those are truly the imbeciles.

    3. Todd is a businessman and he’s good at his job. Bethesda games are financially successful and that’s what Todd really cares about. He’s great at acting whimsical about games, pretending like Fallout 4 in space is this decade old passion project for him etc. But if you can look past that facade there is the real Todd, counting bills.

  4. This is ridiculous. No Bethesda game has any performance problem with Nvidia GPUs, and the problem will be over at the end of October when the AMD sponsorship terminates.

  5. The 7900 XT is a beast of a card, but I’m happy with the 4080’s performance when FG is enabled in Starfield: everything set to ultra at 4K, 70-90 FPS, what’s not to like? Yeah, it’s not 120+ FPS, but like I said, I’m happy with it since it runs fine, and the 40XX series has the edge in RT, which many titles support nowadays.. so.. kudos to the 7900!

  6. Nvidia when sponsoring a game: sure, you can add FSR too.
    AMD when sponsoring a game: YOU MUST UTTERLY DESTROY NVIDIA, MAKE THE GAME RUN SHxT ON NVIDIA muhahahahaha.

    F*CK AMD, F*CK FLOPFIELD

    1. Lmao, AMD clearly stated that nothing is stopping Bethesda from adding DLSS; in fact, DLSS is officially coming to Starfield. Stop crying.

  7. Nah, I don’t buy that perf can’t be improved for Nvidia cards. What is the definition of performance anyway? High FPS and looks, stable FPS, scalability over the median of systems. Anyway, why would Nvidia care about one game that AMD is probably a development partner in, with real money and know-how contributed? They love games that are heavy and need high-level, expensive cards.

    In a normal release it’s Bethesda that should want the game to run with good performance on more systems anyway; it’s possible they get a cut of the increased sales of AMD cards this quarter, etc. They’ve said DLSS will be in upcoming patches natively, so Nvidia will obviously work on a version of DLSS tuned to the game; no idea if they’d be able to get Bethesda to help with an FG version though.

    Idk though, this whole thing about bad perf is overblown. I’m mostly on medium, shadows low, volumetric low, indirect lighting high, at 1080p with a res scale of around 80 with FSR2 and 88 sharpening, and it’s been smooth most of the time, 45-60. I’ve limited the game to 60 via NCP for system stability, etc. I don’t believe I get less fun out of it than at 2K/4K at the same FPS, tbh.

  8. What the f*k are you talking about? Nvidia was holding back resources for their game-ready drivers in favour of the AI boom. The game driver devs wanted more resources for the drivers, but they were ignored. And not only that, Bethesda was developing Starfield with the help of AMD, so of course it’s going to perform way better.

  9. Nonsense, worst explanation in the galaxy. If you end up dealing with more threads per core you are slower. You must minimize the number of threads per core. Always.

  10. This game is very CPU-heavy, which will cost you a lot of frames on an older or not-so-fast CPU.
    By optimizing their engine to handle CPU calls more efficiently, everyone could gain 10-50 FPS regardless of GPU manufacturer.
    Having both a big CPU bottleneck and AMD GPU preference is sure to piss most people off.

    Good luck waiting for BGS to fix their engine though.
    I can still see the cliff racer AI from Morrowind in their stupid birds in Starfield FFS.
