Bethesda has officially released Starfield on PC and, from the looks of it, owners of Intel Arc GPUs will have trouble running it. According to Bethesda Support, even the Intel Arc A770 does not meet Starfield’s PC minimum requirements.
Bethesda Support claims that the Intel Arc A770 does not meet the minimum requirements, which call for an AMD Radeon RX 5700 XT or an NVIDIA GeForce GTX 1070 Ti. And… well… that’s not accurate at all.
In numerous DX12 and Vulkan games, the Intel Arc A770 beats both of these graphics cards. For instance, in Tomb Raider, the Intel Arc A770 can match the performance of the RTX 3070. Another example is Total War: Warhammer III, in which the Intel Arc A770 beats both the AMD Radeon RX 5700 XT and the NVIDIA GTX 1070 Ti.
According to reports, owners of the Intel Arc A770 can actually play the game, provided they use the latest driver that Intel released a few days ago. However, they may run into corrupted textures and numerous crashes. These are major issues, and Bethesda’s stance seems to imply that the team does not intend to fix them.
Yesterday, Todd Howard claimed that Starfield is optimized for PC. Intel’s GPUs came out in 2022, so there is no excuse not to support them. So I guess Starfield isn’t optimized for the PC after all. That, or it is optimized only for specific CPUs and GPUs.
As we wrote in our PC Performance Analysis, Starfield is a mixed bag. The game currently favors AMD’s GPUs and runs poorly on NVIDIA’s GPUs. Hell, even though the game doesn’t use any Ray Tracing effects, it cannot maintain 60fps on the NVIDIA RTX 4090 at Native 4K. Yet Todd Howard claims that the game pushes the technology. How?
Anyway, this should give you an idea of Starfield’s post-launch support. From the looks of it, we won’t be getting any major performance patches. But hey, at least PC gamers can use mods to enable DLSS 3. Well, OK, that’s only for RTX 40 series owners, but still.
Lastly, I guess this also means we should not expect official support for Intel XeSS. Ironically, a mod has already added support for it. So, great job Bethesda, keep it up.
Stay tuned for more!
Thanks Reddit

John is the founder and Editor in Chief at DSOGaming. He is a PC gaming fan and highly supports the modding and indie communities. Before creating DSOGaming, John worked on numerous gaming websites. While he is a die-hard PC gamer, his gaming roots can be found on consoles. John loved – and still does – the 16-bit consoles, and considers SNES to be one of the best consoles. Still, the PC platform won him over consoles. That was mainly due to 3DFX and its iconic dedicated 3D accelerator graphics card, Voodoo 2. John has also written a higher degree thesis on the “The Evolution of PC graphics cards.”

What is Tanaka smoking?
Some weed that Elon Musk passed around at the last interview.
https://uploads.disquscdn.com/images/fa59e126fc1b2fe76f2a530a8a4b91bf5552a3bb7a496bb36e916975e58503c6.jpg
That’s not Bethesda. That’s a customer support employee who doesn’t know what the fk to answer
So what do they get told in training? If in doubt about the answer then just tell the PC gamer in a nice way to f*ck off, we don’t care.
Yep, but imagine if the title were “Some CS rep, probably outsourced to India, didn’t have a clue about the minimum supported GPUs in Starfield”. It wouldn’t be that interesting, would it?
Interestingly, what WAS outsourced to India was the majority of Starfield development.
Intel Arc graphics cards aren’t listed in the system requirements, therefore they don’t meet the system requirements. It’s not just about the performance of the GPU, but the features of the GPU and also the intention of the dev team when designing the game and the testing done by the QA team.
Once Intel releases driver updates to fix the issues with Starfield on their GPUs, we may see Bethesda re-evaluate their system requirements, but then again we may not. Bethesda may not care to do testing on GPUs that have a fairly small userbase compared to NVIDIA and AMD, and which have a history of buggy drivers (even if Intel has made great strides in stabilizing them and improving performance). Since Bethesda would have to provide support for the game running on those GPUs once they are added to the system requirements, they may also decide against it to cut down on unnecessary support-ticket volume generated by Intel’s driver issues.
As someone who has done both technical support and QA, I know that companies usually only bother testing the most common scenarios they expect their software to be used in, and anything else just doesn’t get tested at all. Doing thorough testing not only costs extra money in manhours, but it slows down production considerably and makes a lot more work for the dev team. Companies often see this as a waste of time and money if the majority of users won’t encounter these bugs.
Honestly… I see it as a first-gen graphics card… though every other game under the sun worked, even if it sometimes took some time to run reasonably.
This has been a very useful article. I follow the site with pleasure. Thank you.
https://uploads.disquscdn.com/images/8aaae920fb9c0080a0fa1159f48a42a2cf20d27f62b473abe00c8aecdb73865d.jpg
I see you’re a man of culture posting classic literature here.
Devs barely test AMD GPUs as it is, let alone the two new worthwhile Intel Arc GPUs.
At any rate, doesn’t matter what the dev says, it’s up to Intel to fix it. Or Mesa/VKD3D on Linux.
I just took a look at the Steam Hardware Survey and it was an eye-opener. Among the almost 100 discrete GPUs listed, not a single Intel discrete GPU was reported as in use.
AMD had only one discrete GPU in the top 25, the RX 580, at a little less than 1% reported usage.
Nvidia has not only dominated the PC gaming market, they now own it. It’s no wonder that they set their MSRPs so damn high and name a card the 4060 when, spec-wise, it’s really an xx50-class GPU.
I have been wondering if Intel plans to drop discrete GPUs entirely. Judging from the lack of offerings and, far more importantly, the lack of marketing, it seems they could soon.
It’s not eye-opening at all. People sometimes use GPUs for 10 years, and most easily use them for 3. Arc GPUs launched in 2022, were panned in reviews and are considered beta-tier products. They aren’t sold en masse yet, and they weren’t even priced that well against the 6600 XT / 3060 for most of that time anyway.
It’s not surprising Arc GPUs aren’t in Steam’s top 100 GPUs.
They release a bunch of assets and the community finishes the job.
Gotta love this industry.
This is really not that simple. I’m also a developer, and I have a lot of issues with GPUs. Even though I have an Arc A750, the card’s D3D12 implementation is not very good, so there is no way to evaluate performance until Intel fixes the issues. What am I supposed to do, write workarounds in the code for every vendor-specific driver problem?
The number one rule now is to develop a D3D12 renderer on an AMD GPU, because their D3D12 implementation is by far the most mature, and they provide a lot of tools to optimize with. From there it is easier to fix the bugs on NV and Intel GPUs, if their drivers are ready. Bethesda will do it too, if Intel provides the necessary fixes.
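To illustrate the kind of per-vendor workaround being described, here is a minimal sketch (not Bethesda’s or the commenter’s actual code) that detects the GPU vendor through DXGI so an engine can gate a driver workaround behind it. The ShouldUseIntelDriverWorkaround helper is hypothetical; the vendor IDs and DXGI calls are standard.

#include <dxgi1_6.h>
#include <wrl/client.h>
#include <cstdint>
// Link against dxgi.lib.

using Microsoft::WRL::ComPtr;

// PCI vendor IDs as reported in DXGI_ADAPTER_DESC1::VendorId.
constexpr uint32_t kVendorAMD    = 0x1002;
constexpr uint32_t kVendorNVIDIA = 0x10DE;
constexpr uint32_t kVendorIntel  = 0x8086;

// Returns the VendorId of the first hardware adapter, or 0 on failure.
uint32_t QueryPrimaryGpuVendor()
{
    ComPtr<IDXGIFactory6> factory;
    if (FAILED(CreateDXGIFactory2(0, IID_PPV_ARGS(&factory))))
        return 0;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapterByGpuPreference(i, DXGI_GPU_PREFERENCE_HIGH_PERFORMANCE,
                                             IID_PPV_ARGS(&adapter)) != DXGI_ERROR_NOT_FOUND;
         ++i)
    {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);
        if (!(desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE))
            return desc.VendorId;
    }
    return 0;
}

// Hypothetical engine-side toggle: only the detection above is real API;
// what gets gated on it (an extra barrier, a different code path) is engine-specific.
bool ShouldUseIntelDriverWorkaround()
{
    return QueryPrimaryGpuVendor() == kVendorIntel;
}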
As a developer you would develop libraries for each vendor-specific performance issue, perhaps with the vendor, and then use those libraries within each engine you work on. Developing for AMD discrete GPUs first makes no sense given AMD’s small market share, unless you are just going to ignore the ‘PC’ altogether.
AMD is the only vendor right now that provides a hardware-based thread-tracing profiler, a memory analyzer and TDR detection. No other vendor gives us this toolset, so it is impossible to catch the deeper bugs in the engine on a GeForce or an Arc; we just can’t see them with their tools.
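As a side note, and separate from the vendor toolset the commenter is talking about: for TDR-style device removals specifically, D3D12 itself exposes Device Removed Extended Data (DRED). A minimal sketch, assuming a recent Windows SDK, with DRED enabled before device creation:

#include <d3d12.h>
#include <wrl/client.h>
// Link against d3d12.lib.

using Microsoft::WRL::ComPtr;

// Call before D3D12CreateDevice so breadcrumbs and page-fault data get recorded.
void EnableDred()
{
    ComPtr<ID3D12DeviceRemovedExtendedDataSettings> dredSettings;
    if (SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&dredSettings))))
    {
        dredSettings->SetAutoBreadcrumbsEnablement(D3D12_DRED_ENABLEMENT_FORCED_ON);
        dredSettings->SetPageFaultEnablement(D3D12_DRED_ENABLEMENT_FORCED_ON);
    }
}

// Call after a DXGI_ERROR_DEVICE_REMOVED / DEVICE_HUNG to see which command
// was in flight and whether a page fault was involved.
void ReportDeviceRemoval(ID3D12Device* device)
{
    ComPtr<ID3D12DeviceRemovedExtendedData> dred;
    if (SUCCEEDED(device->QueryInterface(IID_PPV_ARGS(&dred))))
    {
        D3D12_DRED_AUTO_BREADCRUMBS_OUTPUT breadcrumbs = {};
        D3D12_DRED_PAGE_FAULT_OUTPUT pageFault = {};
        if (SUCCEEDED(dred->GetAutoBreadcrumbsOutput(&breadcrumbs)))
        {
            // Walk breadcrumbs.pHeadAutoBreadcrumbNode to find the last completed op.
        }
        if (SUCCEEDED(dred->GetPageFaultAllocationOutput(&pageFault)))
        {
            // pageFault.PageFaultVA points at the faulting GPU virtual address.
        }
    }
}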
Here is an interesting story about profilers. I changed jobs a year ago, and at my new studio every PC contained a GeForce, mainly Ampere and Turing. I saw some strange behavior in the engine on the first frame, but I was unable to pin down any bug. So I went to the project manager and asked him to buy some Radeon GPUs to change the profiling environment. I got lucky: they gave me 15 new PCs with Threadripper CPUs and RDNA3 Radeon GPUs. After that we managed to catch a very nasty bug inside the pipeline-creation process, which was constantly flushing the GPU cache multiple times before the first draw, without any reason. And this bug was only detectable with the tools provided by AMD; the profiler from NVIDIA didn’t detect it, and there everything looked fine. We fixed the bug and got 20% additional performance on all GPUs. This is why most developers use Radeon GPUs to develop their engine: you can see very nasty, deep bugs that kill the performance.
Using vendor-specific libraries is not the answer. These libraries are constantly changing, so the maintenance cost is very high. We prefer to solve problems inside a standard API, because the D3D12 specification won’t change in the future. So I don’t care how many libraries a vendor can provide to us, because the cost of implementing them is too high, and the project manager always says no to it. He even said no to DLSS 3 a few weeks ago, because NVIDIA requires a Reflex implementation in the license; this is too expensive for us. We are now looking forward to FSR 3 for frame generation, because AMD provides the latency tools in the drivers, so we don’t need to implement anything costly. I hope NVIDIA will do the same in the future, because it is super hard to finance these projects.
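For what it’s worth, a hedged sketch of how vendor upscalers are often kept behind one engine-side interface so the maintenance cost stays contained; every name below is hypothetical and no vendor SDK is actually called:

#include <cstdint>
#include <memory>

// Engine-side, vendor-neutral description of one upscale dispatch. In a real
// engine this would also carry GPU resource handles (color, depth, motion
// vectors, output); omitted to keep the sketch self-contained.
struct UpscaleInputs
{
    uint32_t renderWidth  = 0;
    uint32_t renderHeight = 0;
    uint32_t outputWidth  = 0;
    uint32_t outputHeight = 0;
    float    jitterX      = 0.0f;
    float    jitterY      = 0.0f;
    float    deltaTimeMs  = 0.0f;
};

// One interface; FSR/DLSS/XeSS specifics live behind it, so swapping or
// dropping a vendor SDK only touches one backend file.
class IUpscaler
{
public:
    virtual ~IUpscaler() = default;
    virtual bool Initialize(uint32_t outputWidth, uint32_t outputHeight) = 0;
    virtual void Dispatch(const UpscaleInputs& inputs) = 0;
};

// Stand-in backend: does nothing, but marks where an SDK would be wrapped.
class NullUpscaler final : public IUpscaler
{
public:
    bool Initialize(uint32_t, uint32_t) override { return true; }
    void Dispatch(const UpscaleInputs&) override {}
};

enum class UpscalerKind { None, FSR, DLSS, XeSS };

// Only the null backend is implemented here; real backends would call into
// the respective SDKs behind the same interface.
std::unique_ptr<IUpscaler> CreateUpscaler(UpscalerKind kind)
{
    (void)kind;
    return std::make_unique<NullUpscaler>();
}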
So you are claiming that PC development costs too much to develop for the 80% PC market share, while the price of the end product has rocketed? You are also planning to use FSR 3 exclusively, a product not even released yet, even though an open-source project already exists allowing DLSS, FSR and XeSS to be added via one API? While I agree having a low-level profiler is great for debugging low-level code, I certainly wouldn’t expect to need it to find a bug that caused a 20% performance loss. Optimisation is one thing, but that’s just poor understanding / coding to begin with and not something you would expect from an engine developer.
Yes. Our management’s first question about ideas like DLSS is how many sales it will generate. And I have to say: none, because it’s just a tiny little extra, and we are already using a standard alternative in the first place. So the answer is that we won’t spend money on it if we cannot turn it into sales.
Good luck convincing these people, who are working to minimize spending while maximizing profit.
When the Radeon GPU Profiler was released, we heard a lot of stories about how much performance it found in a lot of engines. It simply sees bugs that are normally hard to find, so we are not the only ones. That’s why most engine developers across the industry use Radeons: the tools that AMD provides are just more robust, and the D3D12 driver itself is more mature.
You’re not going to convince Wrinkly as he’s biased against AMD.
This is an AMD sponsored title. The same AMD that will see Intel as real competition within the low end discrete GPU market. The same AMD that paid for that sponsorship deal which included focus on AMD hardware and software.
LMAO… the shitshow is getting bigger!
Really f*kn crazy 🤣
“Bethesda Support claims that the Intel Arc A770 does not meet the minimum requirements, which call for an AMD Radeon RX 5700 XT or an NVIDIA GeForce GTX 1070 Ti. And… well… that’s not accurate at all.”
This is a nonsensical statement; of course it’s accurate. In order to meet the minimum requirements, the hardware needs to go through their QA process. Obviously they have not tested Intel’s card, let alone addressed the issues it has.