Ashes of the Singularity left its Steam Early Access program a couple of days ago, and Stardock has provided us with a review code for it. Ashes of the Singularity uses both DX11 and DX12, and it’s time to see how this strategy game performs on the PC platform.
For this Performance Analysis, we used an Intel i7 4930K (turbo boosted at 4.0GHz) with 8GB RAM, NVIDIA's GTX980Ti and GTX690 GPUs, Windows 10 64-bit and the latest WHQL version of the GeForce drivers. While NVIDIA has not added an SLI profile for Ashes of the Singularity, Oxide's title supports multi-GPU configurations natively; those with SLI/Crossfire systems can enable multi-GPU support via the game's video options.
Let's start with our GTX690. This particular card was not able to offer a constant 60fps experience during the benchmark. Even when we dropped the graphics details to Standard, we were averaging around 40fps. Do note that the benchmark represents a worst-case scenario; still, there may be some huge in-game battles in which your framerate will be similar to the benchmark's.
But can the GTX980Ti provide a constant 60fps experience when all the bells and whistles are enabled? Unfortunately, it cannot. In DX11, our GTX980Ti was able to push 42.7fps on Crazy settings at 1080p (with MSAA disabled) and 51fps on High settings. For a strategy game, these framerates are completely acceptable; however, those seeking to run the game on Crazy settings at a constant 60fps will have to invest in a high-end system with a really high-end CPU and multiple high-end GPUs.
But what about DX12? Well, to our surprise, Ashes of the Singularity ran better under DX12 on our system. In DX12, our GTX980Ti was able to push 43.8fps on Crazy settings at 1080p (without MSAA), and 59fps on High settings. Do note that our GPU was being pushed to its limits in both DX11 and DX12, though there were some scenes in which the GPU usage was dropping for no apparent reason.
As we can see, there is an 8fps performance boost in DX12 on High settings. We also witnessed a similar performance boost when we ran the benchmark at the lowest available resolution (while keeping the Crazy settings).
What we also found really interesting with Ashes of the Singularity was the benchmark's Average CPU Framerate stat. According to the benchmark, our CPU averages around 62fps. This number reflects the framerate that could be achieved if the CPU were not bottlenecked by the GPU. In other words, our hexa-core is not powerful enough to avoid drops below 60fps. So, while we are GPU limited on Crazy settings at 1080p, we are actually CPU limited on High settings at 1080p.
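The Average CPU Framerate stat can be read through a simple min() model: the framerate you actually see is capped by whichever of the CPU and GPU finishes a frame last. Here is a minimal sketch using the numbers above; the effective_fps helper and the GPU-alone figure for High are our own illustration, not something the benchmark reports:

```cpp
// Effective framerate is capped by the slower side: the fps the CPU
// could sustain on its own vs. the fps the GPU can render on its own.
#include <algorithm>
#include <cstdio>

static double effective_fps(double cpu_fps, double gpu_fps) {
    return std::min(cpu_fps, gpu_fps);
}

int main() {
    const double cpu = 62.0;   // the benchmark's "Average CPU Framerate"
    // Crazy @ 1080p: the GPU-alone figure (43.8) is below the CPU's 62,
    // so the GPU is the bottleneck.
    std::printf("Crazy: %.1f fps\n", effective_fps(cpu, 43.8));
    // High @ 1080p: a hypothetical GPU-alone figure above 62 shows how
    // the CPU becomes the cap instead.
    std::printf("High:  %.1f fps\n", effective_fps(cpu, 75.0));
}
```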
In order to find out how the game scales on various CPUs, we simulated a dual-core and a quad-core CPU and tested the benchmark in DX12 mode. Without Hyper Threading, our simulated dual-core system was unable to run the benchmark smoothly due to severe stutters; with Hyper Threading enabled, it was able to push 40fps on High settings. Our quad-core system, on the other hand, ran the benchmark at 52fps with Hyper Threading disabled and at 57.7fps with Hyper Threading enabled. Last but not least, our hexa-core system pushed the same average framerate with and without Hyper Threading (62fps).
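The article does not spell out how the smaller CPUs were simulated; one common approach on Windows is to restrict the benchmark's CPU affinity so it only sees a subset of the logical processors. A hedged sketch using the Win32 SetProcessAffinityMask call (the mask assumes logical processors are numbered from 0; exact numbering depends on the system and on whether Hyper Threading is enabled):

```cpp
// Windows-only sketch: restrict the current process to four logical
// processors to approximate a quad-core CPU. Child processes launched
// from here (e.g. the benchmark) inherit the restricted affinity.
#include <windows.h>
#include <cstdio>

int main() {
    const DWORD_PTR quad_core_mask = 0xF;   // bit i enables logical CPU i
    if (!SetProcessAffinityMask(GetCurrentProcess(), quad_core_mask)) {
        std::fprintf(stderr, "SetProcessAffinityMask failed: %lu\n",
                     GetLastError());
        return 1;
    }
    std::puts("Process limited to logical processors 0-3.");
    return 0;
}
```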
From the above, it becomes obvious that a single GTX980Ti is not able to max out Ashes of the Singularity. Not only that, but the gains between 8 and 12 threads are minimal, even though the game distributes its load across all 12 threads. Furthermore, even when turbo boosted at 4GHz, an Intel i7 4930K will bottleneck high-end multi-GPU configurations.
Ashes of the Singularity is a really demanding title to max out. Graphics wise, the game shines when lots of units are on screen, as those battles are really amazing, with lots of effects, smoke particles and explosions happening simultaneously. However, apart from its grand scale, it does not push visuals to the next level. Yes, the game looks beautiful and has cool particle effects, but there is almost nothing to really 'wow' you: some textures look blurry when zoomed in, and some structures and units look a bit blocky up close. Perhaps we expected a bit more from it. Still, it's a bit underwhelming witnessing NVIDIA's high-end GPU being unable to max out a game with this kind of visuals.
Enjoy!

John is the founder and Editor in Chief at DSOGaming. He is a PC gaming fan and strongly supports the modding and indie communities. Before creating DSOGaming, John worked on numerous gaming websites. While he is a die-hard PC gamer, his gaming roots can be found on consoles. John loved, and still does, the 16-bit consoles, and considers the SNES to be one of the best consoles. Still, the PC platform won him over. That was mainly due to 3DFX and its iconic dedicated 3D accelerator graphics card, the Voodoo 2. John has also written a higher degree thesis on 'The Evolution of PC Graphics Cards.'
The first actual DX12 game release. Sorry Microsoft, you blew it with Gears of War :/
I hope those screenshots are from the “low” setting, because they look horrible.
And you took an AMD-partnered game and did not use AMD parts in the review 🙂
They don’t have infinite resources… Give them some money via Patreon and your prayers will be answered :).
They have enough money to buy a GTX 690, then a 980 Ti and a 4930K, but when it comes to AMD hardware, the "we don't have infinite resources" argument drops in. Yeah, great. AMD cards are generally cheaper than their Nvidia counterparts, and it doesn't break the bank to buy a 390 or Fury (non-X) for some comparative benchmarking, especially when you're running a PC website popular enough to get free review codes and copies. To put it simply, they are Nvidia biased and only cover AMD for the sake of formality.
This performance analysis is also biased. For a strategy game it does look impressive, despite some blurry textures (strategy games are not meant to be played with super zooming), and it has some really cool technology under the hood; for example, which other game lets you pick multi-GPU via its own options? But he didn't say a single praising word in that regard. He's only concerned with how it performs on Nvidia GPUs. Does he say the same when several Nvidia titles don't perform well on AMD? No, he doesn't, because he doesn't care about AMD.
People are salty here because the game is demanding and doesn't run too well on their beloved Nvidia hardware. We'll see how these fanboys react when Nvidia makes a DX12 title with GameWorks integrated; most probably it won't run well on AMD, but then people will say it's AMD's fault. Bunch of hypocrites!
Nvidia represents 80% of the discrete GPU market. AMD represents only 20%.
So stop complaining that this website should spend its resources on buying AMD parts. I don't see many cards in this test anyway, so it's not just that they preferred Nvidia cards; there's no GTX 970 or any Kepler card here, only a (by now) very old GTX 690 and Nvidia's current flagship. They don't have infinite resources, like they said.
This website has been running since AMD had more market share than 20%, and from the beginning they have only covered Nvidia, so your argument doesn't make sense. Secondly, it doesn't matter, because there are only two prime competitors in the discrete GPU market, AMD and Nvidia, so covering only one is bull when you call yourself a PC gaming website. Their performance analyses are quite funny because they always throw that obsolete 690 into the test and, when it can't keep up, blame the game's optimization.
When PCSS Ultra didn't manage 60fps on a 980 Ti in ACS, everyone said it's demanding tech; but now that this game isn't running at 60fps on a 980 Ti, they blame the game, just because it was developed with AMD. Good going.
Also, they should call it an Nvidia performance analysis rather than a PC performance analysis, because AMD GPUs are part of PC gaming whether you like it or not. If they can afford a 980 Ti and a 4930K, then they can also get a Fury, so stop making stupid excuses to defend them.
Funny, isn't it, how The Division is an NVIDIA GameWorks title and it runs better on AMD GPUs. Where are the "NVIDIA is gimping AMD performance" comments now, when AMD didn't even need a driver update?
The Division doesn't run better on AMD hardware; it just runs equal to Nvidia, which is a good thing. The R9 390 is a better card than the 970 on every point, so it's not surprising if it outperforms it. And don't forget that some Ubisoft engines prefer more bandwidth, which is why Far Cry 4, Watch Dogs and now The Division run fine on AMD; it's not like Nvidia is lagging behind or AMD sabotaged something to make Nvidia's performance bad.
Additionally, you're forgetting that Ubisoft is moving away from Nvidia. If the missing HairWorks in FC: Primal wasn't enough evidence for you, look at AMD's Capsaicin event and what Ubisoft announced there. It's a good decision on their part; the more devs move towards open technologies rather than black-boxed standards, the better.
Are you absolutely positive it was purchased and not donated? Also, if they had enough for those pieces of hardware, what makes you think they'd have enough left over for an AMD piece of hardware? As someone who uses AMD hardware exclusively, I'd recommend you look elsewhere for benchmarks.
As an Nvidia user, I don't come here seeking news of either Nvidia or AMD; I come here for PC gaming news. If they are making a performance analysis and calling it a "PC" performance analysis, then they should include at least one recent AMD card. I'm not saying they should do a benchmark like some of those big websites with access to the entire range of Nvidia and AMD cards, but adding one card is not a big deal, especially when they have two Nvidia cards, one of them being the most top-tier.
Additionally, saying that it doesn't have anything to "wow" you? Seriously, what are you expecting from a strategy game? Do you want it to have heavy tessellation, depth of field, VXAO-like ambient occlusion, PCSS shadows, foliage and 4K textures to wow you? The game shines where a strategy game should shine: it has tons of units, depth, explosion effects, grand scale, good AI and so on. The campaign disappoints, yes, but multiplayer is amazing. It also has great graphics when you keep in mind that it's an RTS.
If you have such an issue with someone's reviews, skip the page. You aren't forced to come see their reviews. Think of all the other sites with their benchmarks and reviews, and you choose the one place to comment that doesn't do what you want. Think about that for a second.
These guys are pretty tiny compared to WCCFTech. You could also check out PC Perspective, who've had some of the best reviews of this game since at least August.
I would like to see AMD's 980 Ti counterpart in these benchmarks for comparison.
Interesting. Somehow I was hoping for more free FPS with DX12. 8 more for the best card in the world is a bit... yeah.
Still, free fps is free, so yay! Even if it gave 5 more, it would still be worth it. I remember gaming at 25 fps; that was really laggy, and every time the fps went up to 30, it got so much smoother (that was around 2004). That extra 8 fps would make a big difference for some people!
I do wonder how much fps a 760 would gain from it in a game like this one.
It's because no AMD card was tested here; otherwise you'd see much different results.
Ran the same tests on my computer.
i7 4790K@4.6GHz
16GB DDR3@2400MHz
MSI R9 390X @ 1100/1625MHz (Crimson 16.3.2)
Dx11 High@1080p = 53fps
Dx12 High@1080p = 63fps
Dx11 Crazy@1080p (no msaa) = 39.2fps
Dx12 Crazy@1080p (no msaa) = 40.7fps
I noticed that my CPU fps in DX12 are a lot better than DSO's:
High = 106fps
Crazy = 100.7fps
So a better boost (by a little) for AMD, but nothing fantastic.
This is weird. Is async compute activated in AMD's drivers?
Yes, but I don't know whether AMD optimised its DX11 drivers or Oxide optimised the DX11 version of the game. I benchmarked the Early Access build a month ago at 1080p Crazy (no MSAA either) and got these results:
Dx11 = 32.6fps (now : 39.2fps)
Dx12 = 38.5fps (now : 40.7fps)
I've also noticed that on the older build/drivers, all the batch fps were better in DX12. Now, with the latest build and Crimson 16.3.2, only the Medium and Heavy batches give an fps boost. (Same for DSO with the GTX 980 Ti.)
There's a range in which 8 fps matters more than in, let's say, another range. If you have 100 fps and get an 8 fps boost, that's an 8% increase over the old API. But in his case he gets 8 fps while hovering around 50 fps, which makes it a 16% increase. That's very nice! So for his setup, demanding games hovering in that fps range will get roughly a 15% performance boost from the new DX12 API, depending on the developer's implementation. A less bottlenecked system wouldn't benefit as much, but it would still benefit (much less in terms of % efficiency).

Microsoft really does a good PR job on Win10. Looking at the Steam numbers, they clearly learned from their bad Windows 8 launch. It's nice of them to push DX12 to as many devs/engines as possible and to say it's for gamers etc., but we all know this business is about money, and a big part of gamers wouldn't change their Windows if the DX12 incentive wasn't there. More gamers on W10, more money for Microsoft. After their Windows Store failure, which is STILL a failure as we speak, they've listened to the community, and in the next anniversary update (due in May, I think) they'll get even more customers on their new OS, since they're enabling G-Sync/FreeSync/overlays/ini tweaks. That patch is what sold me on W10. More people on their new OS means more herd testing, which means more bug/incompatibility crunching, all to the benefit of everybody. But at the end of the day, money talks.
Don't even know why I wrote that book, lol. Anyways, it would have taken longer to delete it than to post it.
Cheers.
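To make the fps arithmetic above explicit: the same absolute gain means a different relative gain depending on the baseline. A one-line sketch (the gain_percent helper is our own name, not anything from the game or API):

```cpp
// Relative gain of the same +8 fps at two different fps baselines.
#include <cstdio>

static double gain_percent(double base_fps, double boost_fps) {
    return 100.0 * boost_fps / base_fps;
}

int main() {
    std::printf("+8 fps at 100 fps: %.0f%%\n", gain_percent(100.0, 8.0)); // 8%
    std::printf("+8 fps at  50 fps: %.0f%%\n", gain_percent(50.0, 8.0));  // 16%
}
```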
I actually enjoyed reading it, and everything you said was on point, more or less I guess, from the Store to DX12. I'm about to go to W10 too, just waiting for Pascal to come out. That's what worries me though: once I go to Pascal, I won't really care about free FPS and bonus gains. I will in probably 2-3 years, but not in the near future. I mostly want DX12 for my old card.
Weird. With an i7 6700K and a 980 Ti, Digital Foundry said that in DX12 mode they actually got around 10-15% less performance.
This is because Maxwell, and the upcoming Pascal, are geared towards DX11. In DX12 they keep the benefits they had in DX11 ("multithreading", tessellation and the like); however, having to perform a software context switch for async compute causes the cards to lose some momentum. In DX12 Nvidia titles you'll see a lack of async usage.
I meant that Digital Foundry also used a 980 Ti, like here, yet in this benchmark DSO actually gained performance in DX12 while Digital Foundry at Eurogamer lost performance. My theory? Digital Foundry used an i7 6700K (4 cores/8 threads) while DSO used a 4930K (6 cores/12 threads). I think DX12 gives you more performance if you have more physical cores.
DX12 scales beautifully with more cores, very much so. Just remember not all DX12 titles are equal; there are a ton more features available that may or may not be included depending on partners and the like, even more so than with DX11.
One very cool thing about DX12/Vulkan loving more cores is the way multithreading works in these APIs compared to previous ones: it's much more efficient. The work is genuinely parallelized, as opposed to taking serialized operations and balancing them across cores. So you'll still see all the cores being used, but the way they're being used is much, much better; more functionality and a whole lot of awesome for us gamers.
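The parallel-recording idea behind that comment can be sketched outside any real graphics API. Below is a minimal C++ toy (Command, record_chunk and the rest are illustrative names, not D3D12/Vulkan calls) showing the pattern these APIs enable: each thread records its own command list with no shared mutable state, and only the cheap submission step is serialized:

```cpp
// Toy model of DX12/Vulkan-style multithreaded rendering: worker threads
// record draw commands into their OWN command lists in parallel, and the
// main thread submits the finished lists in a fixed order.
#include <cstdio>
#include <functional>
#include <string>
#include <thread>
#include <vector>

struct Command { std::string desc; };   // stand-in for a GPU command
using CommandList = std::vector<Command>;

// Each thread fills its own list -- no locks needed.
static void record_chunk(CommandList& list, int first_unit, int count) {
    for (int u = first_unit; u < first_unit + count; ++u)
        list.push_back({"draw unit " + std::to_string(u)});
}

int main() {
    const int threads = 4, units_per_thread = 8;
    std::vector<CommandList> lists(threads);
    std::vector<std::thread> workers;

    // Parallel recording: the expensive CPU work runs on all cores.
    for (int t = 0; t < threads; ++t)
        workers.emplace_back(record_chunk, std::ref(lists[t]),
                             t * units_per_thread, units_per_thread);
    for (auto& w : workers) w.join();

    // Cheap, ordered "submission" on the main thread.
    for (const auto& list : lists)
        for (const auto& cmd : list)
            std::printf("%s\n", cmd.desc.c_str());
}
```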
There are a lot of units on screen (though I can recall another RTS that had a similar unit count), but as for the actual material shading/rendering, I don't see what the benefit of the all-new object-based rendering technique is supposed to be, other than bringing MSAA back. The game looks like a cartoon. The materials aren't convincing at all, tbh; everything looks like clay. The ground doesn't look like sand or ice etc., it looks like plastic or clay or whatever, like everything else. Look at the buildings and vehicles and tell me what material they are supposed to be made of. Hell, it's hard to even grasp their shape in some cases. And the ground is blurry in some shots. That's most of the screen, blurry…
Windows10 + DX12 = Catch-22
Where do we vote for top kek? Your post is today's top kek imo.
That's the spirit. This game is a glorified DX12 benchmark with added mini-games; it lacks all the elements needed to make it a proper Supreme Commander.
Ohh yea my boy, we need a new Supreme Commander 😀
I can attest it plays well in 3v3, on larger maps, massing units, compared to SupCom. Although I loved SupCom, the series' biggest downfall for me was the immense lag, even in player vs. player matches. Running an FX-8320 + R9 390 w/ 8 gigs of RAM.
I still get 1fps less in DX12 than in 11, and terrible input latency when moving the camera with the keyboard in both DX11 and 12.
Nvidia?
yep
Sadly, that doesn’t have the non-Fury testing and the DX12 testing of AMD chips (Oxide said AMD octacores rivaled i7s in their internal testing) that I’m hoping for.
No AMD CPU for me, but some results for the 390X:
With DSO settings:
Dx11 High@1080p = 53fps
Dx12 High@1080p = 63fps
Dx11 Crazy@1080p (no msaa) = 39.2fps
Dx12 Crazy@1080p (no msaa) = 40.7fps
With Digital Foundry settings:
Dx11 Extreme@1080p (no msaa) : 44.2fps
Dx12 Extreme@1080p (no msaa) : 54.5fps
i7 4790K@4.6GHz
16GB DDR3@2400MHz
MSI R9 390X @ 1100/1625MHz (Crimson 16.3.2)
My results in Crossfire…
On High we can see that the fps are the same for CPU and GPU.
Oh good! That's good news. If AMD can catch up to Intel in terms of performance, we would all benefit.
They'll catch up enough to Intel that it won't even matter. There will still be arguments as to who's better, but as far as gaming is concerned, Zen will hold its own no problem, without the bottleneck the current series sees. Especially if they price it smart.
Yeah, I saw. Nice. But I still wonder how come the gap isn't bigger than that? In the Digital Foundry video the gap between the Fury X and the 980 Ti was much bigger, which is what I was expecting.
What software do you use to show in-game statistics for your GPU/CPU usage, temps and FPS?
I love how they do reviews using top hardware while most people have crap PCs, according to the Steam hardware survey. If that isn't marketing, then what is it? Honest reviews should use mid-range hardware from 1-2 years back.
Nicely said, since DX12 benefits weaker hardware.