Until recently, the performance gains to be found in DirectX 12 were merely guesses; a few synthetic benchmarks used DX12, but nothing indicated how it affects game performance. Now that Ashes of the Singularity is available in Early Access (review coming soon), we can actually measure how DirectX 12 affects in-game performance.
Fortunately for us, Ashes of the Singularity ships with a really handy benchmarking tool, so we can easily show the difference in performance between DirectX 11 and DirectX 12.
First of all, here are my specs:
CPU: Intel i7 4790K
GPU: NVIDIA 780Ti
RAM: 32GB
OS: Windows 10
Storage: SSD
For these tests I set my entire system back to its base clocks so that the benchmarks would be stable and consistent across the board. Here are the results:
DirectX 11
== Sub Mark Normal Batch =================================================
Total Time: 60.226936 ms per frame
Avg Framerate: 25.453726 FPS (39.286980 ms)
Weighted Framerate: 24.837635 FPS (40.261482 ms)
Average Batches per frame: 9831.856445 Batches
== Sub Mark Medium Batch =================================================
Total Time: 61.932709 ms per frame
Avg Framerate: 19.666506 FPS (50.847874 ms)
Weighted Framerate: 19.569185 FPS (51.100746 ms)
Average Batches per frame: 21690.908203 Batches
== Sub Mark Heavy Batch =================================================
Total Time: 57.855625 ms per frame
Avg Framerate: 18.822716 FPS (53.127296 ms)
Weighted Framerate: 18.215027 FPS (54.899727 ms)
Average Batches per frame: 46332.730469 Batches
DirectX 12
== Sub Mark Normal Batch =================================================
Total Time: 60.092304 ms per frame
Avg Framerate: 32.350231 FPS (30.911678 ms)
Weighted Framerate: 31.475912 FPS (31.770327 ms)
CPU frame rate (estimated if not GPU bound): 104.143234 FPS (9.602160 ms)
Percent GPU Bound: 100.000000 %
Driver throughput (Batches per ms): 5092.099121 Batches
Average Batches per frame: 10025.499023 Batches
== Sub Mark Medium Batch =================================================
Total Time: 62.053162 ms per frame
Avg Framerate: 24.946350 FPS (40.086025 ms)
Weighted Framerate: 24.651777 FPS (40.565025 ms)
CPU frame rate (estimated if not GPU bound): 84.904152 FPS (11.777987 ms)
Percent GPU Bound: 100.000000 %
Driver throughput (Batches per ms): 6828.028320 Batches
Average Batches per frame: 22584.240234 Batches
== Sub Mark Heavy Batch =================================================
Total Time: 57.862873 ms per frame
Avg Framerate: 22.812555 FPS (43.835510 ms)
Weighted Framerate: 22.597780 FPS (44.252136 ms)
CPU frame rate (estimated if not GPU bound): 53.867039 FPS (18.564228 ms)
Percent GPU Bound: 99.935158 %
Driver throughput (Batches per ms): 5368.355469 Batches
Average Batches per frame: 49178.707031 Batches
You will notice that the DX12 benchmark reports a few extra metrics; these show where the system is being bottlenecked. In the case of Ashes of the Singularity the bottleneck is my GPU, but the game is still in Early Access, so there will likely be further improvements down the line.
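For anyone who wants to sanity-check the numbers: the "CPU frame rate" line is simply 1000 ms divided by the estimated CPU time per frame (for the Normal batch, 1000 / 9.602160 ≈ 104.14 FPS), and the DX11-to-DX12 gains work out as below. This is a throwaway C++ sketch, not part of the benchmark tool; the figures are copied straight from the output above.

#include <cstdio>

// Average framerates copied from the benchmark output above.
struct SubMark { const char* name; double dx11Fps, dx12Fps; };

int main() {
    const SubMark marks[] = {
        {"Normal Batch", 25.453726, 32.350231},
        {"Medium Batch", 19.666506, 24.946350},
        {"Heavy Batch",  18.822716, 22.812555},
    };
    for (const SubMark& m : marks) {
        // Relative improvement of DX12 over DX11 for the same sub mark.
        double gainPct = (m.dx12Fps / m.dx11Fps - 1.0) * 100.0;
        std::printf("%-12s  DX11 %5.2f FPS -> DX12 %5.2f FPS  (+%.1f%%)\n",
                    m.name, m.dx11Fps, m.dx12Fps, gainPct);
    }
    return 0;
}

That works out to roughly +27.1% (Normal), +26.8% (Medium) and +21.2% (Heavy): a sizeable gain even while fully GPU bound.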
Unfortunately, I could not test how VRAM is handled differently in DirectX 12, as I have a single-card setup. Only time will tell how DX12 will impact performance in future releases.
Matt Followell is another contributing author here at DSOGaming. A long-time fan of PC gaming and a huge supporter of the open-source and homebrew movements, you'll see him interacting with the community from time to time under the username Radapples.
Where's the AMD bench? It's DX12 we're talking about here, test AMD!
forums.overclockers.co.uk/showthread.php?t=18690671
They are scared to do an AMD bench. They are biased towards Intel and Nvidia.
How can they be scared of AMD but biased towards Nvidia? You mean biased towards AMD.
They are scared of AMD because now they have legit competition at half the price. Did you not see the Fury X become as fast as the 980 Ti with DX12?
That I did not, no, sadly. But the real question is, how fast does the 980 Ti become with DX12, eh? 🙂
Nvidia doesn't support async compute.
http://www.extremetech.com/gaming/212314-directx-12-arrives-at-last-with-ashes-of-the-singularity-amd-and-nvidia-go-head-to-head
With async compute, AMD does way better in DX12 tests.
I don't think AMD would lie like Nvidia does. DX12 games are partnering with AMD:
http://www.fudzilla.com/news/graphics/38679-dx12-games-partnered-with-amd
EA wanting DX12/Win 10 as a minimum requirement for their games also looks good for AMD:
http://www.kitguru.net/gaming/matthew-wilson/ea-frostbite-engineer-wants-games-to-require-win10dx12-by-2016/
Sadly, and to my great surprise, AMD is not on top; something must be wrong here. They're supposed to be more advanced than NV on asynchronous compute, judging from these benchmarks.
“FR = Framerate”
— NVIDIA —
***GPU TitanX*** @1409mhz/1652mhz, Average FR 48.8, Normal FR 57.1, Medium FR 48.8, Heavy FR 42.6, CPU 3930k @4.4ghz
***GPU 980 Ti*** @1484/1974, Average FR 47.3, Normal FR 58.7, Medium FR 46.2, Heavy FR 40.5, CPU 5820k @4.625ghz
***GPU 970*** @1450mhz/1900mhz, Average FR 21.7, Normal FR 27.2, Medium FR 20.8, Heavy FR 18.7, CPU FX-9590 @4.6ghz
— AMD —
***GPU Fury X*** @1120mhz/560mhz, Average FR 43.4, Normal FR 49.3, Medium FR 42.1, Heavy FR 39.9, CPU 3770k @4.2ghz
***GPU 290P*** @1285mhz/1500mhz, Average FR 33.6, Normal FR 41.0, Medium FR 33.9, Heavy FR 28.3, CPU FX-9590 @5.0ghz
_______________________________________
Conclusion: these test beds aren't THAT good either. Somebody with a Titan X and a Fury X should test on the same system; there are too many moving variables here. But surprisingly, NV is on top. Did they release their software asynchronous compute drivers already? Am I missing something?
Interesting. Indeed. Can’t wait to see that happen. We need that competition strong.
But will it dominate Pascal? And how will the 970 perform once proper drivers are out? That is the real question, because I am holding off on trying Ashes until those drivers hit.
Funny how you say that, yet it sounds like you don't know what you are talking about… The only thing that is geared towards servers is NVLink; that's the only true thing when it comes down to it. Everything else is about stronger DP and DC.
And the drivers are not fully out since AotS is not fully out. Come on now…
Yeah, that's a minor part… To think they are not gonna go for more than just making servers better, well, lmao… That's just insane…
Yeah, but you seem very one-sided about the situation… I mean, AMD and Nvidia build better architectures every 2-3 years… You did not see people complain when the 290 (non-X) was kicking the crap out of the 7970 by 20 frames in some games, but they sure did when the 970 was beating the 780 Ti by 2-5 fps…
People say the 400 series, but you are forgetting that Fiji is a new architecture… a late 28 nm architecture, but a new one all the same… And what will the 400 series be? Well, nobody knows… For all we know it could be an enhanced Fiji…
Well, it will be fun to see what AMD brings to the table come 2016 versus Pascal, that's for sure.
Yes indeed. Thing is, when they were showcasing the Fury X with HBM etc., I saw on many sites/reviews (pre-release) that it would be the Titan X killer. What happened? They didn't kill jacksh**. Let's hope they bring a bigger hammer this time around to make Nvidia a little scared. I really like AMD's vision, more open to sharing tech, but the products/drivers aren't there for me. They'll need something huge.
The new Crimson is actually quite good. There are lots of features I would love to see which aren't there at the moment, but the whole concept of application-based profiles is very intuitive. This new driver, with lots of new features and much more coming, is kind of huge. The very same could be said about HBM, and about GCN, which might not seem very efficient now but will be on a low-level API,
IF used as intended (asynchronous shaders, prevention of context switching, etc.). Those features are almost irreplaceable for VR.
And again, all of those features not only drive the industry forward, especially in certain applications, but they do not hurt the competition or the community in any way, which cannot be said about many of Nvidia's activities.
Touché. Good points there sir.
What? Wouldn't be able to sell anything? Coming off a bit strong here, friend.
If async shaders are being used, then it's slower than the Fury X.
AMD has offered the best performance per dollar for some time now. I mean the 390 is the same price as the 970 in most markets and has twice the VRAM and offers MUCH better performance. I would LOVE to do AMD tests, I just don’t have it in my budget to purchase one of their cards right now.
The 390 has 8 GB of VRAM too and costs the same as the 970, but ran as well as the 980 before DX12. I would love some full AMD tests, the CPU too (6-8 core) and a GPU. Sorry about all the negative posts, but AMD should be tested against Nvidia/Intel, DX11 vs DX12. I would like to see that.
Quite the opposite. I own an Nvidia card, but I'll be the first person to call them on their sh*t. I know how it can seem that way, considering John's PC has a 600 series Nvidia card and mine has a 700 series, but you have to understand that any tests I perform come from hardware purchased on my own budget.
You seem like a real AMD $hill; how about you lend them one of your free AMD cards?
AMD vs Nvidia DX12 test -> http://www.tweaktown.com/news/48021/nvidia-beats-amd-ashes-singularity-dx12-test-new-driver/index.html
You are talking about 1 fps, get over it… lol
I would love to show AMD Benchmarks, but I unfortunately don’t own an AMD card.
It was one of the main points of DX12: to uncap DX11's draw call limits.
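(To make that concrete: in DX12 the application records draw commands into command lists, potentially on several threads, and submits them to a queue in one cheap call, instead of pushing every draw through a single driver-managed immediate context as in DX11. Below is a minimal, illustrative D3D12 sketch, not code from the game; it assumes the device, allocator, pipeline state and queue already exist, and a real renderer would also bind a root signature, viewport and render target before drawing.)

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Illustrative only: record a batch of draws and submit them in one go.
void RecordAndSubmit(ID3D12Device* device,
                     ID3D12CommandAllocator* allocator,
                     ID3D12PipelineState* pso,
                     ID3D12CommandQueue* queue)
{
    ComPtr<ID3D12GraphicsCommandList> list;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                              allocator, pso, IID_PPV_ARGS(&list));

    // Each DrawInstanced is one "batch"; per-call CPU overhead is low
    // enough in DX12 that tens of thousands per frame become feasible,
    // which is exactly what the Heavy Batch sub mark stresses.
    for (int i = 0; i < 10000; ++i)
        list->DrawInstanced(3, 1, 0, 0);

    list->Close();
    ID3D12CommandList* lists[] = { list.Get() };
    queue->ExecuteCommandLists(1, lists); // one submission for all draws
}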
I am really looking forward to seeing the benefits of a hexa-core and greater CPU.
Here is mine, but I did this a long time ago; I'm waiting for v1.0.
http://oi57.tinypic.com/2cxhht3.jpg
http://oi59.tinypic.com/bi72iq.jpg
Also it’s MAX Settings -> Full Batches.
Here you go, sir -> http://www.tweaktown.com/news/48021/nvidia-beats-amd-ashes-singularity-dx12-test-new-driver/index.html
They compared a GTX 980 Ti (with the GeForce 358.50 driver) against an AMD Fury X, and Nvidia beat AMD by only 0.4 FPS. Are you serious, 0.4 FPS? By the way, those cards are expensive. The Fury X was on driver 15.10, but now there is the Crimson driver, 15.11; maybe the performance will be better on AMD. The GTX 980 Ti is still the best video card on the market, but anyway, only a 0.4 FPS difference, not even 1 fps, and we're talking about an average of 40 FPS on the Epic preset without MSAA. On medium settings Nvidia scores 72.5 while AMD scores only 72.1 FPS. ON MEDIUM SETTINGS. I am curious whether in 2018 a GTX 970 or an R9 390 will still be able to play games at 1080p, max settings, 60 FPS, if now, in almost 2016, DX12 manages only 72.5 FPS with a 980 Ti on medium settings. New series incoming: 1000 from Nvidia and 400 from AMD.
Romanian Language Set = 1: Hi, Alexandru. Another Romanian on DSOG. 🙂
English: I'm pretty sure the new generations from both Nvidia and AMD will run OK in DX12. The new cards will be built for DX12. If AMD or Nvidia does better than the other, it is only to OUR advantage, because what we need is competition.
Greetings. Jubba.
No, what we need is not competition but GOOD DEVELOPMENT like in the OLD DAYS. But companies like EA, Ubi and Microsoft don't even give a f**k about consumers, so yeah, AMD vs Nvidia and Intel vs AMD; it seems AMD has two OPPONENTS. Anyway, the versus problem will still be there: PC vs consoles, PC vs Mac, etc. But why? WE PAY FOR THEIR WAR, WE PAY FOR COMPONENTS, WE PAY FOR COMPARISON CHARTS.
Yeah, the new generation will be built for DX12. I just bought an ASUS R9 390 OC and paid 1776 RON (approx. 450 euros, more or less) for it. Next year? A new one? Their competition, our money.
Stardock has some sort of partnership with AMD, so I would take benchmarks on Nvidia hardware with a pinch of salt.
I don't think it's a partnership. It originally started with Mantle, because they really wanted a low-level API. Because of how Mantle shaped DX12, I think the whole thing stuck.
AMD shouldn't have their logo plastered on the game's site then.
Nvidia needs to keep their crapworks out of games.
Not really. You AMD users can simply disable it. You lot don't seem to understand that.
They still put stuff in games to screw over AMD, like they did with Project CARS and Nvidia PhysX. You couldn't turn off PhysX in the game, and it messed with AMD hard.
Your comments get dumber with every article.
http://www.pcgamer.com/project-cars-studio-denies-it-intentionally-crippled-performance-on-amd-cards/
If you read what PhysX does in the game, according to the developers behind it, the impact of PhysX being used wouldn't even be measurable in frame rates, as those tasks are offloaded to the CPU, and its CPU usage is minimal too.
LOL WHAT? Maybe it messes with AMD hard because AMD can't make a card that actually works right.
They're partners with AMD. Mantle lives on in Vulkan; AMD performed a separate yet similar task on their end for DX12. Mantle can increase performance by a maximum of ~25% when a high-end GPU is paired with a high-tier FX processor. DX12 can nearly double the performance of an AMD GPU when using an i7. DX12 does not have Mantle in it.
http://s169.photobucket.com/user/Rocketrod6a/media/Mantle%20vs%20DX12%20pgm%20guide.jpg.html
News flash: most big game publishers are going AMD because of DX12.
Are you sure you’ve made enough comments on this article?
*false comments
the truth is a troll.
getting the truth out.
🙂
So many comments and so little sense; it's tragic.
Dude, almost every publisher in the industry has NVIDIA. It’s because AMD can’t get their act together.
Really, DSOG? If you're going to post some benchmarks, at least use some AMD cards too so it can be helpful for users. All your performance analyses are based on Nvidia cards, and now benchmarks too. Are you a PC gaming website or an Nvidia gaming website?
They are poor and that is all they have.
So they want to put out false tests and say there isn't much gain with DX12, when everyone knows there is? They should not have done this test if they can't test AMD as well.
I didn’t say there wasn’t much to gain. I even said that the bottleneck was my GPU, which is an Nvidia card.
Precisely, which is why in my reports I have shown the FPS estimated by the benchmark if the GPU were removed from the equation.
Hmm, people tested it with a Fury X and there wasn't the performance difference we expected. Nvidia came out on top, curiously.
They are poor because they picked overpriced Nvidia cards; they should have gone with some Nvidia and some AMD for proper comparisons. But jokes aside, this is the first website that writes performance reviews using only one GPU brand, and now benchmarks too.
Yeah, but it's sure better than PCGamer… It might be the best site dealing with PC gaming coverage. Most sites take articles from this very site.
Yes, it's better than PCGamer, though they completely lack AMD information. They don't need to buy several AMD cards like some websites do to cover reviews; just buying one high-end AMD card will do the trick. Just as they make all their performance reviews using Nvidia's 690, they could cover the same reviews using an HD 7990 or R9 295X2 to give AMD users some insight into how a game performs on AMD hardware. For benchmarks they can buy more cards later.
Well, John is on a 690… I don't know about his finances, but I am sure if he wanted something newer he would have done so by now…
Heck even a 380X would be good imo.
Yes, man, and those are cheap honestly; even the R9 390 is $340 on Newegg, so I think they can buy it.
Well, I guess we will just see how things go with this site. But yeah, the 200/300 series are super cheap now.
A false and biased test. Everyone knows that AMD gets better results because of async compute. If you don't run a real comparison between AMD and Intel, it's a false and biased test.
Yeah, it really is. I don't see how it's of any interest to AMD users, or even Nvidia users, since the article cannot be used for comparisons.
If they had some AMD cards they would…
Then they should get some. I can see a new HD 7990 for $695 on Amazon and a used one for $420; not very difficult to get.
Why? That's old tech… They are better off getting a Fury or a Fury X…
Yeah, true, but just for the sake of equality: if they are using a 690 for the Green Team, then it should be a 7990 for the Red Team.
Well, like I said, I don't know their funding… But yeah, more cards is always good for everyone. Btw, there is a petition for SFV to be uncensored.
If it's the old change.org petition, then I already signed it several days ago, but I don't think it will move Capcom. I plan to mod this game though; hope we dig something up.
Btw, it's stupid as sh*t: Zangief goes around with his bulge hanging out and nobody gives a f', but if it's a female character then it suddenly becomes sensitive. The West needs to change its attitude about how it bashes a game and on what subjects; otherwise, stop talking about freedom.
It's not even an issue that gamers have… It's just femi nazi BS making a big deal out of nothing.
Why aren't you using AMD? Nvidia doesn't support async compute; that's why there is no big difference. AMD does use it; that's why there is a huge gain for AMD GPUs. Also, you are using a quad-core CPU and not a 6-8 core CPU like most AMD users use.
This is a FALSE AND BIASED TEST. You will see bigger gains using AMD; that's what Intel/Nvidia are afraid of. Everyone knows that you see better results using AMD. Just look at the posts and you will see.
FALSE AND BIASED TEST.
Hello,
You stated "false and biased test" four times. Once was enough for everybody to understand.
Good, just making sure everyone gets it.
Sorry dude, NVIDIA is always better than AMD, in both performance and price. You're totally stuck in the '90s. I bet you have a 3dfx Voodoo, run Windows 95, and complain about how the 3dfx in the gaming rig you built just for Myst is better than a PowerVR.
My GeForce 980 will kick your AMD card's a*s. You're like the Simpsons' Comic Book Guy, sitting behind your computer screen and ranting about the semantics of how a gaming website is doing a good job with their review.
I'm not sure he knows what you're talking about, let alone other people understanding what he's on about.
Stay frosty man!
Why do you use a two-year-old outdated card like the 780 Ti for a DX12 benchmark? DX12 needs current cards (970, 980 and 980 Ti) to show its full potential. Too bad it is a strategy game, a genre I don't care about, so I will not test it, but there is no doubt that my GTX 970 G1 Gaming would destroy your 780 Ti.
You cared enough to criticize the benchmark and the card being used so you do care to some extent.
Who cares about the 970/980? We know those cards destroy everything (right now) and the fps they give you is crazy good. The people who really need DX12 are the people with old cards and crappy laptops.
Why buy a new card if your 760/770 runs almost everything fine? Get my point? If DX12 keeps making our cards useful, why would anyone go for the new cards? Just to brag? Get real, rich kid. People don't like wasting money; fools do.
I already game at 45-55 fps in all the new games with my 770. Everything else from before 2015 runs at 60 fps (and way more).
I'm sure the 780 is even better, lol.
P.S. My future Pascal card will destroy your crappy 900 series card, so maybe you do need some DX12 help, lol haha!
Because it is all I have. :/
Please test with an AMD GPU, also with Maxwell, and if possible a separate test with an FX CPU. It will be important to show that AMD GPUs already benefit greatly from DX12 when using a high-end processor (FYI, the FX series is currently NOT high end).
http://www.pcper.com/reviews/Graphics-Cards/DX12-GPU-and-CPU-Performance-Tested-Ashes-Singularity-Benchmark/Results-Heavy
Hey guys, stop bashing Matt for trying to provide. He is giving us a hint of how DX12 performs on an Nvidia GPU. At least he provides something; we will just have to wait for a real performance comparison test to come out. He doesn't have the funds to buy several GPUs in different configurations, and that applies to most of us; GPUs are expensive! Cut him some slack, ffs! After all, he does this for us.
Those people who complain should pony up and buy him that AMD GPU.
Maybe he should have named the article "Nvidia DX12 tests", otherwise it's just incomplete. And the point about an AMD GPU is not only diversity,
but also that hardware support (async compute) is only on AMD as of now, so it would be the most interesting, and the biggest gains will probably come from their GPUs.
OMG… I can't believe it… What is wrong with you people? That mother f is paying for the hardware himself. It's a small site, and in all my years here I can tell you this is almost bias-free. I can count maybe five articles with some bias. In all those years.
For me, that speaks very well of this place.
If he can't get AMD it's okay, but at least the title should reflect that it's an incomplete test with only Nvidia.
Yeah, I agree, but that doesn't justify bashing. Just make your point and maybe he will cover it next time, who knows…
Maybe you guys just aren't in the position to make requests on a free site where unemployed people share stuff for free. However you put it, it doesn't justify bashing someone who gives us something for free. You're simply in no position to complain or demand.
Just so you know, NVIDIA will support async compute via drivers, so no, AMD will have no edge.
I'm sure it will, but AMD has a two-year advantage with Mantle, and by next year, when Nvidia supports it, that will be three years of working with a low-level API under AMD's belt. AMD has had the more powerful cards all this time; Nvidia has better drivers for DX11. That's not the case now.
Sure, you can have the most powerful GPUs on the planet when your AMD GPU draws the equivalent of Africa's yearly electricity bill. That's a joke, btw.
Yeah, totally an AMD problem
Yes, top 3 = 2 AMD products :). They see me trollin'.
Both are dual gpu products. Troll harder.
Yes, I knew that; I commented knowing that. Also, should we bring up the fact that wattage requirements should be tested on reference PCBs? Many manufacturers such as Gigabyte/EVGA use custom PCBs for extra performance/OC, hence higher electrical demands, and higher heat output if you don't have a proper cooling solution. These cannot be taken into account when looking for the best watt-to-performance ratio. That goes for AMD as well as NV. They should only test vanilla versions, just to see what the companies have been able to yield without third-party modifications.
I love you
Me too, love @ first sight
Yes, your comment is a joke.
Yes, it was my intention; I stated that at the end. Wtf…
AMD will still have the edge as they support it via hardware, not some bs software patch.
Indeed, sadly, for the moment.
And yet the 980 Ti still came out on top in DX12 benchmarks for Ashes of the Singularity and Fable against the Fury X, and that's without ANY asynchronous compute support in place.
They are barely using async compute because Nvidia was complaining about it, and that is a fact directly from the developer. Oxide is only using about 5% async compute right now, as with Fable.
So, can we be privy to this "factual" statement the developer released? Because I'm finding no evidence of it.
http://www.dsogaming.com/news/oxide-developer-nvidia-was-putting-pressure-on-us-to-disable-certain-settings-in-the-benchmark/
Have you actually read the article? It doesn't support your claims, and in fact states that Oxide refused to disable settings. There's no mention of 5% either.
Nvidia asked them to disable certain settings because they didn't have their drivers ready, and this put them at a disadvantage. It seems a fair request, as the benchmarks wouldn't be representative of final results.
None of that reflects your claims, though.
Did you read the article?
“Curiously, their driver reported this feature was functional but attempting to use it was an unmitigated disaster in terms of performance and conformance so we shut it down on their hardware. As far as I know, Maxwell doesn’t really have Async Compute so I don’t know why their driver was trying to expose that.”
You need to learn to read more than a paragraph and also read between the lines. This is an issue with modern media; it's not known for applying context and simply relies on sound bites.
I'll repeat my post to another user regarding this matter:
That's obvious though, as the feature wasn't there in their driver. You can't use what's not there; that's obviously going to create issues.
They asked for asynchronous compute to be disabled in benchmarks, as their driver to manage it through software wasn't ready. Oxide disabled it on Nvidia cards, but not AMD cards, and yet Nvidia still came out on top in the benchmarks.
Nvidia wanted benchmarks run without asynchronous compute to avoid skewing results. You can't compare the two fairly without Nvidia's software in place. That's fair enough.
Oxide said no and ran with asynchronous compute enabled on AMD cards, and again that's fair enough, but obviously Nvidia wants results that represent the state when their drivers are ready and supporting it via software, as otherwise it generates negative press, even though that's not representative of the final product.
Oxide have also stated: "We actually just chatted with Nvidia about Async Compute, indeed the driver hasn't fully implemented it yet, but it appeared like it was. We are working closely with them as they fully implement Async Compute."
So, what does that mean?
http://www.extremetech.com/extreme/213519-asynchronous-shading-amd-nvidia-and-dx12-what-we-know-so-far
This article is quite useful and seems to indicate that whilst Maxwell can support asynchronous compute, albeit not through hardware functions as GCN can, it's not as proficient; it wasn't designed to process workloads in the same manner and has smaller gaps between shader utilisation anyway.
This all seems to hint that whatever Oxide expressed regarding Nvidia fully supporting asynchronous compute, it is likely to be done in a very different fashion: via software and driver-side management of resources, which fits more naturally with the Maxwell architecture.
As always, one aspect of a GPU architecture won't single-handedly define results, and I can guarantee you AMD isn't going to suddenly start trouncing Nvidia; in reality, we will see them trading blows across games.
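(To make "async compute" concrete at the API level: it simply means submitting work on an additional compute-type queue alongside the graphics queue, and whether that work actually overlaps with graphics is decided by the hardware and driver, which is exactly the GCN-in-hardware versus Maxwell-in-software distinction discussed above. A minimal hedged D3D12 sketch, error handling omitted:)

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Illustrative: create a dedicated compute queue next to the usual
// direct (graphics) queue. The API allows this on any D3D12 device;
// concurrent execution is up to the GPU/driver, not the API.
ComPtr<ID3D12CommandQueue> CreateComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute-only queue
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}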
And how fast is this pseudo-support actually, and why does Nvidia recommend not using async shaders at all?!
Uggggggggggggggghhhhhhhhhhhhhhhhhhh… I don't know how to express my disgust other than extending a simple ugh. There are times I want to be an admin here just so I could ban these fuc*ing idiots —__—. Especially that hvd dvd guy; yes, I'm naming you specifically —__—
How the fuc* is this a false and biased test? Just because the guy didn't test an AMD card that you soooooo fuc*ing love? Or because he tested an old 700 series Nvidia card? For fuc*'s sake, it could be a "PERSONAL TEST" and not a comparison test. Is there even a phrase that trashes the AMD brand there? Didn't you idiots notice that he wrote "MY", because it could be his personal GPU rather than one provided by the DSOG site? DSOG is not as big as other game websites like IGN or GameSpot, and thus they don't have the financial prowess to benchmark every goddamn GPU out there, or to plainly entertain your fuc*ing selfishness and benchmark the GPU brand or generation that you so fuc*ing worship. Ugghh —__—
Unfortunately, when you're a gaming site putting up real-world DX12 tests and you only have one configuration, it can be a bit damning to readers who want a non-biased report. Not many understand that this is probably a test on the only hardware available to those at DSOGaming. Those at DSOG should probably just understand that a limited "real-world test" isn't all that real-world, and should make it more specific: "DX12 test with the 780". It's a biased world out there, and there are those looking for unbiased reviews who will unfortunately jump at the opportunity to pull the "biased" card.
The sound of whiny AMD fanboys crying is comical.
The hilarious thing being that if AMD comes out on top through DX12 (I'm not personally convinced by the claim, and I think they will be trading blows in various games, depending on various factors), it would simply serve to highlight how poor their driver support is.
DX12 should even the playing field, and I'm glad, because at the end of the day I hate seeing such disparity in performance when the hardware isn't the factor. However, the constant desperate, delusional and paranoid crying of AMD fanboys just irritates me, and it almost makes me want them to end up losing out, just so I can laugh at them.
I have to agree. Looking at the AMD fanboys' invasion here, it is ridiculous.
1. It will support async compute at the driver level.
2. What EA wants is what EA wants, but the world shall not be at their mercy, so they probably won't be getting that any time soon.
3. And yes, AMD has the edge over NV, and it's a darn good thing for the market. Give Nvidia some time and their R&D engineers will surely close the gap.
Yep, it's a good thing, and yes, Nvidia will get support, but as I said before, AMD had Mantle for two years; it's a low-level API like DX12. They will have the upper hand with drivers this time. Nvidia just had the better drivers for DX11. The tables are turned now.
I can't wait to see all these benchmarks get crap for not doing Intel vs AMD DX12, since a lot of Intel/Nvidia fans haven't upgraded to Win 10 yet.
I can't wait to see how disappointed you're going to be when Matt does the benchmark on AMD. I bet you're going to cry like a little scrub. You're gonna have to change your username to lil'scrub.
It seems like a lot of assumptions have been made here, so I just want to clear the air.
1. I am by no means Nvidia biased. I just so happen to own an Nvidia card because my PRIMARY operating system is Linux which currently AMD doesn’t run too well on because of poor driver support.
2. If you read my articles and reviews regularly you will know that I am the first person to bash Nvidia for the crap that they pull, and boy have they pulled a lot in 2015.
3. I don’t get paid to do this. I do all of my writing because I love to interact with the community here and it gives me the opportunity to write honest reviews about games.
4. This is related to 3: I would do tests on AMD hardware, but I am an unpaid writer, so I don't have the money to allocate to a second GPU just for DSOG; I use what I have to run my tests.
5. This is a Greek site and I live in the US, so even if John had AMD hardware to test on, that still doesn't mean I could, because we live about 9,400 km from each other.
I love you guys; the great community we have here is what makes me want to write, but comment sections like these make me sad. I thought you guys knew me well enough at this point to know I'm no fanboy. I'm just a guy who likes to play games and write about them.
You didn't need to explain yourself. The stupid ones who write crap like "amd!!! nvidia bias! gimpworks!" will not get it anyway. Nice article, but next time you should put the data in a table so it's easier to compare.
Man… that peasant has been playing the troll role on our site for just… too long. Drop the hammer. Keep working hard.
DSOG is a blog… Why don't people understand that? If you want an AMD benchmark, you should ask a big-budget website like TechSpot or Tom's Hardware.
Found an article (from Oct 16) about AMD vs Nvidia. It's not comprehensive, but you will get an idea -> http://www.tweaktown.com/news/48021/nvidia-beats-amd-ashes-singularity-dx12-test-new-driver/index.html
AMD vs Nvidia DX12 test (from Oct 16) -> http://www.tweaktown.com/news/48021/nvidia-beats-amd-ashes-singularity-dx12-test-new-driver/index.html
Yes, I know gamers are usually the liberal kind, though the rest of the world just needs to stay away from the gaming industry and let devs/artists do whatever they want.
You know, if you own an AMD card, you could buy this "game" and do the benchmark yourself.