It appears that the first DX12 benchmarks for NVIDIA’s new graphics card, the GTX 1080, have been leaked. These benchmarks come straight from Ashes of the Singularity’s database, and give you a glimpse of what this new GPU is capable of.
Unfortunately, there is no way to compare the results of the GTX 1080 with other cards, because we don’t know whether the other PC systems found in the database of Ashes of the Singularity are heavily overclocked or not.
For example, our OC’ed Intel i7 4930K is shown as a stock CPU in the benchmark results of Ashes of the Singularity. The same happens with OC’ed GPUs, so there is really no way to make an apples-to-apples comparison.
Nevertheless, we are pretty sure that some of you will find these benchmarks interesting, so there you go.
Thanks, VideoCardz.

John is the founder and Editor in Chief at DSOGaming. He is a PC gaming fan and strongly supports the modding and indie communities. Before creating DSOGaming, John worked on numerous gaming websites. While he is a die-hard PC gamer, his gaming roots can be found on consoles. John loved – and still does – the 16-bit consoles, and considers the SNES to be one of the best consoles. Still, the PC platform eventually won him over. That was mainly due to 3DFX and its iconic dedicated 3D accelerator graphics card, the Voodoo 2. John has also written a higher-degree thesis on “The Evolution of PC Graphics Cards.”



These numbers are kind of disappointing: around 10% better than an overclocked 980 Ti and 25% better than a stock 980 Ti.
What really matters, however, is whether it ever drops below 60 fps in any game. If a factory-overclocked 1080 like the G1 GAMING version can run all games at 4K 60 fps max settings, then no one will complain.
What? You are comparing a reference card to another card that isn’t reference, and not even the previous iteration. Compare a stock 1080 to a stock 980 and there is a massive difference. 25% better than a stock 980 Ti is a really good score, especially as the 980 Ti is itself about 20% faster than a 980.
Also consider that you’re getting that rough 25% performance increase essentially for free in terms of MSRP. Oh, and significantly lower power consumption.
He’s comparing a 16nm FF card vs a 28nm card. That said, the 16nm FF card supposedly has 2x the performance of a Titan, and you’re worried about reference design!?! I’d be more concerned about performance per dollar and whether it’s actually a viable upgrade compared to Maxwell, even with those advanced specs. Might be worthwhile waiting for Volta if it’s only slightly better in real-world usage.
“16nm FF card has 2x performance of a titan”
Nobody said the 1080 is 2x better than a Titan X. It’s as good as 980 SLI, which isn’t nearly as good as two Titans with ~70% SLI scaling.
Did you even watch the announcement? He repeated it like 9 times without reading the fine print out loud.
No, but I read about it on multiple sites and none of them stated what you are saying. Also, many people forgot to mention that some of those stats concern VR use.
That’s exactly the point: when everyone was hooting and hollering he said it has 2x the performance of a Titan, but then information was released later and the fine print said it was in specific VR scenarios. He did NOT mention that at the presentation, which a lot of folks watched and regurgitated from; only the follow-up articles mentioned it.
And I’m not joking, he literally repeated “2x the performance of a Titan” several times in a row, making sure everyone had it ingrained that the 1080 would be 2x the performance of a Titan.
The sites have the correct information. They do say that Jensen said it would be better than 2x Titan X in some specific cases. However, even the comparison to 2x 980s was implied for most cases, not all. One of the big cases where it would somewhat outperform 980 SLI in most scenarios is when it’s overclocked to 2.2GHz+. Then 980 SLI becomes pants.
You’re comparing an overclocked 980 Ti to a stock GTX 1080 with lower TDP and a lower price. Please stop now, you are embarrassing yourself.
It’s even more disappointing when they made big claims like 2x the performance of Titan X (even in VR), which means they were just exaggerating and blowing things out of proportion. Real performance is measured in games, and so far it’s not as good as they claim.
They are also claiming 3x the performance of the previous generation (Maxwell) on their site, but their own graph shows only a 1.6x – 1.7x increase over a single 980 (in TR2015 and TW3). My question: 3x the performance compared to what card? A 960?
No, they explicitly said 2x performance *in* VR. That’s down to some new tech they’ve packed in which allows them to render a stereo image from a single framebuffer (at least I think that’s how it works).
Non-VR performance will obviously see less of a huge leap.
Don’t worry, it will not be disappointing at all in a few weeks XD. People think these benchmarks are everything they’re cracked up to be, but Nvidia is trolling people on purpose with various bench results. Five different users on 1080s just happen to have benchmarks show up out of the blue… and one of them is Nvidia’s own General Manager for their drivers XD.
And in a few weeks you will see just how great the new arch is with Pascal.
Oh come on John, fanboying already? I expected better from you after all the neutrality you’ve been showing. The card isn’t even out yet. Remember, the life cycle of Maxwell is ending after just about 2 years lol, while Hawaii is still rocking. On top of that, Maxwell didn’t do well in almost any DX12 game, so please watch your claims before you go all fanboy.
It’s not fanboying at all. And yeah, Hawaii has been around because AMD is slow to the party. But don’t worry, your 980s will be just fine, since Nvidia is not going to EOL driver performance on Maxwell like they did with Kepler.
And Maxwell does well with Tomb Raider in DX12, just like it does with Forza, not to mention GoW. The only games it does not do well with are AotS with async on, and Hitman… And QB is still a joke of a game even after its 27GB patch.
Yet games like KI, Forza, GoW, and Tomb Raider run just fine on Maxwell XD
Sounds like AMD might need to step up its GPU game in terms of release timing. The announcement is later this month, hopefully with a solid release date so we can judge then. Sounds like Rebrandeon is going to be for notebooks specifically.
I’m not impressed with that.
I’m gonna wait for Vega or the 1080 Ti, whichever comes with HBM2.
1080 > 980 SLI > 980 Ti OC > Titan X (in games); if you put these cards on the slide they showed, there is little performance gain.
Not really disappointing at all. A smaller-die, mid-range 1080 compared to the largest-die 980 Ti, and it kills it while using less power, costing less, and running cooler. That is already impressive. I can only imagine how fast the larger-die Pascal Titan/Ti cards are going to be 6+ months down the track.
The comments section is full of retards. You take one game and draw your final conclusion about the GPU? Where did you get your math? Where are the benchmarks?
So, if you compare this to OC3D benchmarks, published on the 4th of April, you get this.
980Ti – DX12 -1440p/Crazy: 36.8
FuryX – DX12 -1440p/Crazy: 44.2
————————————————–
GTX1080 – DX12 – 1440p/Crazy: 49.6
GTX 1080 is about 35% faster than the GTX 980Ti.
In 4K it’s worse:
Gtx 1080 – 36fps (Crazy bench)
Fury X – 34.8fps (Crazy bench)
1440p
Gtx 1080 – 43.7fps (Crazy 1440p)
Gtx 980TI – 41fps (Crazy 1440p)
All on Heavy batch.
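For anyone trying to follow the percentage claims traded throughout this thread, here is a quick sketch (Python; it simply recomputes relative performance from the fps figures quoted above, so the derived percentages are only as good as those numbers):

```python
# Recompute the relative-performance claims from the quoted figures
# (Ashes of the Singularity, DX12, 1440p/Crazy averages).
results = {
    "GTX 980 Ti": 36.8,
    "Fury X": 44.2,
    "GTX 1080": 49.6,
}

baseline = results["GTX 980 Ti"]
for card, fps in results.items():
    # Percent faster than the 980 Ti baseline.
    print(f"{card}: {fps} fps ({fps / baseline - 1:+.1%} vs 980 Ti)")

# The much-debated 1080 vs Fury X gap, as a percentage rather than raw fps.
gap = results["GTX 1080"] / results["Fury X"] - 1
print(f"GTX 1080 vs Fury X: {gap:+.1%}")
```

Run as-is, this prints roughly +20% for the Fury X and +35% for the GTX 1080 over the 980 Ti, and a ~12% gap (about 5-6 fps) between the 1080 and the Fury X, which is the gap the comments below argue over.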
Basically all you have done is point out the worst case in one batch. Averages are done for a reason.
The 1080 is only 6 fps faster than the Fury X, it’s not that big of a deal… lol
Look buddy, we get it, you’re an AMD fanboy. Now calm your titties and stop posting the same crap over and over.
And you and many others are Nvidia fanboys. I’m just making the point that, as the evidence shows, under DX12 AMD is now just as good as Nvidia; this shows that. Only a 6 fps difference, and the Fury X is an older card.
I just go with the best card out there, I’m not gonna lie, I’ve never tried AMD, and that kind of says it all 🙂
AMD is now as fast as Nvidia; these benches prove it. The Fury X and 1080, only a 6 fps difference… lol
And we have no way to know if it’s a Founders Edition or not. A 300MHz core OC can make a difference.
Edit: This is what Gamers Nexus thinks the Founders Edition is.
“This is, in our analysis of the situation, nVidia’s way of avoiding competing with its own partners in the space. The Founder’s Edition will cost $700. The MSRP is $600 – so vendors like MSI, EVGA, ASUS, et al. can enter market with cards cheaper than nVidia’s own, throw their own coolers on them, and overclock them differently. The vendors will exercise similar control and design/engineering over their versions of the GTX 1000 series as with previous generations. Also as with previous generations, nVidia’s version of the card (now “Founder’s Edition”) uses heavy materials that can run-up the cost. That metal shroud will run-up the BOM more than a plastic shroud from an AIB.
NVidia wanted to land at the center of the stack, providing room for vendors to undercut nVidia reference – err, “Founder’s Edition” – prices, but also allowing room for higher-end cards >$700. Drawing parallels to the GTX 900 series, a higher-priced card might be something like EVGA’s GTX 980 Ti Hybrid, which ran ~$750-$770 at first launch. This was a marked increase against the MSRP of $650, but offered features which helped carve its own price bracket.”
Actually, the Founders Edition is just a pure reference card. Nvidia will let you overclock it out of the box, but it will run at stock clock speed by default.
Yeah, I thought it was overclocked, but the Founders Edition isn’t.
It’s understandable – the higher price point would suggest some kind of different binning or overclock capability, but it seems to just be a case of Nvidia not wanting to tread on their OEMs’ toes.
I’ve edited my response above with what Gamers Nexus believes the Founders edition is. 🙂
Wtf, I thought that too! Like pre-OC’ed, like any EVGA/MSI card.
I’d really like to buy directly from Nvidia, but they likely won’t offer AiO hydro solutions. This, in my opinion, is what is going to make people buy EVGA, MSI, and Gigabyte’s iterations of the product. Sad…
So the 1080 gets 13 more fps than the 980 Ti in the same game. That’s good. If a game runs at 50 fps at 4K on a 980 Ti, then on a 1080 it will run at 60+ fps. Very good.
Only 6 more fps than the Fury X; I’m not impressed.
If Fury X did 5fps in one game and 1080GTX did 11fps, would you be impressed then? Still just 6fps, but a 120% difference…
35% in one game. You can’t just outright say it’s 35% faster.
Yes, in this benchmark. I didn’t say anywhere that the 1080 is 35% faster in every game; it could be less or more, depending.
I was just thinking that too. I’m sure it will be even worse in some other games (or better). It all depends on the game, I guess. This is why, when I was getting a card, I actually looked at how well that card did in the game I wanted to play.
The Fury X is almost as fast as the 1080:
FuryX 44.2
GTX 1080 49.6
Only about a 6 fps difference from AMD’s older card. That tells me AMD is the way to go for DX12. The Fury X’s price will drop when the R9 400 series is out and the GTX 1080 and 1070 are released. I’ll stick with AMD.
These tests show that AMD is just as good as Nvidia with DX12, and the 1080 will stay at $600 for quite some time.
IIRC the R9 400 series is being geared towards ‘mainstream’ level performance, and will target lower overall TDP and thermal profile, and AMD will likely use the Fury X as their flagship high-performance part for a while yet.
So unless they just decide to take a hit on the Fury X’s profits to undercut Nvidia, kind of like what they did with the FX9590 re Intel, then it’s also going to hover around the same price as the 1080, for significantly worse TDP and thermal performance.
That’s fine, I’m just pointing out that the GTX 1080 is only a little bit faster than the Fury X under DX12. Under Win 7 it would be different.
The Fury X is 10% slower than the 1080, which won’t even be the top Nvidia card this gen. At worse TDP and a higher price. Plus the Fury X can’t be overclocked.
That’s really irrelevant. I just put the Fury X in (it was a Fury X vs 980 Ti benchmark after all), otherwise I’d be accused of being biased. I don’t think you get it, do you (as usual)? The simple fact is that the GTX 1080 is much faster than its predecessor, and faster than the 980 Ti.
It’s not irrelevant. You just listed and showed the Fury X in DX12 vs the 1080 in DX12, and there isn’t much difference. Are you now making excuses for the evidence that you provided?
FuryX – DX12 -1440p/Crazy: 44.2
GTX1080 – DX12 – 1440p/Crazy: 49.6
They must have been running Doom under Win 7 to make themselves look better. This is only a ~6 fps difference that you showed. Are you trying to do damage control now?
You do know that the Fury X is 20% faster than the 980 Ti in this game; AMD said the Fury X is 20% faster anyway. Oh, and you keep saying only 6 fps; that is about 15%.
You do know that the Fury X is an older card than the GTX 1080? You do know how people are saying how fast it is? I’m just pointing out the facts. A 6 fps difference isn’t much.
I’m just saying Nvidia is blowing smoke up your as*es again with lies. Look at the real DX12 bench tests; it’s not much faster. The lies will be known again. In fact, I’m already showing this to other YouTubers, gaming sites, etc. The word will get out.
From a guy that said the GTX 1080 will run at 94C because you saw Red Gaming Tech put up a max temp number, which was an image from the specs. See, you don’t understand anything.
Now you are trying to change the subject. Nvidia used Vulkan to boost the Doom fps to show 100fps. Under DX12, which will be used more, it will be a different story.
You are just a mad Nvidia fanboy right now. Don’t worry, when the DX12 tests start to come out everyone will know there is only a 6 fps difference under DX12.
As usual you make no sense at all.
keep trying.
While AMD clearly has the upper hand when it comes to async compute, the other parts of DX12 apply as much to AMD as they do to Nvidia, meaning there’s still a difference in performance to be seen in those areas. That said, 15% in an environment of 40 fps doesn’t seem like much indeed; extrapolate that to a 100 fps environment and you get a 15 fps difference. We all know PC gaming is about fluidity, and I’m sure many of us have 144Hz monitors, so more %/fps = win. Again, on price-to-performance ratio Nvidia isn’t competing much with AMD, and that’s true. A lot of people seem to put so much emphasis on async compute when only a handful of games use it. Devs have said it is very hard to tune and hard to implement, so we’ll probably see that feature a little further down the road, and by then NV and AMD will have switched architectures again and all of this will go back to square one. If the previous comments are true (AMD’s next architecture being mainstream), then Nvidia will most likely stay king for some time, since the 1080 is already beating its competitors and we haven’t seen big brother Pascal yet, so the margin is going to increase no doubt. Still, AMD is getting back into the game and that’s good.
Doom doesn’t use DX11 or 12.
It is openGL and Vulkan.
Educate yourself.
That’s right, they knew if they used DX12 their card would perform like shi*. All devs will use DX12 as the standard API to make games for from here on out.
AMD would destroy Nvidia if they used Vulkan. AMD made Mantle, and Vulkan is based on Mantle. Stop being a fanboy and look at the real numbers.
Oh, and you need to educate yourself… lol
Fury X + async = awesome. However, the benefit the 1080 will have is no performance drop with async; it just doesn’t necessarily benefit from it. So no 1-3% FPS drop. Volta may very well include async performance benefits.
Those stats show the 1080 and 1070 still don’t support async compute. They showed Doom running on Vulkan and not DX12, so they lied and boosted their cards beyond what they really are.
Just wait and see Polaris when they launch it soon. The performance per watt is going to be insane. AMD doesn’t lie like Nvidia does.
The interesting thing about “async support” regarding Nvidia is that it does not benefit from it; in that regard, async does not work with Nvidia cards. However, they do have a workaround, and the compute is handled on the GPU. The difference is that Maxwell had to shut off the entire rendering pipeline to perform the context switching, resulting in a slight performance loss. Pascal can do this without shutting down the pipeline or losing performance.
As I mentioned previously, Volta will follow this evolution. By then the low-level APIs should have seen wider adoption, as UE4, one of the major engines, will support DX12/Vulkan on both console and PC, along with Ubisoft and others. That will give Nvidia the push it needs to actually accommodate its hardware and software scheduler so it can gain benefits from async compute.
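To make the preemption distinction above concrete, here is a toy cost model (a sketch with made-up numbers, not a claim about real hardware) of why draining the whole pipeline per context switch costs more than fine-grained preemption:

```python
# Toy cost model (illustrative only, not real hardware behavior) of the
# preemption difference described above: coarse switching drains the whole
# pipeline before running compute work, while fine-grained preemption
# switches at a much smaller granularity. Every constant is a placeholder.

GFX_WORK_US = 1_000       # useful graphics work per frame slice (microseconds)
SWITCHES_PER_SLICE = 20   # times compute work preempts graphics in that slice
DRAIN_COST_US = 50        # per-switch cost when the pipeline must fully drain
FINE_COST_US = 2          # per-switch cost with fine-grained preemption

def slice_time(switch_cost_us: int) -> int:
    """Frame-slice time = useful work + total preemption overhead."""
    return GFX_WORK_US + SWITCHES_PER_SLICE * switch_cost_us

coarse = slice_time(DRAIN_COST_US)   # Maxwell-like coarse switching
fine = slice_time(FINE_COST_US)      # Pascal-like fine-grained preemption
print(f"coarse preemption: {coarse} us per slice")
print(f"fine-grained preemption: {fine} us per slice")
print(f"time saved: {(coarse - fine) / coarse:.1%}")
```

With these made-up numbers the coarse model spends roughly twice as long per slice, which is the shape of the argument being made, not a measurement.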
Well, all I can say is AMD uses async compute and they benefit greatly from it. Maybe the next series after the GTX 1080 will fully support it. AMD has a leg up on low-level APIs; remember, AMD MADE Mantle, which is now Vulkan and is similar to DX12. AMD has a leg up in DX12 and it shows.
Also, Win 10 is the OS all devs are going to program for, because you can only get DX12 with it, so that’s the OS of choice. Just to please the Nvidia fanboys, I’ll say it: because AMD has been using low-level APIs longer than Nvidia, who stuck with DX11 only, AMD has the upper hand.
AMD has had low-level APIs in mind since GCN’s inception, not to mention they also planned on async compute in DX11 being effective, and it ended up not being so by the very nature of the API. So of course they’re going to have a leg up on it; however, how long did that take to show? Also, it means nothing for the future that they have a leg up: the low-level APIs are only now beginning to be adopted, they’re not throughout the industry just yet. So the issue is, if Polaris/Vega’s command processor doesn’t address the issues AMD saw with DX11, it might be too soon for these chips, as AMD is looking to gain market share. If AMD pulls through and their GPUs can effectively multithread rendering drawcalls in DX11, then these GPUs will effectively be the bastion that AMD REQUIRED. Just as ATI created tessellation and Nvidia ended up doing it better, the same could happen with async. Notice how most Nvidia titles did not employ async? This is what happens when the company that owns the largest share of the market has the greater influence over game design. If AMD cannot push the low-level APIs, it could only hurt them in the long run, as once Nvidia is able to utilize them in Volta, they might push them harder than AMD can deliver.
You just said it: they had the low-level API for years and were making their cards with async compute in mind, while Nvidia just stuck with DX11 for years, and now Nvidia is going to suffer because of that.
Again, AMD MADE Mantle, Vulkan IS Mantle, and DX12 is similar, and you say AMD can’t handle low-level APIs?… lol
No I didn’t, I said “AMD has had low level API in mind since GCN’s inception”
“In mind” does not mean they had the low-level API for years.
They focused on async compute because that was a listed feature of DX11, which never came to fruition because the API is highly serialized. The way GCN hardware works is highly parallel. Think of it as an FX CPU in a single-threaded environment: it’s not going to operate as efficiently. The major difference, aside from CPU vs GPU, is that GCN could work through the serialized tasks with much more processing power. Although power usage and heat generation were a by-product, it still gave FPS. However, stuttering FPS and software inefficiencies did not do AMD any favors, which is why they have the market share they do.
Also, please state where I said “AMD can’t handle low-level APIs”. There is a major difference between a company being able to handle an API and an industry adopting it; I referred to the latter. There are only a small handful of titles confirmed to use a low-level API, and there is still a large portion of DX11 titles in development that AMD will have to operate with.
You said: “No I didn’t, I said ‘AMD has had low level API in mind since GCN’s inception’”.
That’s Mantle, which AMD made, which is now Vulkan, and DX12 is similar.
But you also stated I said they couldn’t handle low-level APIs. Also, how many titles was Mantle used in? What were the majority of APIs used in the industry? How does Mantle being a thing, past, present or future, help AMD’s performance in DX11 titles? How did Mantle help AMD achieve 20% market share of the GPU market when they once held ~50%?
And with all that, you simply took away that Nvidia is merely going to suffer, when I stated they make GPUs for the now, which is DX11, and they will make video cards that properly use low-level APIs when those are the majority of APIs. Unless you think they hold 80% of the market because they’re retarded.
If they really need async compute for their cards to shine, doesn’t that mean there is something wrong with their architecture? Because they need async compute assistance to get better utilization. Also, the result is not always positive even for AMD cards, and seeing how devs need to tune it for each card, it might be a feature not used by many unless they were sponsored to do so.
What I’m aware of is that the AoS devs mentioned they don’t tweak for specific architectures. They never said whether they do card-specific optimization. (And the funniest thing about that “don’t tweak for specific architectures” claim is that they readily use AMD’s ACEs, which are exclusive to GCN hardware and do not exist on other GPUs.)
Even AMD uses async compute through their ACEs; take the ACEs out of the equation and let’s see how it goes. Nvidia was aware of this under-utilization problem, hence they designed Maxwell as it is, to increase GPU utilization (this has been mentioned a lot by Nvidia when talking about the differences between Kepler and Maxwell). With Kepler they may have had a utilization problem (the trade-off they made to increase double-precision performance), but they solved that problem with Maxwell.
There are many ways to solve a problem, so I don’t believe async compute is the only way to increase utilization in a GPU. Take this as an example: Nvidia GPUs used to be bad at mining. With Maxwell they increased their mining performance. Did they build their GPU similar to AMD’s to do it?
They will need it. I’m sure the next cards will have it in them. The GTX 1080 and 1070 were being made before the whole async compute thing. It will take them time, but by the time Nvidia cards actually have async compute, AMD will be too far ahead.
Nvidia will most likely be dumb and try to push Vulkan, when the industry standard for PC gaming will be Win 10 and DX12, and I’m talking standard meaning games requiring Win 10 and DX12 only.
Nvidia is going to fall way behind if they don’t have cards with async support.
6 FPS is 15%, not inconsiderable. Math harder through life, you might find it more meaningful.
You guys can’t even get your percent right… lol. I’m glad DX12 showed the REAL tests. You all can have your fabricated numbers; I’ll take REAL numbers on the matter, and 6 fps is a real number.
15% is pretty close. Closer to right than when your mom looked at you and thought you were a person.
So probably about 12 to 15 fps better than a 980 Ti in most games; that is roughly what I thought it would be.
Keep in mind these leaks were put out on purpose XD. It’s just to show that Pascal can do Async XD. Just wait to see how things are when the card launches XD
A 980 Ti could do it; what’s your point?
Point? To get attention. They got people’s attention. And in a few weeks they will really have people’s attention. XD
Can we get a god damn firestrike score PLZ. God dayum.
So I’m guessing Nvidia still hasn’t solved the async compute puzzle then.
They’ve had async compute since Kepler; it’s just that NVIDIA can’t do context switching very fast at all. NVIDIA GPUs only work well with light workloads, which is not very good anyway. Apparently Pascal improves it, but no, there are no ACE-type units like AMD has. CUDA can work asynchronously.
Yeah, from what I gather, it has a lot to do with exactly how AMD and Nvidia each handle asynchronous workloads. Apparently there’s a lot of performance gain on Nvidia’s side when async commands are executed via the driver, rather than direct on the metal, but as AMD had a lot of sway over DX12’s implementation, that never happened.
Of course, that’s just the internet talking, so I’m taking all that with a fistful of salt, but AFAIK it’s got a lot more to do with what exactly ‘async compute’ means in DirectX parlance, rather than Nvidia flat-out not bothering.
But will there be enough devs who actually care to use async compute in their games (unless sponsored to do so)? As per the Hitman devs, they mention that async compute needs to be tweaked for each card, because each card has a different bandwidth-to-compute ratio.
Wow, that’s an interesting fact I wasn’t aware of. Considering that no single vendor supports every single DX12 feature, the standard is beginning to look more and more like a bit of a mess.
Which is why Vulkan will be the new API of choice for most in the years to come.
Ideally it’s supposed to be like that. Here’s hoping that Vulkan, no longer being held back by OpenGL, will see a better future.
Although, when it comes to it, most devs end up choosing DirectX for Windows-based machines, not just for performance reasons but also because of the tools and help provided by MS in developing their games. AFAIK the Khronos Group does not offer direct assistance like that to developers; most often game developers have to ask IHVs for help with their OpenGL implementations.
That comment came out recently. In one of the interviews after GDC, where they mentioned async compute is ‘super hard’ to tune, he attributed the hard part to needing to optimize for each card, and too much tinkering can cause performance regression instead of improvement.
As for Rebellion… yeah, let’s see if they are willing to do it when they are not sponsored by AMD. The last time they were given freedom with Mantle, they didn’t even bother to support CrossFire in the Mantle version of Sniper Elite 3.
If that’s the case, then DICE must be inferior to Rebellion. We saw the 285 get worse performance in BF4 when running Mantle. Same with Hardline.
With Mantle it is probably much easier to see a performance improvement, since the API itself was designed specifically for GCN hardware. Even Richard Huddy mentioned that Mantle will always be superior to “jack of all trades” APIs like DirectX and OpenGL Next (now Vulkan), because Mantle was designed for GCN hardware from the ground up.
Because they are low-level APIs, you need to tweak them down to the specific architecture. Same with async compute: there is no such thing as enabling async compute magically improving performance. Developers still need to tweak it so performance doesn’t regress. The 285 having bad performance in low-level APIs is not limited to Mantle; even in DX12 we see something similar happen. In one of the early Hitman benches the 380 takes a performance hit in DX12, while the 280X does not gain any performance. And in one of HardOCP’s AoS tests they still find situations where DX12 is slower than DX11 for the Fury X. So AMD is still not optimizing GCN 1.2 for DX12?
6 fps under DX12 isn’t much of a difference between the Fury X and the 1080. I don’t buy the hype at all.
You’re getting drivers and APIs confused here. DX12 / Vulkan / Mantle’s purpose is to remove a reasonable amount of abstraction between the respective APIs and hardware for better execution speeds. If the removal of said abstraction actually creates a significant bottleneck, as is the case of Nvidia being unable to execute command lists via driver, then that’s a problem with the API.
And while asynchronous compute isn’t part of the requirements for a card to be considered DX12 compatible, its implementation is directly tied to the API, and that implementation outright favours AMD’s hardware. The reason I say AMD had a lot of sway in how DX12 was implemented is simple – Xbox One. AMD developed the hardware for XB1, and DirectX 12 was developed for XB1 and PC in tandem. AMD refused to let driver command lists be part of DX12’s specification, because doing so instead of executing directly on GCN’s async compute engine would cripple performance on their hardware.
Bottom line: DX12 will always favour GCN hardware because it was designed for it.
By the time Nvidia catches up to AMD’s async compute, AMD will be so far ahead of them…
Nope. Why do you think Nvidia is trying to push Vulkan?
Only a 6 fps difference between the Fury X and GTX 1080. It still looks like AMD wins in DX12, and the Fury X is a much older card.
As WrATH said below my post:
980Ti – DX12 – 1440p/Crazy: 36.8
FuryX – DX12 – 1440p/Crazy: 44.2
————————————————–
GTX1080 – DX12 – 1440p/Crazy: 49.6
“GTX 1080 is about 35% faster than the GTX 980Ti.”
Sounds like Nvidia is blowing smoke out their as*es again. AMD.
Not impressed… less than 60 fps at 1440p…
No video card can beat bad coding. Look at the performance relative to other cards as a percentage, not in fps. Context is your friend.
It’s all propaganda from Nvidia. The more they do what they are doing, the more they get caught out.
Stupid fools.
Nvidia is full of cheap gimmicks… most definitely the greedier one of the two, i.e., AMD and Nvidia!
Wtf?
We have been beaten at our own benchmark!!! We should have paid Stardock more, but Mom wouldn’t raise my allowance!!!
On Crazy settings, a single $599 GeForce GTX 1080 puts up performance on par with dual GTX 980 Tis costing $1100.
On Extreme settings, the $599 GeForce GTX 1080 scores 7200 points with an average FPS of 74, whereas AMD’s top dog, the $1500 Radeon Pro Duo, scores 7300 points with an average FPS of 75.
Nvidia fanboys are the worst. No one buys these cards to play at 1080p. Stop making a fool out of yourself.
I do.
The fps is not the point. The % is.
Damage control. It’s only a 6 fps difference; you can have your % if it makes you feel better. The real results say 6 fps difference.
Dude, you’ve been making the same point for nearly SIX HOURS now. The R9 Fury X is a good card. The GTX 1080 will likely also be a good card. Everybody wins. If you want a Fury X, then go out and buy one and be happy with your purchasing decision and feel vindicated about having six fewer frames per second.
Or do you just enjoy being the most pointlessly obnoxious commenter here?
Isn’t the Fury X the size of 2 GPUs, and doesn’t it require liquid cooling?
To be that non-objective and talk like that, you must be butthurt or something? Not trying to be rude here, just trying to understand how you can be so blind and “non-understanding”.
I’m not butthurt at all, just pointing out some facts that people are too Nvidia-blind to see.
Have you considered the fact that you might be too AMD-blind to see others’ points? It’s not always black and white; it’s all shades of grey.
I’m just pointing out facts, it’s not being blind.
k.
No way to tell if they’re stock or OC results, so I’m withholding judgement until more is shown.
No doubt, the 1080 ain’t even listed.
I’ll stick with the Titan X at 1080p 144Hz. Once 4K 144Hz on a single card becomes a thing, it’s time to upgrade.
You really are a special kind of stupid.
A 50-60% perf improvement isn’t that bad… it’s damned impressive, regardless of brand name. You have to keep in mind that’s a MAJOR shift. Usually the smaller node jumps are easier to fill in and tune so as not to compromise on performance; throw FF into the mix and you have an issue. Nearly a 50% node reduction for a 50% gain is pretty nice, and with FF and the optimizations that follow, you can bet your bottom dollar Volta will have the node better utilized and tweaked accordingly. One can only hope, for the consumers’ sake, that they start taking async seriously so the industry can move toward more parallelization.
We can only hope AMD is doing the same.
It’s only 6 fps faster than the Fury X; don’t expect much.
6 fps faster, in that game, on those settings. Context, friend, context. If the GTX 1080 is 15-20% faster in most games, that’s a significant difference.
That game is made to stress test GPUs; try again, it’s only a 6 fps difference. Under a REAL DX12 test, not BS Vulkan.
Try and read what I wrote one more time.
na
k.
The GTX 1080 is a dud card… Still no async, and it has no purpose in the gaming GPU world. A 6 fps difference in DX12 for a new architecture? Weak-sauce GPU, and just a waste of money as a GPU investment. It won’t last long with the DX12 games coming this year and onwards. It will be slaughtered by AMD GPUs that have async engines in their hardware, and not fake async like Pascal and Maxwell.