When Oxide’s Ashes of the Singularity benchmark tool was released, NVIDIA issued a statement claiming that it does not consider this particular benchmark representative of what DX12 can achieve on its hardware. However, it seems that a lot has been going on behind the scenes. According to one of Oxide’s developers, NVIDIA was pressuring Oxide to remove certain settings from its benchmark.
According to the developer, NVIDIA’s PR department pressured the team to disable certain settings in the Ashes of the Singularity benchmark. Oxide refused, which ultimately led to NVIDIA’s statement regarding Ashes of the Singularity.
“There is no war of words between us and Nvidia. Nvidia made some incorrect statements, and at this point they will not dispute our position if you ask their PR. That is, they are not disputing anything in our blog. I believe the initial confusion was because Nvidia PR was putting pressure on us to disable certain settings in the benchmark, when we refused, I think they took it a little too personally.”
Oxide’s developer also revealed that NVIDIA’s Maxwell architecture does not natively support Async Compute, and that NVIDIA asked Oxide to disable it for its graphics cards.
“Personally, I think one could just as easily make the claim that we were biased toward Nvidia as the only ‘vendor’ specific code is for Nvidia where we had to shutdown async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional but attempting to use it was an unmitigated disaster in terms of performance and conformance so we shut it down on their hardware. As far as I know, Maxwell doesn’t really have Async Compute so I don’t know why their driver was trying to expose that. The only other thing that is different between them is that Nvidia does fall into Tier 2 class binding hardware instead of Tier 3 like AMD which requires a little bit more CPU overhead in D3D12, but I don’t think it ended up being very significant. This isn’t a vendor specific path, as it’s responding to capabilities the driver reports.”
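To put the quote in context for readers who don’t write graphics code: the “vendor specific path” Oxide describes boils down to reading the adapter’s vendor ID and the capabilities the driver reports, then deciding which render path to take. The snippet below is a minimal, hypothetical D3D12 sketch of that pattern, not Oxide’s actual code; the function name and the useAsyncCompute flag are invented for illustration.

```cpp
#include <d3d12.h>
#include <dxgi.h>

// Illustrative sketch only (not Oxide's code): read the adapter's vendor ID and the
// resource binding tier the driver reports, then pick the engine's render path.
bool ChooseRenderPath(IDXGIAdapter1* adapter, ID3D12Device* device, bool& useAsyncCompute)
{
    DXGI_ADAPTER_DESC1 desc = {};
    if (FAILED(adapter->GetDesc1(&desc)))
        return false;

    // PCI vendor IDs: 0x10DE = NVIDIA, 0x1002 = AMD.
    const bool isNvidia = (desc.VendorId == 0x10DE);

    // Ask the driver which resource binding tier it supports (Tier 1/2/3).
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                           &options, sizeof(options))))
        return false;

    // D3D12 exposes no explicit "async compute" capability bit; every driver accepts
    // a compute queue, so an engine can only fall back per vendor if the path
    // misbehaves in practice, which is what the quote above describes.
    useAsyncCompute = !isNvidia;   // hypothetical policy mirroring the quote

    // Tier 2 binding (vs Tier 3) changes how descriptors are managed and adds some
    // CPU overhead, but reacting to it is not a vendor-specific code path.
    return options.ResourceBindingTier >= D3D12_RESOURCE_BINDING_TIER_2;
}
```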
All in all, it looks like AMD is currently in a position to challenge NVIDIA, especially when it comes to DX12-only games. It will also be interesting to see how NVIDIA responds to this whole affair.

John is the founder and Editor in Chief at DSOGaming. He is a PC gaming fan and highly supports the modding and indie communities. Before creating DSOGaming, John worked on numerous gaming websites. While he is a die-hard PC gamer, his gaming roots can be found on consoles. John loved – and still does – the 16-bit consoles, and considers SNES to be one of the best consoles. Still, the PC platform won him over consoles. That was mainly due to 3DFX and its iconic dedicated 3D accelerator graphics card, Voodoo 2. John has also written a higher degree thesis on the “The Evolution of PC graphics cards.”
Nvidia is going to pay even more developers to incorporate GameWorks into their upcoming DX12 games and cripple any performance advantage of AMD cards in those games.
I expect all DX12 games by EA, Square Enix to favor AMD and all games by Ubisoft, WB to favor Nvidia.
Next GameWorks game is coming tomorrow. It will be MGS 5. Let’s see how it performs. I am sure my 970 will play it at 1440p maxed with GameWorks perfectly.
It has no GameWorks but is still an NTWIMTBP game, so I expect it to perform well on both Nvidia and AMD. A single 980 Ti or Fury X will be enough to max out the game at 4K.
Yeah, whatever Nvidia fanboy.
And I should trust an AMD fanboy instead? This is why I do my own research.
MGS 5 is not a Gameworks game.
Yeah, it’s only going to get worse. They have to gimp performance otherwise we won’t buy their Pascal.
And their PR dept still thinks the GTX970 is a 4gb card.
It does have 4GB of VRAM, that’s an indisputable fact; only silly sheep who read from nobodies believe it doesn’t. People like you would still believe the Earth is flat if the media reported it.
It does, but the 512MB is pretty unusable in regards to dropping the card’s performance, as it is segregated from the rest (pretty sure clock speed is the issue on that 512). I love my 970 and it runs great at 1440p, so even with the crazy RAM situation it’s still a really good, capable card.
It’s not unusable at all because memory doesn’t work the way you think, people assumed it works like the synthetic benchmark when it doesn’t.
I said pretty, not totally. The drop in clock speed is not worth the trade-off. As I said, I love mine and game at 1440p, so I have no issues with that RAM situation; maybe you should chill, my friend.
If you got no issues then why respond with a worthless reply?
Because no one programs to use that last 512MB, so unlike your reply mine was quite informative, SMH.
Again, that’s not how memory works. I can get ACU to use all 4GB easily, so again you’re wrong. Best thing you can do is enjoy your card and stay out of technical discussions.
Lol, whatever, you enjoy being blinkered and thinking you are technical. LMFAO technical.
Well, I’ve just explained a bit technically why it’s not an issue, want me to go into more detail? It’s really not that hard to understand even though memory itself is complicated. Do people complain about Windows pagefile slowing their system down even though Windows does actually use it? No.
The whole 4GB of GDDR5 gets used if a game requires the whole 4GB. What you don’t understand is that not all graphics tasks benefit from high-bandwidth memory, so the driver simply puts the tasks that don’t require high bandwidth into the last 512MB pool, and only when a game requires the full 4GB of GDDR5.
I was sure there was an issue with a drop in performance/speed though; that’s why Nvidia said they should have done a better job communicating how the RAM design differs from traditional methods.
Yes, IF the game uses over that amount of memory. Who cares, is 512MB of memory going to break a game? I’ve yet to have one issue at 1440p. Heck, most of my games use less than 2.5GB of memory.
I’m not the one with the issue, I already posted that earlier that I also game on one at 1440p, and what great cards they are. I was trying to explain what happens when you use that extra pool, nothing more 😀 I just seemed to get attacked for stating it for some reason.
Yes, but what you said was wrong, and you can’t even prove what you said about games not touching the 512MB of slower VRAM. I’ve tried ACU at 4K using over 3.5GB of VRAM; it’s not unplayable like people claim, the frame-rate is around 20FPS, which is about what you’d expect.
People just like to talk about what they read, not actually do proper tests in proper games.
Seems you are getting better performance than most with a 970 at 4K in ACU (at 1440p you get 21FPS). OK, I did say in error that games don’t touch the slower RAM, but there is an impact when you use the whole 4GB; not a game breaker, no, but an impact. I’ll paste this here too:
Simply put the GTX 970 is only able to fully utilize 3.5GB out of the 4GB of available VRAM optimally. While the card can still access all 4GB of VRAM, the last 0.5GB of memory is not accessed or managed as efficiently as the rest of the available memory. Which leads to the reported performance degradation when an application needs to access more than 3.5GB. In the examples Nvidia provided to the press the performance penalty is in the mid single digit percentages
Which is the point I was trying to make all along, even Nvidia says it has an impact.
Yes we know this but when it comes to real games it’s not as bad as made out.
Also, Assassin’s Creed Unity’s performance is appalling, isn’t it? When I owned a 980 and a 1080p monitor I lost 20fps simply by enabling MSAA. Usually you only lose that much performance when you are upping the resolution from 1080p to 1440p.
Since I got my 1440p monitor I haven’t gone back to Ass Creed Unity, but it’s safe to say the only anti-aliasing you can use is FXAA.
Yes the last 512mb has much lower bandwidth than the other 3.5gb but it’s not the end of the world as not all graphics settings require high bandwidth. For example textures don’t require as much bandwidth as shadows so as long as you have enough VRAM your textures will load in without any issues.
Another example in regards to textures is the fact that I was able to enable very high texture settings in Crysis 3 at 720p using a GT 540M with only 1GB of DDR3, with less bandwidth than the last partition on the 970, and the game ran fine with no texture streaming issues.
So in short some graphics jobs are more dependent on amount of VRAM rather than how much bandwidth the VRAM has and it’s those types of tasks that will be sent to the 512mb partition.
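For readers who want to check this on their own system: an application cannot choose which allocations the driver places in the slow 512MB segment, but it can monitor its own local VRAM usage through DXGI and throttle streaming before it spills past roughly 3.5GB. A minimal sketch; the 3.5GB cap is an assumption taken from this discussion, not anything the API reports.

```cpp
#include <dxgi1_4.h>

// Minimal sketch (assumption-laden): watch local VRAM usage via DXGI and back off
// texture streaming before it spills past the "full speed" pool discussed above.
bool ShouldThrottleStreaming(IDXGIAdapter3* adapter)
{
    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    if (FAILED(adapter->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info)))
        return false;

    const UINT64 kFastPoolBytes = 3584ull * 1024 * 1024;  // ~3.5GB, assumed cap
    // Throttle once usage approaches the fast pool or the OS-provided budget.
    return info.CurrentUsage > kFastPoolBytes || info.CurrentUsage > info.Budget;
}
```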
Also another thing you could do if you really feel like you need the extra VRAM is enable Windows Basic as Windows Aero consumes around 200mb of your GPU’s VRAM. So if you disable Windows Aero you will get a 200mb chunk of VRAM back , possibly even more if you are running higher than 1080p.
Lastly, Eurogamer tried their absolute hardest to cause a memory issue on the 970 by enabling all sorts of settings that pushed the card beyond playable performance, yet they couldn’t find any specific examples of stuttering unique to the 970; it more or less performed as close to the 980 as you would expect, even when thrashing the memory.
We shouldn’t let NVIDIA forget about this because even though 970 is a great card for a great price especially when compared to 980, at the same time we don’t want NVIDIA thinking this sort of disclosure is acceptable.
No, you are completely wrong; the last 512MB is segmented and it does indeed work. Games already use the full 4GB of memory. I own one, would you like a screenshot?
Games that can efficiently use 4GB of VRAM or more will have problems with a 970, that is unquestionable.
As VRAM demands go up it looks like the 970 will age quite fast, not to mention it will get slaughtered by a 290 or 290X in DX12 games.
“it will get slaughtered” you AMDrones are living on promises of the future for years and years and years. I almost envy your gullibility
Tell me then why does AMD not have this problem?
Not pretty unusable, it is in fact not used at all, it cannot be, would be slower than RAM access anyway.
That’s just not true at all.
So is MSI Afterburner lying when games show over 3.5GB usage? Yet system memory stays the same… Stop being an AMD lunatic.
Are you f*cking kidding me right now? These guys are all Nvidia owners and you just called him an AMD lunatic
He is not. Stop trolling and get a life, you multi-account copy-account douche.
Don’t get into it with this guy, it’s worse than talking to a Sony fanboy.
And what about you? You will defend Nvidia till the end of the Earth? This further confirms the dirty games played by Nvidia PR. They are forcing devs to disable certain features in their benchmarks, and yet we blame consoles alone for gimping PC games.
This also confirms that Maxwell is all bark and no bite. Now Nvidia will advertise Pascal as the best thing they ever created while discarding Maxwell like it never happened, just like they did with Kepler. I guess the manufacturing of Maxwell will also stop.
I’m not defending NVIDIA on that point, so I don’t know what you’re talking about. I’ve criticised NVIDIA on their Compute performance, see my comment history if you like. I’ve said consistently that their Compute performance was bad right from Fermi to later Kepler; if that’s defending NVIDIA, I don’t know what isn’t.
Well, unlike AMD, which has used basically the same design since 2011 with very little change, Nvidia and Intel actually create new parts and the old ones stop being made. That’s how the industry works; you don’t continue to produce the same old products (cough, Piledriver) forever.
Did you get mad when Intel quit making the Pentium 4?
The thing is, only now with DX12 will those AMD parts from 2011 be fully utilized, when Nvidia’s “newer” parts are already outdated.
dude it comes out even hell a simple driver could improve things even by 5%
Asynchronous Shaders, those have been idling on GCN cards and Nvidia doesn’t have them. DX12 will finally start to use them.
Yes, and that same old design (GCN) is performing close to Nvidia’s Maxwell, and now even better in DX12; so much for new parts. Intel and Nvidia make new designs every year to milk customers with products that have like 5%-10% improvement over the previous one, while AMD does so with incremental updates, so what’s wrong with that?
Even that mighty nonsense Maxwell-based Titan X is a huge step back in double-precision performance, which was the selling point of the original Titan, and the original Titan shined in that area. So why are they selling the Titan X for $1000? For 12GB of GDDR5? Run any game at 4K max settings on a Titan X and a 980 Ti and you won’t see any difference in performance.
As a hobbyist I render images in Blender every day at 4K resolution with 2000 samples or higher on a GTX 980 (4GB VRAM) and don’t face any VRAM bottlenecks, so that bullcrap Titan X isn’t meant for professionals, because it’s not capable of doing anything more; it’s just meant to milk stupid gamers who want some bragging rights.
Pentium 4 was dropped when it really got old and started showing its age; Core 2 was a real improvement over Pentium 4. Maxwell over Kepler or Skylake over Haswell, not so much, so your argument isn’t valid.
NVIDIA aren’t forcing developers to disable certain features in their benchmarks, stop talking nonsense. NVIDIA asked the Ashes of the Singularity developer to stop their engine from trying to use asynchronous compute engines when running the benchmark on NVIDIA hardware, as it doesn’t work on NVIDIA hardware. Yet here you are trying to put a conspiracy-theory nut-job spin on it by making it sound like NVIDIA wanted the developer to disable graphical/tech advancements all because AMD’s hardware does them better, which is absolute nonsense.
So they aren’t forcing developers to disable certain features in benchmarks, as how a game engine addresses their hardware isn’t a “feature”.
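For clarity on what “async compute” actually is at the API level: D3D12 lets an engine create a compute-only command queue next to the graphics queue, and any D3D12 driver will accept it; whether the GPU truly overlaps the two workloads is a hardware matter, which is exactly what the disagreement above is about. A minimal, hypothetical sketch:

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Hypothetical sketch: "async compute" at the API level is simply a second,
// compute-only queue next to the graphics queue. Any D3D12 driver will create it;
// whether the hardware actually overlaps the work is a separate question.
bool CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfx = {};
    gfx.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;    // graphics + compute + copy
    if (FAILED(device->CreateCommandQueue(&gfx, IID_PPV_ARGS(&graphicsQueue))))
        return false;

    D3D12_COMMAND_QUEUE_DESC comp = {};
    comp.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy only
    if (FAILED(device->CreateCommandQueue(&comp, IID_PPV_ARGS(&computeQueue))))
        return false;

    // Work submitted to computeQueue can, on capable hardware, run concurrently with
    // work on graphicsQueue; the two are synchronized with ID3D12Fence objects.
    return true;
}
```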
Also NVIDIA hasn’t discarded Kepler, again errant nonsense. I own a GigaByte Brix PC with a custom Kepler GPU with 1344 shaders and 6GB of GDDR5 and it runs exactly how it’s intended. They haven’t “discarded” Kepler at all. AMD takes forever to get drivers out for games despite the fact they haven’t even released a new architecture; let’s see how well GCN is supported once AMD releases a totally different architecture.
Lastly, let’s not pretend that AMD hasn’t fudged up, lied or deceived gamers. At the expense of turning this into a TL;DR, I won’t list all the fcuk-ups AMD has made in the last 4 years unless anyone specifically asks me to explain….
A developer will code the game/benchmark as they intend, not as some graphics card manufacturer wants, end of story. If a DX12 feature doesn’t work on Nvidia’s architecture then it’s Nvidia’s fault, not the developer’s. For a long time Nvidia has been pushing developers to use unnecessary amounts of tessellation because they know it’s AMD’s weak point, but AMD doesn’t ask developers to disable it; they complain, yes, but they don’t say anything to the developer.
Just enable HairWorks in The Witcher 3 and we’ll see how your Kepler performs compared to a Maxwell GPU. In their pursuit to gimp performance on AMD GPUs they are gimping it for Kepler as well. Soon they will do the same to Maxwell with Pascal.
No argument on the fact that AMD has weaker driver support, though it’s not as bad as some Nvidia fans paint it. I have used AMD for the past 4 years and didn’t face the problems that some Nvtards mention; their real weak points are Crossfire support, which is improving over time, and relations with developers. Other than that their drivers work just fine. Now I have shifted to Maxwell with 2x Gigabyte GTX 980s, only to learn that this GPU doesn’t support some features of DX12 when it was advertised as the first fully capable DX12 GPU. Yeah, great.
We’ll see when AMD releases a new architecture, but so far they are doing well with incremental improvements to GCN like 1.0, 1.1 and 1.2. This way it is becoming a solid, proven architecture and users don’t get hidden surprises like the 970 fiasco and now this “no Async Compute”.
Yeah, AMD lied to gamers about Mantle being supported for a long time, but the fact is there are no huge consequences of Mantle going out of the picture, because the benefits that GCN got from Mantle are returning in Vulkan/DX12. But what do we get from Maxwell? An architecture that was heavily advertised as the first fully DX12-capable one, and now we are seeing its weakness. To overcome this Nvidia will release a new architecture and discard Maxwell like it never happened.
Sorry I didn’t even bother reading all that post, I stopped at the part when you said a developer should try to force behind the scenes code that doesn’t make the game look any nicer, just because…. Or it’s NVIDIA’s fault that a developer is trying to write software that their GPU doesn’t support.
What business is it of yours, as an AMD gamer, if NVIDIA tells the developer not to attempt to get certain code running on their GPUs?
It’s only wrong when Nvidia does it, you see.
It’s advertised as full-speed GDDR5 RAM, which it is not. It’s a lie.
Can’t argue with that but it does have 4GB of VRAM, it’s only an issue because the synthetic benchmark picked it up not the games.
It doesn’t have 4GB of VRAM like the 980 or 290X, that is the problem.
In reality it has 3.5GB + 0.5GB of VRAM and that is how it should be marketed.
I think your analogy is backwards.
I upgraded a 970 to 6GB using some old DDR2 and chewing gum. No one can argue it does not have 6GB now.
No need to defend your purchase, you nvidia shill. We all know you feel equally betrayed, no need to hide it.
Shut up chimp.
You and I don’t normally agree on anything, but it does say it has 4 gigs of VRAM and it does. Who are we to tell Nvidia how to use THEIR VRAM on THEIR product? They can make use of any amount of VRAM they want on their cards. If they make a card that has 4 gigs of RAM and only want to use 3 gigs of it, that’s their business. If you don’t like it, buy AMD. Right?
Right.
Sean, if you’re going to be obnoxious, at least have your facts straight.
If what you are claiming were true, then the president of Nvidia itself wouldn’t have apologized for “miscommunicating.”
From Tom’s Hardware:
With Nvidia customers still fuming about the misleading specs of the GeForce GTX 970, along with a recent lawsuit filing against Nvidia, the company’s president and CEO, Jen-Hsun Huang, posted an open letter explaining the miscommunication within the company and apologizing to customers.
In the eyes of the consumer, the biggest issue with the GTX 970 was the memory, which some believe is misleading because the advertised capacity was 4 GB but was really 3.5 GB, with the final 512 MB of memory segmented with a smaller bandwidth.
I’m still not getting any major fps drops using Ultra textures in Rise of the Tomb Raider. VRAM usage on my 970 goes up to 3651MB, yet fps averages about the same. I can barely see any visual difference though. It’s there, but you really need to pay attention to certain reflection patterns. Min fps is what brings down the average a bit, but not by much.
However, what you people here fail to grasp is that a bigger pool of VRAM doesn’t by itself do anything, whether it’s used up or 50% unused; it’s just headroom and no more. If your GPU isn’t powerful enough to make use of a memory pool as big as 5-6GB you’re gonna suffer major performance drops regardless. And here we come to the central problem with AMD’s architecture: a lot of what it has over Maxwell is headroom, so-called “future proofing”, but the GPU remains a Frankenstein of old tech. Regardless of how games implement DX12, your current hardware will not be what takes full advantage of it. You’ll be in the market for a new GPU just like the rest of us, and your current hardware’s “headroom” will be tucked away neatly in a box in a closet.
I bought my GTX 970 (factory overclocked) a year and a half ago and it provided me with a major bump in performance at a very attractive price that no AMD card at the time could offer, unless I was willing to shell out an extra $80, which I simply couldn’t.
Now I have the coldest PC I’ve had in my lifetime and my lil 970 has no problem getting +300MHz more than its factory (already overclocked) setting and doesn’t even push the damn fans any higher than 60% of their capacity.
This was a switch from an AMD HD 6780. The user-side support in particular is an incomparable improvement.
You see, I like to get my performance and my bells and whistles NOW. The PC market isn’t a place to be doing extreme future proofing, it’s just not viable. Even if you’re in the market for the most high-end option, a teeny Maxwell is now on par with the old mighty GTX Titan.
I really send all my best regards to AMD’s design team, without your constant pushing of the envelope there would be no competition in the PC GPU market. I’ve been “switching loyalties” with my GPUs for the past 15 years, in fact i have never been a loyal AMD or Nvidia customer. It just seems stupid. Much like your echo chamber here.
All I’m gonna say is.. I have the money and I’m going to buy a FuryX when they restock.. Was strongly leaning towards a 980Ti.. Forgot how I’m not a fan of their business practices.
What’s sad is that, as bad as their PR made this look, it’s still a damn good card.
admin plz ban rubypbron.
Please stop spreading this nonsense; there is no proof whatsoever that NVIDIA cripples AMD GPUs, they only expose what AMD is weak at, just like AMD exposed how weak NVIDIA is in Compute.
Proof:
1. Go look at NVIDIA Fermi/Kepler Compute benchmarks.
2. Go look at actual games where Kepler gets destroyed by Maxwell in heavy Compute (Civilization: Beyond Earth).
3. Go look at benchmarks where AMD GPUs destroy Kepler in heavy Compute and forward rendering (Dirt Showdown).
4. The R9 390 is beating the GTX 970 in NVIDIA-sponsored games, and the 390 is cheaper with more VRAM.
5. GameWorks is optimised for NVIDIA’s optimal path on their GPUs, that’s the definition of optimisation; AMD GPUs will run slower, not slow because the dev or NVIDIA made them slow.
6. AMD driver updates fixed a lot of the performance problems with AMD GPUs in certain games, not because the games were optimised with a bias towards NVIDIA.
7. Certain NVIDIA-sponsored games run faster on AMD GPUs (like Far Cry 4).
Far Cry 4 has a bug with Nvidia Softshadows. Disabling it will double the performance on Nvidia cards and is already disabled on AMD cards hence the better performance.
So it’s not a conspiracy then.
I also had to disable the shadows. They were destroying image quality. So instead of Nvidia shadows I used normal shadows.
Like with Nvidia Hairworks.. almost halves your framrate for nothing… and TressFX running on Nvidia Card looks better…
I have a GTX 970 and enabling hairworks goes from 55FPS to about 46FPS. Go try TressFX and tell me how much it drops your frame-rate. The original benchmarks said it dropped the frame-rate of Radeon cards by 23% so again you’re WRONG.
TressFX didn’t even have wet hair.
That’s funny, because I have a GTX 960 and enabled TressFX in Tomb Raider and only dropped from 60 to 51.. But I have to point out that only Lara has the effect.. in The Witcher 3 most of the characters looked better with the effect and the drop wasn’t that big either (from 54 to 46). But I have to say that TressFX still looks better.. the hair looks more natural.
Well, each one has its pros and cons… let’s hope that both get better with time in terms of quality per performance.
I’m getting 4-5fps drops on my Nvidia card enabling HairWorks. On the planet where you live, what does ice cream taste like up there?
I agree with you on Far Cry 4; on Dirt Showdown it’s another story atm.
With the latest drivers, FarCry 4 runs pretty good on AMD GPUs.
Far Cry 4 is a Nvidia title but with the current AMD drivers, AMD has the advantage on that title.
Nobody talks about the bad performance of Fury X in older games though, Crysis, Crysis warhead being two.
NVIDIA is just ticking the box for Async Compute without any real practical performance.
The same bullshit tick-box support as the GeForce 7800/7900 series GPUs with 32-bit FP shader math.
The performance benefit with AMD GPUs is mostly based on how bad their DX11 performance was, not on AMD getting an 80% gain in DX12 outright. NVIDIA was never going to get an 80% gain over DX11; even devs said it may be around 40%.
As for the FP issue, that was different, some say NVIDIA didn’t even get invited to the spec talk between ATI and Microsoft.
But a 40% performance gain over DX11 is huge! This means that a game in DX12 will run +10 fps faster than in DX11 on an Nvidia GPU! We will still get better performance without buying a new card in all the future DX12 games!
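For rough context (illustrative numbers only): a 40% gain means fps_DX12 ≈ 1.4 × fps_DX11, so a title CPU-bound at 25 fps under DX11 would reach about 35 fps (the +10 fps mentioned above), while a 60 fps baseline would gain roughly 24 fps; none of this applies where the GPU, rather than draw-call overhead, is the limit.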
In GTA V, Nvidia’s PCSS runs pretty well on my R9 290X.
This is not good news for Pascal. I thought they had at least rudimentary support for async compute done, but it looks like they don’t have any at all and were just faking it. It’s not likely they are going to get it done in 2016 or even 2017.
Nothing says they’re not already working on it, Pascal is expected to be a huge change in many ways, but I’m still sceptical
Well, here’s the thing. Working on it now would be great if they want to launch the architecture in 2018-2019. We have to bet that instead Nvidia was working on it 3-4 years ago if they want to have async compute in pascal. Given that they weren’t forward thinking enough to actually do the work when it came to the 900 series and instead just lied about support, I’m not sure we can bet on them being smarter years beforehand when the decision had to be made.
No one seems to remember that new technologies usually have poor yields. HBM2 hasn’t been finalized yet and AMD according to industry rumors is getting first pick.
Ha Ha Ha
NV need Next-Gen Hardware and that’s it… I told some people long time ago.
Pascal now or no true DX12 for their users. Shame, cuz we can expect nV to cripple DX12 games (turning off ACE means no real gain for low-end CPU users 🙁
Especially x2/x4 CPUs… and mid-range GPUs like the 6xx, 770 etc. (78xx, 27x etc.)
Here’s the truth about the H/W specs of current GPUs from both camps:
I think with maxwell nvidia finally nailed an architecture best suited for DX11.
Yep | nV rules in DX11_0, that’s the facts. But what about -> every gamer has the right to play good 🙂
An example of how Nvidia has top performance, haha.
Yes! Nvidia has the top performance. The sales of Nvidia vs AMD prove it!
Correlation is not causation. Damn you’re dumb.
Oxide lie! Nvidia never pressure them!
Your post is a lie.
I don’t believe that AMD has these features but Nvidia doesn’t. Nvidia has been destroying AMD all these years and now we are at the point where 4/5 PC gamers have an Nvidia GPU. AMD is spreading these lies in an effort to battle Nvidia, but we don’t believe them!
Is he for real?
Because I’ve got a Bridge to sell…….
LOL, what else, is hitler still alive? Elvis?
In all your comments here, you haven’t brought a single valuable thing to this article. Keep it up and you’ll be banned in no time.
And who is gonna ban me, wise a s s, you? DX12 is just a marketing tool, nothing else. I’m still waiting for the “huge performance” promised in gaming since DX9; the only thing that really improved gaming was the 64-bit architecture, and AMD was the first one to step in that direction.
Well, you see it in the Ashes demo on AMD cards, that’s a “Huge performance” gain over their DX11 renderer.
You know Sean, I know you are an NVIDIA fanboy since you were defending G-SYNC. So your opinion is worth nothing to me, SRY.
People called me an NVIDIA fanboy while I had AMD hardware so who cares? A fanboy is loyal to a company or brand, I had a full AMD setup about 3-4 months ago, that’s not a fanboy, nor is criticising both NVIDIA and AMD.
You need to understand what the definition of a fanboy is before accusing someone of being one, clearly you don’t know what a fanboy is, you’re just using it for your own agenda.
I have a question to bring this convo back to earth, lol. Do you think that this can be treated with better firmware/software, or is this a “new hardware” situation? I only ask because, if Nvidia’s history is any indicator, they released a lot of tech that at first they said was only available on Maxwell, but later on enabled it on Fermi and Kepler. I just got a Strix GTX 980 Ti and am hoping I am not setting myself up to get butt hurt. lol
Because you’re just trolling everyone you’re not agreeing with here. There were a lot of guys like you back then and they also got banned coz they got reported by “real” members
Yeah…
Exactly
Ban me, made a new disqus account, then ill be back to tell you how DX12 is a “s h i t”. Maybe the problem is you…
I’m not an admin so I’m not the one who will ban you. Also, I don’t give a flying fuc* if you’re dissing DX12. I’m only addressing your behavior of trolling people here
Your post is sh*t. I’ll take you on
As for now, we really don’t have any DirectX12 only game, so don’t sweat it. I am sure Nvidia will find a solution.
Question is, will it be in hardware or software? Nvidia has great engineers and they actually do make bigger architecture changes than AMD has lately.
I know nvidia’s solution.. and that is Gimpworks !!!!!!
DX12 is just a house of cards. Where are those DX12 lovers who were saying “DX 12 FTW”?
So you got, from a story about a GPU maker trying to get companies to make their games work better on its GPUs by disabling some effects, that DX12 is bad?
hahahaha ty nice one hahahahaha
DX 12 FTW
nvidia being nvadia
Nvidia really needs to do something about their latest drivers overall, because all of the latest ones are a POS on Windows 10.
Maybe it’s because I am using Windows 10, idk, but that’s no excuse to have to downgrade to Windows 7 to get perfect drivers with no crashes or extremely low min FPS.
355.80 Windows 10 Hotfix drivers are very stable for me. Try them.
It says they are the same as 355.60, only with:
GeForce Hot Fix driver version 355.80 that addresses the following issue:
Excessive virtual memory consumption for SLI configurations running Windows 10
The release notes don’t mention everything, especially embarrassing bugs that would make Nvidia look like amateurs if they mentioned them. So always try the drivers yourself to notice any improvements.
I will install them and see how they act :) It’s been a while since they released something new, and I do believe it’s very dumb to have just one fix, for SLI users, lol.
But what’s the point?
Won’t we have MGSV and Mad Max GeForce-ready drivers tomorrow?
What do you want from them with your gtx 680 card?
They want you to buy another card, maybe a Pascal card. They gimp & cripple every old card.
You’re still on your 680? Still a capable card I guess.
AMD will be gone soon so I don’t think Nvidia are too worried.
Yeah, AMD, literally one of the biggest Tech Companies in the world, literally worth Billions is going to flop tomorrow, right?
Or, are they going to shut down their GPU Division, which, for the record, not only supplies Chips & Tech for their APUs, but also both the PlayStation 4 & the XBONE, amongst a multitude of other things?
Wow, all the Trolls are appearing today I see.
Yeah
Look at the facts:
1. 2.5D GPU with HBM = AMD Fiji is here and now 😀
2. Zen is coming
3. Arctic Islands with HBM2 is coming
4. AMD APUs are great H/W
5. Even FX is not so bad -> look at the Newegg/Amazon sales etc.
Their revenue is ridiculously low for a CPU AND GPU manufacturer. Look at their stock and market cap.
Compared to whom? Intel? Which manufactures SSDs, APUs, CPUs, Motherboards (AMD does CPUs, not actual Motherboards), & I can’t even remember what else off the top of my head?
Nvidia’s Reported Revenue for 2014: 4.13 Billion USD.
AMD’s Reported Revenue for 2014: 5.50 Billion USD.
Intel’s Reported Revenue for 2014: 55.870 Billion USD.
Yeah, sure, if you look at that & think “Nvidia makes GPUs, while AMD makes CPUs & GPUs” all you see is “AMD should be making more.” So what? Does AMD technically make less money than Nvidia & Intel? Yeah, sure. So what?
“AMD is the second-largest global supplier of microprocessors based on the x86 architecture and also one of the largest suppliers of graphics processing units.
AMD is the only significant rival to Intel in the central processor (CPU) market for (x86 based) personal computers. Since acquiring ATI in 2006, AMD and its competitor Nvidia have dominated the discrete graphics processor unit (GPU) market.”
They’re not going anywhere.
On the other hand, I’m amazed how much people insist on defending Nvidia, even after they’ve blatantly lied about GPU Performance & Specs just to get the Cards into people’s hands.
Footnote: No, I’m not using AMD right now. Yes, I have in the past, many times, just like I’ve used Nvidia many times in the past. I’m using a 780 right now because I found it at a better price at the right time, no other reason.
“AMD is the second-largest global supplier of microprocessors based on the x86 architecture and also one of the largest suppliers of graphics processing units”
The only other modern day company that makes x86 microprocessors is VIA. The ONLY other. And that’s not their main business by far – they mostly produce chips for others. That are not x86.
So that point is completely moot.
I am however interested in seeing how AMD Zen does.
Lol, what billions? They don’t even have a billion on hand, they really don’t, they have 800 million. As for stocks, VIA is worth more; AMD only goes for 1.38B.
Yeap, there is a rumor that MS will buy them..So yeah nVidia is nervous
If anyone were gonna buy them I would love it to be Microsoft. Truth is it’s not gonna happen.
no dude, it would just be named something else, like Sony or Intel xD
I don’t understand why anyone would be happy at the prospect of AMD going under, as soon as that happens nvidia and intel will slash development and raise prices…back to how it used to be.
This is untrue with Intel anyway. AMD hasn’t been competing with Intel since 2006. Intel will ALWAYS name their price as long as they are top dog in performance. Intel will ALWAYS have to compete with themselves. The same kind of goes for Nvidia: they will always have to convince users to upgrade from their older parts, and they will have to price new parts according to the older ones.
Back in the late 90s Intel were drip feeding the smallest possible upgrade to Pentium with massive price jumps between each cpu. Intels pricing structure now is great compared to what it used to be, we don’t need a return to that.
Or Intel’s anti-competitive practices which they’ve gotten in trouble for numerous times.
It’s probably a lot more complicated. Honestly, I think it would be good. To put it simply… If AMD went under, it wouldn’t just vanish. It would most likely be bought out by a much more powerful company wanting a stake in the microelectronics market.
AMD is not actually very big. AMD and Nvidia each earned about $5 billion in sales revenue last year. Just to put this into perspective; Intel earned $50 billion in this same period, while IBM nearly hit the $100 billion mark.
Among the better known companies who could potentially be interested in AMD; Samsung earned $20 billion, and Microsoft about $80 billion. Google raked in about $60 billion from ad revenue alone. And these are just a few of the better known companies.
Intel could completely crush AMD out of the CPU market. But they haven’t. There are lots of factors here, but fundamentally, if we ignore things like patents and law suits for a moment, we can simplify this and say: AMD is too small to threaten Intel, but big enough to keep other players out of the market. However, companies like Microsoft or Samsung are much bigger, and if they bought AMD, then they could actually go head to head with Intel on much more equal footing – which is bad for Intel.
Anyway, to cut a long story short, this would be fantastic for consumers, since Intel would have much stronger competition, and Nvidia would definitely have to lift their game as well.
on the other hand amd are a much smaller company so they don’t need all that revenue either.
I don’t think it can be separated like this. From the start, if AMD had this kind of revenue, they would not be a much smaller company.
Simple fact is, if there was another company in this market with the same kind of budget as Intel, they would be able to compete much better than AMD.
of course.. that x86 cross-license has f’ed everything up for amd..
Maxwell is so good at DX11 it doesn’t really matter
Next year you will be forced to buy another GPU (from Nvidia ofc, won’t you?) because old Maxwell doesn’t perform well in DX12 games; Kepler ofc is in the same boat. Just lolz, guys.
GameWorks is a mess for the gaming industry. With DX12’s close-to-the-metal cards we hope that mess will end.
I read this and is terrific for some users:
https://www.reddit.com/r/nvidia/comments/3iwt2j/oxide_nv_gpus_do_not_support_dx12_asynchronous/
Btw please put this link as what dev said:
http://www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/1200#post_24356995
A game developed by Oxide funded by AMD. Its developer says NVIDIA wants to disable = Lying!
NVIDIA godNESS!
Copyright © 2015 Oxide Games. Ashes of the Singularity is a trademark of Stardock Entertainment. AMD, the
AMD Arrow logo and combinations thereof are trademarks of Advanced Micro Devices, Inc. All rights reserved.
Damn… as always, dirty as F.
What’s next? Planting bombs in Fiji chip factories?
Maybe they are about to send a T-1000 Terminator with a Maxwell chip. Who knows?
ROFL !!!
it doesnt matter.. put some money and gameworks will destroy amd dx12… as always ..
that is what I fear.. nvidia has money to throw around.
as always ..
Nvidia’s Windows 10 drivers suck donkey schlong. They butchered performance in all Source engine games, and the way they work (or don’t work) with Windows 10 Aero settings, or whatever it is that’s forcing vsync (while still allowing plenty of screen tearing), is holding hardware back to less than a 10th of the fps it can deliver and was delivering before the “upgrade”.
What some people don’t quite get here is Oxide’s benchmark seems to put an unrealistic emphasis on graphical features that AMD cards do particularly well at. If they’re going to showcase DX12 features in the benchmark, it is only prudent to try and replicate the way actual games will be using the same technologies and not overly focus on things one company’s cards do best.
About a year ago, remember that “Star Swarm” benchmark? That benchmark was geared specifically to focus on AMD’s outstanding memory bandwidth and pretty much nothing else.
Read again.
Ashes of the Singularity does use Asynchronous Shaders, though not at their full capability, and still GCN cards get a great boost.
Asynchronous compute is probably DX12’s most important new feature. Too bad it doesn’t work on Nvidia cards.
Actual games could rely on this feature even more.
Star Swarm was geared mostly towards draw calls but did not incorporate any important DX12 features, and it did not behave like an actual game the way Ashes of the Singularity does.
It has less to do with AMD features and more to do with Mantle and showcasing how much better a low-level API is in terms of performance.
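For anyone unfamiliar with the term, “draw calls” here means the per-object submissions an engine records every frame; a test like Star Swarm issues tens of thousands of them, which is exactly the CPU overhead that Mantle and DX12 were designed to reduce. A rough, hypothetical D3D12 sketch of what such a loop looks like (the function name, root parameter index, stride and unit count are invented for illustration):

```cpp
#include <d3d12.h>

// Rough, hypothetical sketch of a draw-call-heavy frame: thousands of small draws
// recorded each frame. Under DX11 the per-call driver overhead was the bottleneck;
// low-level APIs (Mantle, DX12) cut that CPU cost.
void RecordUnitDraws(ID3D12GraphicsCommandList* cmdList,
                     D3D12_GPU_VIRTUAL_ADDRESS perUnitConstants,
                     unsigned unitCount)
{
    for (unsigned i = 0; i < unitCount; ++i)       // e.g. tens of thousands of ships
    {
        // Point root parameter 0 at this unit's constants (transform, team colour, ...).
        cmdList->SetGraphicsRootConstantBufferView(0, perUnitConstants + i * 256);
        cmdList->DrawIndexedInstanced(36, 1, 0, 0, 0);  // one small mesh per call
    }
}
```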
Greatness is coming…
I think that was a shady move coming from NV. Don’t get me wrong, I’m all for the green team, but the industry doesn’t benefit from these “developer pressure moves”. I’m pretty sure NV will have this sorted out somehow by the time any relevant DX12 game arrives.
Nvidia just needs to wake up. It’s time for AMD to shine with DX12. We already know that Nvidia sabotages AMD in games with GameWorks, and also lied to their customers about the GTX 970. We all know Nvidia, and Intel for that matter, like synthetic benchmarks that show them being better than AMD. Intel just has faster raw cores than AMD, but with Windows 10 taking advantage of more cores, AMD will compete with, if not do better than, Intel. Most gamers that use Intel use i5/i7 quad cores; most AMD users use 6-8 cores, and they are way cheaper. I’m surprised not to hear anything out of Intel about Win 10 though. Get ready for the new era of AMD gaming, boys.
It seems to me you can’t read right. You stupid people can quote the same sh*t all day if you want, but what you don’t understand is that in real-world tests on games, the issue with the segmented memory is not as bad as people made out.
Again, I NEVER SAID the 512MB WASN’T SLOWER, I SAID WHEN YOU TEST REAL GAMES AT 4K USING MORE THAN 3.5GB THE PERFORMANCE IS AS EXPECTED.
Is that clear?
I’m not denying the segmented memory issue but then you explain to me how ACU runs at 4K using more than 3.5GB and the performance is expected? You can? It’s really not using more than 3.5GB at 4K, so how come the game doesn’t drop to unplayable levels then?
Thanks for proving my point.
wow that’s shady
x86 is an Intel technology. It was invented and developed by Intel, and subsequently granted to AMD. Being granted the x86 license is quite possibly the best thing that ever happened to AMD…