AMD vs NVIDIA by MAHSPOONIS2BIG

AMD’s DirectX 12 Advantage Explained – GCN Architecture More Friendly To Parallelism Than Maxwell

Since the release of Ashes of the Singularity, a lot of controversy has surrounded AMD’s spectacular results versus NVIDIA’s underwhelming ones. Was this DX12 benchmark gimped in order to run faster on AMD’s hardware? Apparently not, as Overclock.net member ‘Mahigan’ shed some light on why there are such dramatic differences between AMD’s and NVIDIA’s results.

What’s also interesting here is that Mahigan has provided a number of slides to back up his claims (which is precisely why we believe this explanation is legit).

As Mahigan pointed out, Maxwell’s Asynchronous Thread Warp can queue up 31 Compute tasks and 1 Graphics task, whereas AMD’s GCN 1.1/1.2 is composed of 8 Asynchronous Compute Engines (each able to queue 8 Compute tasks, for a total of 64), coupled with 1 Graphics task handled by the Graphics Command Processor.

This basically means that in terms of parallelism, GCN GPUs should be able to surpass their direct Maxwell rivals, something we’ve been witnessing in the Ashes of the Singularity benchmark.
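
On the application side, this queue model surfaces through DirectX 12’s explicit command queues. Below is a minimal C++ sketch (not code from Ashes of the Singularity or from either vendor’s driver; it assumes an already-initialized ID3D12Device and omits error handling) of how a DX12 title creates one graphics queue and one dedicated compute queue, which is the work those hardware schedulers then have to juggle:

    #include <d3d12.h>
    #include <wrl/client.h>

    using Microsoft::WRL::ComPtr;

    // Sketch only: create the two kinds of queues discussed above.
    void CreateQueues(ID3D12Device* device,
                      ComPtr<ID3D12CommandQueue>& graphicsQueue,
                      ComPtr<ID3D12CommandQueue>& computeQueue)
    {
        // The "1 Graphic task" path: a DIRECT queue accepts both graphics and
        // compute work and feeds the GPU's graphics command processor.
        D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
        gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
        device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

        // The "Compute tasks" path: a COMPUTE queue accepts compute-only work,
        // which the GPU's compute scheduler (the ACEs on GCN) then services.
        D3D12_COMMAND_QUEUE_DESC computeDesc = {};
        computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
        device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
    }

How many of these queues a GPU can actually service concurrently is precisely the hardware difference the benchmark appears to be exposing.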

It’s been known that under DX11, NVIDIA has provided better results than its rival. And according to Mahigan, this is mainly because NVIDIA’s graphics cards handle Serial Scheduling better than Parallel Scheduling.

“nVIDIA, on the other hand, does much better at Serial scheduling of work loads (when you consider that anything prior to Maxwell 2 is limited to Serial Scheduling rather than Parallel Scheduling). DirectX 11 is suited for Serial Scheduling therefore naturally nVIDIA has an advantage under DirectX 11.”

Regarding the really curious results of DX11 and DX12 on NVIDIA’s graphics cards, Mahigan had this to say:

“People wondering why Nvidia is doing a bit better in DX11 than DX12. That’s because Nvidia optimized their DX11 path in their drivers for Ashes of the Singularity. With DX12 there are no tangible driver optimizations because the Game Engine speaks almost directly to the Graphics Hardware. So none were made. Nvidia is at the mercy of the programmers talents as well as their own Maxwell architectures thread parallelism performance under DX12. The Devellopers programmed for thread parallelism in Ashes of the Singularity in order to be able to better draw all those objects on the screen. Therefore what we’re seeing with the Nvidia numbers is the Nvidia draw call bottleneck showing up under DX12. Nvidia works around this with its own optimizations in DX11 by prioritizing workloads and replacing shaders. Yes, the nVIDIA driver contains a compiler which re-compiles and replaces shaders which are not fine tuned to their architecture on a per game basis. NVidia’s driver is also Multi-Threaded, making use of the idling CPU cores in order to recompile/replace shaders. The work nVIDIA does in software, under DX11, is the work AMD do in Hardware, under DX12, with their Asynchronous Compute Engines.”

And as for AMD’s underwhelming DX11 results, Mahigan claimed that this is mainly due to GCN’s architecture, as the graphics cards are limited by DX11’s use of only 1-2 CPU cores for the graphics pipeline.

“But what about poor AMD DX11 performance? Simple. AMDs GCN 1.1/1.2 architecture is suited towards Parallelism. It requires the CPU to feed the graphics card work. This creates a CPU bottleneck, on AMD hardware, under DX11 and low resolutions (say 1080p and even 1600p for Fury-X), as DX11 is limited to 1-2 cores for the Graphics pipeline (which also needs to take care of AI, Physics etc). Replacing shaders or re-compiling shaders is not a solution for GCN 1.1/1.2 because AMDs Asynchronous Compute Engines are built to break down complex workloads into smaller, easier to work, workloads. The only way around this issue, if you want to maximize the use of all available compute resources under GCN 1.1/1.2, is to feed the GPU in Parallel… in comes in Mantle, Vulcan and Direct X 12.”
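
To make the quote concrete, “feeding the GPU in parallel” under DirectX 12 means that every worker thread records its own command list and the whole batch is handed to the queue in one submission. The following is a minimal, hypothetical sketch (the allocators, lists and RecordChunk callback are placeholders, not a real engine’s API), assuming the GPU has already finished with last frame’s allocators:

    #include <d3d12.h>
    #include <thread>
    #include <vector>

    // Sketch only: record command lists on several CPU threads, submit once.
    void SubmitFrame(ID3D12CommandQueue* queue,
                     std::vector<ID3D12CommandAllocator*>& allocators,
                     std::vector<ID3D12GraphicsCommandList*>& lists,
                     void (*RecordChunk)(ID3D12GraphicsCommandList*, size_t))
    {
        std::vector<std::thread> workers;
        for (size_t i = 0; i < lists.size(); ++i)
        {
            workers.emplace_back([&, i]
            {
                // Unlike the DX11 immediate context, recording separate command
                // lists is free-threaded, so every CPU core can build GPU work
                // at the same time.
                allocators[i]->Reset();
                lists[i]->Reset(allocators[i], nullptr);  // reuse last frame's closed list
                RecordChunk(lists[i], i);                 // this thread's share of draw calls
                lists[i]->Close();
            });
        }
        for (auto& t : workers) t.join();

        // One call hands all of the recorded work to the queue.
        std::vector<ID3D12CommandList*> submit(lists.begin(), lists.end());
        queue->ExecuteCommandLists(static_cast<UINT>(submit.size()), submit.data());
    }

This is the submission model that relieves the CPU-side bottleneck described in the quote; under DX11 the equivalent work funnels through one or two cores inside the driver.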

This is definitely interesting and helps explain why Ashes of the Singularity performs so well on AMD’s GPUs.

Do note that a game’s draw calls are not its only bottleneck under DX12. Both 3D Mark’s DX12 benchmark and Ashes of Singularity use a lot of draw calls. However, a game may hit a Geometry or Rasterizer Operator bottleneck, in which case an NVIDIA GPU will outperform an AMD GPU.

What this ultimately means is that NVIDIA will have to redesign its graphics cards in order to handle more draw calls in parallel. A software solution sounds almost impossible at this stage, though NVIDIA’s engineers may come up with some interesting techniques to overcome this limitation. That, or some DX12 games may hit another bottleneck that favours NVIDIA’s GPUs over AMD’s.

210 thoughts on “AMD’s DirectX 12 Advantage Explained – GCN Architecture More Friendly To Parallelism Than Maxwell”

    1. Dumb comment. Nvidia is going nowhere. They’ll still perform the best in DX11, which most games will be using. AMD needs DX12 to get at least near Nvidia’s DX11 performance, and most games won’t have DX12.

      1. All of the last gen games are DX11, but the future is DX12. UE4 has switched over to DX12, which will account for a lot of middleware games. SE will be using it for Tomb Raider, Deus Ex and Hitman. All of the big companies are switching over.

          1. There’d be no point to switch back. DX12 is a much more convenient API for developers. Developers have already expressed their reasons for switching to DX12.

          2. You really think devs are going to be working on DX11 going forward when DX12 lets them do quicker and cheaper ports with more impressive visuals to flaunt at whatever trade show? I mean the only real reason to hang on to DX11 support right now is windows 7 users who have yet to go to 10. Both AMD and Nvidia cards actually have rudimentary support for DX12 going back years so we won’t get a Mantle situation. They aren’t going to do legacy support just to save Nvidia in a couple benchmarks.

          1. Ok, I’m just planning to use Skylake no matter if I end up with an AMD GPU or not. So I will use Win10 and turn those spyware parts off.

    2. haha amd is good at one thing (draw call performance) and you claim bye bye. LOL do you realize 90% of games don’t push draw call performance? It’s mainly RTS games.

  1. Most of the upcoming DX12 games will be low scale NTWIMTBP console ports anyway and this will give them time to fix this issue for Pascal’s release.

    Till then Nvidia owners including myself will be getting the short end of the stick when it comes to DX12 performance in Neutral or AMD Gaming Evolved games.

    At least my Gigabyte 980 Ti G1 Gaming is destroying all DX11 games at the moment, so it’s not that bad.

    1. The problem is that the consoles are using the same GCN architecture to get low level api advantages. With DX12 a lot of that is now going to carry over to pc development on AMD cards.

      1. Nvidia just needs to limit the damage until Pascal releases next year which will probably be More Friendly To Parallelism Than Maxwell just like GCN. In the meantime Nvidia will be destroying DX11 games and AMD will be struggling to keep up with Nvidia in them.

        1. Wonder if i should hold on until Pascal is out? I got a 760/770 and i do get some annoying lag in a lot of more recent games. I wanted to upgrade but this actually makes me wonder. Perhaps waiting would be wiser. The 770 (even the 760) can still deliver some pretty good FPS.

          1. Do what I’m doing, buy a 980ti. Then when Pascal cards come out, sell it on EBay, it’s like you’ve rented it. Just look after the box 🙂

          2. I think Pascal’s design has already been finalized so I think it would be better if you wait for the benchmarks. If Pascal’s architecture won’t address the issue stated above or Nvidia could not make a fix for it, you will be stuck with that card and may regret it.

            Do what I’m going to do and wait until DX12 games are out and find out what brand of cards will be better.

          3. well it’s not as if, while designing Pascal, they knew nothing about the weaknesses of their Maxwell arch

          4. Not a bad idea. I guess i’ll just wait to see how DX12 is doing. I’m not in a huge rush to buy a brand new card, but it’s starting to get obvious that i need a new one.

          5. Yes, wait for Pascal, actually everyone still having Kepler 600/700 card should do it. Maxwell is not worth it.

          6. Yeah, I’m still on Kepler. My 650Ti Boost is only just starting to feel a bit behind the curve now, but there isn’t a single game I can’t run at decent FPS here, even if I occasionally have to turn a few settings down to ‘high’, from ‘very high’ 😛
            I may hold out for Pascal and see how things are looking then, but I’ll probably just wait until NV’s first proper foray into DX12-optimised hardware before I upgrade.

          7. My GTX 570 holding up just fine. Pay more attention to your rig settings under win. Those lags etc. will go away.

        2. Pascal will improve things but I don’t think it will be a final fix. Nvidia will have to change more than AMD for the next generation, and I highly doubt they had time or anticipated it early enough.

          Next generation, together with DX12, AMD does have its chance and they deserve it; they always took the risks and tried to come up with something new and different, and if they play their cards right we could see an impact as big as the first launch of GCN in 2011.

  2. I understood bugger all. 😀

    Seriously though, will DX12 make better use of the hardware (overall), or is this a specific scenario (Gfx Card architecture dependent)?

    1. I wonder if DX12 can be “tweaked” to run on serial scheduling to give nvidia back the advantage. IDK, nvidia knows better, they will figure it out.

    2. It is mainly about draw call performance which can get really high in RTS games. FPS shooters do not stress draw call performance much.

  3. After all the bad news/bad press AMD has been under in the last months, at least they get a glimpse of good news. Being naturally optimized for parallelism is a must. I hope they can truly benefit from this and stop being where they are right now. Nvidia needs some competition. I feel they’ve been grabbin’ asses for the last 2 gens (even though they perform better than their counterparts). Now, Pascal’s architecture is already done so there’s probably no modification possible at this point BUT, nvidia being nvidia, i’m sure they’re not going to let this go and will come back strong, either with a “somehow” major driver overhaul (less likely) or an architecture more prone towards parallelism, which is the future no doubt.

    For the time being i’ll enjoy my titan X but i can see myself buying a pascal gen as soon as the king comes out.

      1. Well not “swimming” in money but like everybody here i choose where i put my money. Top dog GFXs and Cars are what i like. Somebody else would say it’s uber useless etc but they buy freaking 200$ jeans which i don’t care for.. soo catch my drift ? 😉

  4. Heh, so technically despite having dx12 available from fermi and up, they will run garbage. Nice job nvidia. Meanwhile my 3 year old amd gpu will run just fine. Yay.

    No wonder the GCN was built for longevity.

    1. Yep, and drivers virtually do nothing because DX12 is/will be close-to-silicon programming.
      So goodbye new drivers (it’s not the DX11 crap tuned for benchmarks). Radeon with the Asynchronous Compute Engines in GCN is very future proof + lots of new effects at the close-to-metal level. Look into the new NFS, SW BF3 and Frostbite games (especially in Mantle 😉

      Here is my 3Dmark DX12+Mantle ovr. test
      XFX R280X 1025/1500 1.162v + CPU x6 3.86GHz
      It’s a bug in 3DMark so the OC doesn’t work -> it’s the BIOS 1st setting.
      Normally i have 1050/1550 1.175v and 1060/1600 1.181v and 1075/1650 1.218v

      1. I don’t understand one thing. With 355.60, my 970 gets 17mil draw calls. So how exactly is maxwell weak with draw calls in dx12? Unless the test is not designed to overload the *thingies* (64 in GCN, 32 in maxwell).

        I should add that an overclock results in a ridiculous amount of perf gain, especially by using a custom BIOS. I think the test does not load the gpu at 100% boost clock on the fly, which probably reduces the score. You could probably get more with a custom BIOS I think.

        Anyway I’ll stop, it feels like I’m defending a brand (which I hate).

        1. Why exactly do you hate them if you have reason to defend them/support their technology? That makes it seem like your hate is just because “people hate them”. Make your own decisions.

      1. Comparing a real life still to a videogame still makes no sense. How well your hardware renders that photo 60 times per second with physics calculations is more important.

    2. How can you know something “will” “run garbage”, when there aren’t any DirectX12 games out to judge by?

      Your 3 year old gpu probably will run So-so on medium settings like any 3 year old gpu will.

  5. This explains a lot really. I knew it was not a driver issue. But I sure did think maybe Nvidia did not have enough time with its driver. So now I am stuck with two 970’s wondering wtf future DX12 games will be like, performance-wise. So much for having 12.1

    And this shows Nvidia was full of it when they said they knew about DX12. Well, if they knew of DX12 they would have built Maxwell around this tech years ago… But yet the only arch that is showing an advantage is AMD since, let’s face it, AMD built its arch around Mantle. And DX12 has hints of Mantle. Wow. Who have M$ and Nvidia been trying to fool?

      1. Why? I enjoy so many games with them. I will wait and see how DX12 games go with them. Besides it’s not like I have money on trees to buy new GPU’s again.

        1. don’t sell them, it’s not worth it -> Stick with them, and when the time comes -> switch to Fury-X or Nano 😉
          I’m waiting for Fury-X now -> will have it around November 😉
          But for now my R280X is OK for most of the Games 😀

          1. F that, I will still be on 28nm… It’s kind of a lost cause right now. We don’t even have DX12 games out yet… And everything that will be out will be a wrapper, not a full-on DX12 game. And by the time true DX12 games come out, 970’s/Fury/Nano will be a thing of the past.

          2. Hmm, maybe you got the point, bro 😉 So we have to wait -> till Spring… These games will have DX12 by then:
            Deus Ex, NFS, SW BF3, Mirror’s Edge 2, Tomb Raider, Hitman, AoS, GRIP, Fable Legends [Witcher 3, ARK etc. will get a DX12 update in the future]
            Now i’m happy and playing AC:Unity 😉 so my GPU is sufficient.

          1. 3.5 yet I run games at 4K going past 3.5 gig’s and I notice no fps loss or stuttering……

            Can’t say it’s gimped DX12 performance when there is still so much more that needs to be done with DX12 other than what was shown in AoS…

            And the main part of “Do note that a game’s draw calls are not its only bottleneck under DX12. Both 3D Mark’s DX12 benchmark and Ashes of Singularity use a lot of draw calls. However, a game may hit a ‘Geometry or Rasterizer Operator bottleneck, in which case an NVIDIA GPU will outperform an AMD GPU.” Should be taken into consideration.

          2. I know all that about the ROPs and L2 cache… And I am sure their marketing is the one who messed up, as always. Just like they tried to blame the AoS benchmark. It makes sense when the 970 was around $200 less, minus only CUDA cores, at launch.

            I’ve been using Nvidia/AMD-ATI since 1999 and before that 3Dfx. I’ve seen both companies do stupid things over the years. But it’s nothing to get mad and throw a fit like a 2 year old about.

      1. It’s way too early to say Nvidia is at a loss, that’s for sure.

        Interesting part is. “Do note that a game’s draw calls are not its only bottleneck under DX12. Both 3D Mark’s DX12 benchmark and Ashes of Singularity use a lot of draw calls. However, a game may hit a ‘Geometry or Rasterizer Operator bottleneck, in which case an NVIDIA GPU will outperform an AMD GPU.”

        1. ..or a shader bottleneck, in which case AMD would have an advantage. It is really on a per-game basis. I see GameWorks as a much bigger problem for the industry than this. DX12 will boost basically all HW anyway.

          1. Seems like at the start of DX12 neither AMD nor Nvidia will have a huge advantage over the other in games.

    1. Don’t write off feature set 12.1 just yet, John. Remember, it is an Nvidia-exclusive feature set and they wrote the code for it, so it should work really well for Nvidia at least. Pascal will fix the issues if the future does hold problems for Maxwell.

      It sucks, but it is pretty much how the game works with computer tech. You buy mostly for the now, not tomorrow.

        1. Is it that much different than Gameworks? I thought about it and it is actually 12_1, which is a feature set instead of a different version of the same API.

          My question is, what happened to 11_3 (or was it 11.3?) that was supposed to have low-level-API-type performance for DX11 users?

          1. I don’t even see 11.3 being used… No game has even gone past 11.1.

            And yes, 12.1 would be much different than an SDK, since SDKs and APIs are two different stories.

            What will be most interesting is how AMD’s Radeon SDK as well as Nvidia’s GameWorks SDK work with DX12 games. How will performance be affected? That remains a Scooby Doo mystery.

      1. There is nothing to fix, most games will continue to run on DX11 for a long time yet and NVIDIA still has the best performance in it. AMD will just have an advantage in DX12, though it remains to be seen what will happen with driver updates and DX12 optimisations for NVIDIA. Pascal will just be a more thoroughly DX12 design; NVIDIA never peaks too early, just like with DX11 Compute performance, where Maxwell is the best of all their GPUs and matches AMD’s very high DX11 Compute performance, which they have had for a long time.

        1. Yet all those that need / would possibly benefit from a much higher draw call count will be able to use the new DX or Vulkan. For games not bound by the number of draw calls we have DX11, as you have said. No problem there.

  6. Hooo … so that’s how it works. DX12 games are still almost non-existent, so I’m going to stick with my Nvidia GPU for now.

  7. Most people are not comprehending the article. This is not an end-all “DX12 GCN rules all” thing. They found that Nvidia can’t hit the high draw calls that AMD can. So, in games with high a*s draw calls, AMD will perform better. In games with heavy geometry or shading/rasterizing, Nvidia will still perform better. Most games do not require sky high draw calls, although in the future DX12 games will most definitely be pushing draw calls exponentially higher than DX11 games were.

  8. All this says is that AMD have caught up with NVIDIA on API performance. NVIDIA has much better DX11 performance and gains not that much from DX12, while AMD have poor DX11 performance but much better DX12 performance, so they have just closed the gap and in some cases surpassed NVIDIA in DX12.

    Let’s wait for some proper driver optimisations; yes, AMD have done well with their drivers in the past year or so, and it seems DX12 suits them much better than DX11.

    1. The problem is, on DX12 most of the optimization will come from the game engine itself.
      Nvidia or AMD driver optimization will hardly do a thing, that’s why Nvidia is in a panic.
      Nvidia is pretty good at software optimization; with DX12, the optimization needs to be done close to the dev teams during the development of the games and not with drivers.

      So, one of the biggest advantages for Nvidia is lost (drivers), although i wouldn’t panic, nvidia has great partnerships with dev teams, so i guess they just need to change their modus operandi.

      About AMD, well they are already more prepared for DX12, so i expect them to have better results in these early DX12 games, simply because their GCN architecture really takes advantage of more parallelism, which is something that DX12/Vulkan/Mantle is all about too.

      1. Let’s not forget that most AAA developers work with GCN anyway, as it’s in the new consoles, so GCN will get its optimizations by default; AMD won’t even have to do anything.

        1. Yeah. Both the Xbox One and PS4 use GCN so engine-specific or game-specific optimizations for that will benefit AMD’s GPUs across the board. This is very good news for AMD but at the same time, I want NVIDIA users to get good DX12 performance. I hope NVIDIA figures out a way to improve performance for Kepler and Maxwell cards.

          1. From what I’ve read there’s nothing to improve.
            Nvidia has already been working very closely with Oxide, and what needed optimization got it from both Nvidia and the developer.

  9. Please just wait for at least one completed and officially released DX12 game before deciding which architecture is better for the new API.

  10. I will translate: Nvidia builds their cards in such a way that they become obsolete next year (or they treat them as obsolete the following year, as they did with Kepler). They don’t give a sht about these results. As long as people buy their next product, there is no problem at all. Can’t wait to see how they will treat the 900 series once Pascal is out. They will make them look like the biggest garbage ever compared to Pascal.

    1. The truth behind the 970. Had it been a normal card, it would be perfect. They just had to plant 1 flaw in it to make it useless.

        1. It is not an issue when less than 3.5GB is being used. Not very relevant now, but it will be more and more relevant in games to come as they simply will use more memory.

          1. It’s not an issue when 4GB is being used, I have 4K videos to prove it, and no games will be using more than 4GB at 1080p or 1440p. The Fury X does some nasty hitching in games anyway from videos I’ve seen, and yet people seem to think the GTX 970 will hitch if it uses more than 3.5GB, when most games might hitch anyway.

        2. Lol smarta$$, no need to defend your purchase. Just accept that nvidia betrayed you, getting you to upgrade sooner than you’d need to.

          1. It’s based on fact, my own tests, people were stupid enough to believe and spread lies that the GTX 970 has only 3.5GB of VRAM.

      1. I think generally as you move from 980—->950, it’s the same card with more flaws to lower the price tag. It’s been that way since the 500 series.

    2. It’s not planned obsolescence, otherwise, they would have dropped Kepler a long time ago. Kepler is still getting support – sure, it’s not where they’re focusing their driver optimisation efforts, but Kepler chipsets are still surprisingly viable if you don’t care about absolute top-end performance.

      Where Nvidia went wrong is that all their current generation cards were designed before DX12 was fully specified – meanwhile AMD licensed their GPU designs to Microsoft for the Xbox One (and hence DirectX12), and submitted their Mantle code to be used as the basis of Vulkan/GLNext, essentially allowing them to have huge sway in the direction of both major upcoming graphics APIs. AMD have, since moving to multiple cores, always favoured parallelism over single-thread compute performance, so this makes total sense.

      It’s not a case of Nvidia being deliberately malicious, they simply dropped the ball over the changes to code methodology that Vulkan and DX12 require.

    3. That’s a little disingenuous. Nvidia operates on a really similar development plan to intel’s tick-tock.

      It’s not a hardware developer’s obligation to make sure that people who aren’t keeping up with the times are doing well. That’s just how technology works. If a far more efficient alternative to the gasoline car becomes available, you can bet that they’re not caring about gasoline cars anymore.

      Similarly, no one is trying to make sure that floppy drives are doing well, and no one is trying to make quadruple-layer dvd’s when they have blu-ray. No one is trying to make highspeed dialup.

      Nvidia does a fair amount of backwards compatibility so far as making sure that older cards will work with DX12. 900 series probably will look like garbage compared to Pascal, but that’s natural. If you’re dumping nearly a billion dollars into a technological leap, of course you’re going to push for consumers to buy your new product.

      I can understand that you don’t like Nvidia, but Apple doesn’t support iPhone 3 anymore, Windows Vista is on its last legs as well. That’s just how things work. They also aren’t being released on a strictly annual basis so your “obsolete next year” is unfair as well.

      1. Are you seriously, *seriously*, comparing people that bought 700 series NV GPUs like the 780Ti to people that still use floppy drives and dialup?

        GTFO here.

        1. Yeah. The 780 came out in may 2013. That’s over two years old.

          Computer years are like dog years. For comparison, there have almost always been 2 years between iPhones. There hasn’t been an update to Windows 8.1 since April 2014. That’s over 16 months ago, due to the development of Windows 10. I highly doubt that Windows 8 will be receiving much support, especially given the fact that they have a free Win10 upgrade.

          –but some people will keep windows 8 right? Maybe tablet users prefer it, or they like the style, etc etc. They can’t expect their old OS to continue to be supported when their company moves on.

          These aren’t perfect examples, but in the world of hardware, if you don’t change with technology, there is no help for you. Think of how pissed people were when companies started abandoning physical releases of games? Why be angry unless you don’t have fast internet (need to update to new technology) or are nostalgic?

          It’s a waste of money and effort and time for companies to worry about things like that. It slows down progress. You can’t complain when the answer is almost assuredly: “Your card is old, buy a new one.”

          1. Dogs, Windows 8, whatever. It does not take away the fact that nVidia stopped optimizing its drivers for Kepler a long time ago. Even a GTX 960 became faster than a 780ti.

            Whether it was in order to make Maxwell more attractive or because they stopped caring about previous-generation users doesn’t really matter to a lot of people with very pricey hardware in their builds.

          2. The GTX960 became faster than a 780ti because the 960 is built on advanced architecture released in January 2015, two years later. There’s no “even” about it.

            And as for “pricey hardware” , the 780Ti is currently 599.00 on newegg while the 960 is 299.99. The maxwell is faster, more power efficient, and less expensive.

            I don’t think they needed to do anything about old drivers to make better, cheaper hardware more attractive. I’m failing to see what you’re upset about.

          3. “The GTX960 became faster than a 780ti because the 960 is built on advanced architecture released in January 2015, two years later. There’s no “even” about it.” You absolutely have no idea what you are talking about; the 780 Ti is lightyears ahead of the 960 in every single aspect and should never ever come close to it in performance. Gimping older yet powerful cards has been the way it’s meant to be played by NVidia.

          4. You know, if you play on the fact that Maxwell has superior DX11 compute performance, then yeah, the 960 would blow away the 780 Ti. Did you know a 560 Ti beat a 680 in some theoretical tests? See, yes, it CAN happen.

            Did you know that inferior GPUs on the AMD side can destroy NVIDIA GPUs in Compute heavy games like Dirt Showdown? This can’t happen with Maxwell but it happens with Kepler.

          5. “theoretical tests” “Doesn’t happen with maxwell”
            You’re undermining your own argument if you’re saying “you’re right, but it could!” The point is, the 900 series is more advanced. Any drawbacks have been a result of the fact that software and hardware tend not to match up.

            Maxwell is built on new architecture that is fundamentally different from what developers have had to use in the past. Nvidia scrambles to release driver updates for nearly every major game release. That’s unnecessary but they do it anyway. When you have thousands of developers making games for thousands of permutations of hardware, of course you’re going to see slowdowns here and there but that isn’t the hardware’s fault.

          6. The point is it can happen. I’ve put up a benchmark of Civ Beyond Earth where the 960 destroyed the GTX 760 by nearly 30fps. Case in point, it shows just how much more Compute performance Maxwell has over Kepler in a real game.

          7. “the 780 TI is lightyears ahead of the 960 in every single aspect and should never ever come close to it in performance”

            That’s like comparing apples to oranges. Maxwell is an entirely different GPU architecture to Kepler. Each Maxwell core has a much higher compute throughput than an equivalent Kepler core because they basically did a ground-up redesign of their processors, which you can read about here – http://devblogs.nvidia.com/parallelforall/5-things-you-should-know-about-new-maxwell-gpu-architecture/. Also bear in mind that even a stock 960 has a clockspeed far in excess of the 780TI.

            Please do at least try to know what you’re talking about before telling other people that they don’t.

          8. Sorry, did I sound upset? My native language is not english, that might be the reason.

            Yeah, the 960 is built on a more advanced architecture. Of course, that more advanced architecture is severely cut down spec-wise in that model, as you may expect for the price (a 960 is NOT the same as a 980, and they are both Maxwell, right?).

            The 960 was a bit slower at launch than, say, a 770 (which is a 10~15% faster 680). Now it has become faster than a 770, and rivals the 780ti, a much more powerful card (although it’s based on an older architecture, it’s A LOT more of a card).

            And it’s really not driver maturity pushing the 960 faster. Maxwell had been around for quite some time before the 960 launched. It’s driver optimization not targeting (or downright gimping) not-so-old nVidia cards.

          9. Then how come the 290 and 290x are still getting better in performance? Nvidia ditched the 780, while AMD didn’t. They optimized their old tech and it got better, unlike Nvidia’s.

          10. You’re still asking why old tech is old. They changed architecture, and lack funds/time/manpower to continue to support it.

          11. No, I’m not. Excuse me if I buy a 290x, and am glad that it got as strong as a 980 ti, or probably a bit better than what the GTX 1070 will be, and the 900 series didn’t get any stronger. AMD is a smaller company, yet somehow their last generation card in DX12 matches a current top end card. I’m glad that I got more out of my money than Nvidia ever gave me.

      2. I feel like you missed the point of my reply. You’re saying “F*cked over” and “shady” business practices. Those are all very edgy, disapproving phrases you’ve given that describe a completely normal business practice.

        Are you talking about making their 970 partitioned? The impact of that was only about 4%, which people were angry about but they could’ve instead gotten a 980. They’re angry that a purposely gimped card is gimped. Notice no one complaining that the 965 and 950 do the same thing.

        Their lack of support of old hardware isn’t a problem either. Consoles today rarely ever are backwards compatible. It’s not cost effective to support old hardware. Pushing forward is the only way that technology companies succeed.

        1. “Soon the 390X will be around as strong as a 980Ti”

          Probably, and by that time, Nvidia will have released newer cards that outdo it again. Gameworks is relatively new and badly needs optimization, that’s true, but those features are generally optional, and not entirely dependent on your GPU.

          AMD did the same thing with TressFX by publicly refusing to optimize their software for nvidia cards, but I don’t see you complaining about AMD at all.
          It is anti-consumer, on a broad scale, but at the moment, Nvidia owns a 90% market share, and it’s a sneaky tactic to ensure that.

          Microsoft took a shot at google by integrating and locking Bing as its desktop search engine in Windows 10. No one is complaining about that either, they’re just using workarounds. Don’t act like these kind of industrial tactics are new.

          1. even comparing intel’s IGPU’s to dgpu’s is disgusting and you should feel bad.

            Everyone knows the ONLY market gfx matters in is gaming and media. A number less than 1, is how many serious gamers game on intel igpus

          2. the 8800p is not bad for the price, for sure, if you can actually find a laptop with one in it….

            much like the 5775c with the iris pro 6200 (1st non suck intel igpu).

            however this still doesn’t change the fact that ALL the solutions you mentioned are THE SUCK, except for maybe League of Legends 😛

          3. all the china pc cafe computers with onboard graphics are not representative of the actual consumer market IMO

          4. TressFX is based on DirectCompute 5.0, so even Intel can accelerate it. Using the integrated GPU to help with computation would also be possible under Win10.

          5. You’re wrong about TressFX, it had a 23% performance hit on AMD GPUs, Hairworks only has a 16% hit but Hairworks is on a lot more objects in the game in TW3, TressFX is just on Lara.

          6. Because you’re cherry picking what you like to prove your point. The vast majority of games, the GTX 680 is vastly better with very few exceptions. The 7970 is a damn good card, AMD only updated the R9 200 series by a little anyway.

          7. Why wouldn’t it be? It’s a known fact that the *60 series gets near the last *70 series. Just remember that the 680 and 770 are Kepler, while the 960 is a new architecture which has vastly better Compute performance and better tessellation. The 680 and 770 are all about raw performance at this point and they would get slaughtered in Compute by AMD GPUs and Maxwell. It’s not uncommon for a GTX 760 to get near a GTX 670, and they’re the same architecture.

            Just to prove a point: a GTX 960 is nearly 30FPS faster in Civ Beyond Earth than a GTX 760, you know why? It’s because of its superior Compute performance; Civ BE is known for being Compute heavy, and so is Civ 5.

          8. Yeah well, it’s not always that straight forward, people seem to prefer the better driver and software support from NVIDIA. It’s not that NVIDIA have gone down so much, it’s that AMD have gone up a lot in their driver performance. People don’t want to take the jump to AMD for good reason, especially older games.

            I used to have an R9 280; it was bad with older games from what I tried, my older GTX 660 just seemed to be better, but the R9 280 was great in AMD Mantle games.

          9. The 390x is far superior to even the 7970 GHz…… before we even factor in the insane scaling. Pretty sure it’s the best $-per-frame card on the market… With DX12 and Vulkan approaching it sure isn’t getting any worse 😉

          10. Indeed – the HD 7970 performs amazingly well considering its age, just like the 9700 Pro did in its time. A few years from now, it’ll be remembered as a legendary GPU – a title which it fully deserves.

            Anyway, it’s still going strong, it still performs well, and on top of that it has at least one more year left in it! Truly mindblowing longevity, it just keeps getting better and better – It’s the best GPU purchase I’ve ever made, by far! I’m certain it has enough life left in it to last me until 400 series arrive.

          11. Hopefully it’s better than the Nvidia 400 series was… (Fermi BBQ)

            Pretty sure we don’t have to worry as i have a feeling both next gen GPU’s will be AMAZING. Nvidia will finally have to sell the $’s worth of hardware as opposed to selling a really good software solution on top of some decent hardware.

          12. Worry not – Even if AMD fails (which is very unlikely), it still won’t be as bad as Thermi.

          13. the 7970 example vs the 680 and the 290x vs the 780ti are great examples…. The 390x (290x rebadge) is most likely going to do the same.

            This will make the 290x best 2 successive generations of Nvidia cards past the first year….

            I guess if you plan on just throwing money blindly at a problem because some company pays Linus some kickbacks, go for it….. (It’s working for Nvidia so far, after all)

            AMD’s issue is they treat their consumers with intelligence…. If they want to make $ they should just be like Nvidia and treat their customers like ignorant scum. Clearly that is the most profitable.

            I used to be a big Nvidia fan back in the day, my first card was an Nvidia Riva TNT2, i’ve owned twin 9800gtx’s back when you had to buy a special chipset to even SLI (Nforce boards). But the way they treat their consumers and the constant BS and lies from them has really pushed me away from buying any of their products again tbh….

          14. Yeh, their hardware isn’t even half bad. They are just scum that exploit consumers. As i have a soul and a spine i understand that rewarding this behaviour just creates more. So i’m doing my good for the world by voting with my wallet in all aspects of life. GPU’s included…

            Bad Luck Ngreedia

          15. GCN 1.0 was a miracle, maybe 2.0 can be one also???
            This time around though Nvidia is super cashed up from the apple levels of gouging so i would imagine their pascal gpu’s to be very competitive (Competition is good for RRP’s).

            Best case scenario is actually both flagships being amazing. HBM and 16/14nm paving the way for an entire new generation of mindblowing software to take advantage of all this new power.

            Game devs have already been using tricks and work arounds for years just to make what the API would allow them. They haven’t even begun to stretch their legs yet 😉

          16. I won’t buy an nVidia card because I got screwed over by them during the bumpgate scandal. I had a Toshiba Tecra M3 with a 6200 chip that died about a month after I sold it to my mother-in-law. She was pretty upset about that, after I told her it should last at least a couple more years, and it would have, if it had Radeon graphics in it.

          17. I feel what you say is true BUT that’s unfortunately the way it works in many businesses. Let’s compare nvidia to Acura (the car brand). Acura canceled their TSX lineup in favor of a new one (the TLX, a hybrid between the TL and TSX). Bear with me. So i have a TSX and yes, they surely dropped the ball in terms of fixing bells-and-whistles problems, BUT just like nvidia would do, if there was something very very problematic, they would turn around and fix it, at least that is what i believe.

            Plus, nvidia are in a spot right now that AMD would like to be in; they have a “certain” control over the market and that might help them in acting as such (dropping support rather quickly). It’s obvious they want you to buy the new card, and amd would do it too in their position (it’s all about money). But amd right now doesn’t have the luxury to start dropping support for older products. It would be another bad blow to their actual reputation, which is in somewhat bad shape. So they keep their customers ultra happy in order to retain them.

            The market dictates the way the companies are going to act. We all know by now that nvidia’s price targets are higher than amd’s; my best bet would be that nvidia’s customers (overall) “might” have more money (not saying amd’s customers are poor) and nv customers don’t mind changing gfx cards every year or 2, and that, my friend, is part of the problem you’re pissed about (frankly, i find it bad too but that’s the way it is). So nvidia decides to capitalize on that, and guess what, they’re much more healthy financially than their counterparts, so they guessed right! If people stopped buying cards every so often, nv wouldn’t have any other choice besides providing good optimization for older hardware, since they couldn’t roll out new chips every year.

            All in all, depending on your situation, dropping 500$/year might be easy or hard, i don’t want to know, but that’s how it is right now. I just pray they don’t gimp it on purpose (which i doubt) and that they learn from previous mistakes and don’t repeat them for the next gens.

          18. TressFX code is public and nvidia can optimize their drivers any time, as they actually did. GameWorks code is closed and AMD cannot optimize their drivers to support it better at all. AMD never asked nvidia to make GameWorks work properly on their cards, but with access to the code they could convert the CUDA rendering engine to OpenCL, the same as nvidia did with TressFX, and there would be clear competition.
            Now, thanks to nvidia, AMD users have code made by the competition running on their GPUs which is clearly inefficient, may intentionally harm performance, and for now, if they want to play a title, they can do nothing about it. This is clearly unfair competition.
            Just to know: toggling off GameWorks options does not stop it completely from executing functions. A middleware is not an on-off thing, it is deeply implemented in the game engine.
            And no, nvidia does not have a 90% market share; their slides tell their own story, and others say Nvidia is at 63%, AMD at 29%, and Intel and others at 8%. 90% is pure fantasy.

          19. I already addressed the discrepancy of 8%, and at that point you’re splitting hairs if you’re denying that nvidia has dominance of the dedicated GPU market right now.

            If nvidia has 52% of the steam survey, that’s still majority, and we’re only counting people who submitted that information.

            I’m sure blockbuster complained about unfair competition when Netflix came out too.

          20. TressFX had its source code posted long before Tomb Raider was released and NVIDIA could have optimized its drivers. This cannot be said of any of NVIDIA’s Gamedoesnotwork features, as the code is not available for AMD to check and do optimizations; it is hidden behind shady agreements and contracts between NVIDIA and the game developers.

        2. What kind of BS is that? If the 680 was 50% stronger than the 7870 in 2012, it’s going to be 50% stronger in 2222.

          390x is similar to 780ti under DX12 in just ONE game, you have to be a pathetic fanboy to declare that nvidia lost based on this.
          The next 2 triple-A games are going to be MGS5 and then F4, guess which GPUs are going to win? You can keep dreaming of some DX12 supremacy while we’re going to enjoy gaming right now.
          You should be very proud that it took ATI 12 generations of DX to finally match nvidia in performance.

        3. (Quote: Those are all very edgy, disapproving phrases)

          You have been watching too much TV and have become very programmed! You are not even making any sense! This is not a court hearing where people are under oath! This is a friggin comment section where people voice opinions!

          1. An opinion without making an educated/inferred response is just inane shouting. There are enough primates in the world that can only hoot and holler, so let’s leave simplistic noise responses to them, shall we?

        4. “The impact of that was only about 4%, which people were angry about but they could’ve instead gotten a 980.”

          No, the impact of putting ‘4GB’ on the box, but only having 3.5GB available for textures in most games is that the card, for all intents and purposes, only has 3.5GB of usable memory in games. That’s not a ‘normal business practice’, it’s an unconscionable lie. It’s like selling a ‘dozen eggs’ in cartons, but KNOWING with complete certainty that 2 or 3 of those eggs are rotten, and inedible. Saying that ‘the 4GB’s all there’ is like saying ‘all 12 eggs are there in the carton’. Sure, all the eggs are there, but 2-3 of them are inedible, and that’s just downright dishonest.

          nVidia will probably never live this scam down. Sure, they made tons of money off of people who bought GTX970s ONCE, but maybe NEVER again. People won’t forget this fiasco.

          1. Actually, I forgot about it, just like I forgot about this article, 3 months ago.
            …and I just bought a new laptop with a 970 in it, so I not only bought it once, but twice. It plays everything I throw at it at max, due to the fact that the GPU is not solely responsible for performance.
            I’m pretty satisfied, and I’ll probably buy whatever’s next when my laptop runs its lifespan out.
            Do you sift the internet for old comment threads like this?

          2. Well, you’re going to regret buying those GTX970s pretty soon, when DX12 games run poorly on them. But go ahead. Buy what you want. Just realize not everybody is as oblivious to cheating, scams and rip-offs as you are.

          3. I mean, it does everything I want it to. I understand the minutiae of what people are angry about, but it’s incredibly nit-picky.

            You might hope that I’ll regret it, and maybe I am owed 20 bucks back for what RAM/that performance difference is worth, but I can’t say in honesty that I’m at all upset.

            This ‘controversy’ is aging just as fast as technology. You ought to find something better than old articles to gripe about.

            Happy thanksgiving!

      3. Also Nvidia and Intel are fundamentally different hardware companies. When is the last time you installed a new CPU driver? You haven’t, because they don’t require that kind of firmware updating to cater to an inconsistent software developer pool.

        1. I don’t think nvidia has a responsibility to ensure that 3rd party GPU acceleration software functions properly either, sorry. (including video transcoding) Generally that kind of software is designed for OpenGL cards ala Quadro, not their GeForce line.

          “Massively Crushing” is another unsupported negative exaggeration too. Got any more? All your inane complaints are entertaining.

    4. Good try, but -1 anyway.

      The first 9xx GPUs (and thus their architecture) come from 2014, so it would have been stupid on their part to make GPUs optimized for DX12 in a period when it was still FAR from being released, never mind any serious DX12 game (who’s going to play that AoS crap anyway? xD). Nvidia may not have made the most long-term call, but they made the right one, and that is reflected in the actual position of both companies.

      Now, all that remains is for Pascal to fix parallel scheduling and crush Greenland in DX12 performance.

      1. Nvidia’s only real advantage is gone under DX12 (drivers), and if AMD goes with GlobalFoundries over TSMC, a theoretical process advantage over Nvidia’s 16nm TSMC process…

        Actually i think Nvidia might be in a bit of hot water for a change…. (about time)

        OFC your post reeked of fanboy so expecting you to be objective is probably an oversight on my part in replying (did it for others more so than you)

        1. If you think that supporting rational strategies is being a fanboy, that’s your problem. You must really suck as a strategist then xD

          As you suck at judging architectures that aren’t even out yet. Like 2nm would make all the difference in the world 😛 (BTW, both Pascal and Greenland are even in transistor count… well, at least if AMD can make up their mind in the “15-18b” range.)

          Anyway, even given everything that’s happened, on a scale of 0-100 (0 being AMD, 100 being Nvidia) I’m still an 80. But I was lucky enough to skip this transitioning gen, so the REAL DX12 race begins next year.

          1. Yes, this transitional period would be a nightmare if you were an Nvidia owner…..

            Fortunately for Nvidia most of these consumers have more money than brains so they will most likely just buy a new card….

            However, seeing as my 7970 GHz died 2 1/2 years into the warranty period,
            i got a full refund and upgraded to a 390x… Guess i’ll be doing ok in DX12, especially considering i can add another 390x at the end of this year and avoid buying the next gen altogether if i want :D.

            GL doing that on ANY Nvidia card….. Also Xfire > SLI (Nvidiiots don’t like admitting that reality)

          2. And it hasn’t been a nightmare so far for AMD, riiiight…. Are you stup!d, or do you not read any news apart from the ones that interest you?

            http://wccftech.com/4-out-of-5-pc-users-prefer-nvidia-gpus-according-to-latest-data/

            And you better not talk about “brains” when you’re buying cr@ppy cards for a generation of games that won’t be out for a full year yet, instead of more powerful cards for the games you can play RIGHT NOW. By the time DX12 is at all prevalent, Pascal will be out already, and will beat the cr@p out of AMD AGAIN in terms of performance.

            What a bunch of r€tards xDDD

      2. Developer kits for Windows 10 as well as beta builds of DX12 were supposedly provided to GPU manufacturers even earlier than 2014. Since the entire focus of DX12 is hardware-based rather than software-based features, I’d say that it’s kind of a cooperative effort on Microsoft and nvidia/amd’s part. There’s no way to design the API first and then build the hardware after, when they need to know what hardware capabilities they have to build the API around.

        IMO the most interesting goal for Pascal is unified RAM. Having to bump up cost for higher integrated VRAM is nonsense, not counting the fact that you basically need 6GB of VRAM to render 4k at a decent rate.

        It’s always on the cusp of some technological leap, and people are always frustrated when the product they bought becomes outdated. I’m curious to know where they plan to go with “Volta”, that they’ve already charted.

        Business wise, I feel like nvidia is doing the right thing. Right now they’re the leader in their field of research, and I’d rather have a new card out every 2 years than wait 6-8 years for volta to come out.

    5. I think it comes down to two different methodologies in parallel processing – one does better in one case, the other in another. The question is if NVIDIA can solve this without a hardware redesign – because they’ve built so much on the current CUDA platform (HPC market is where NV reigns supreme) they would not want to go away from it easily.

      Also, it’s important to keep in mind that Pascal’s target audience is HPC, not consumers – though it’s widely expected that late next year scaled-down versions will be made for consumers.

  11. Really interesting!! So he basically says AMD shouldn’t be paired with weak CPUs because they’ll be a bottleneck, and that GCN is just not tailored for DX11 in the first place.
    well gonna see how all this stuff goes

  12. No it is not. It allows you to set the tessellation multiplier, which is a godsend. Otherwise you would suffer like Kepler with 64x tessellation in games like pCARS and TW3. An R9 280x can best a Titan in Witcher 3 with HairWorks thanks to that tessellation slider.

  13. So happy AMD finally has the hardware to overtake Nvidia. Right now Nvidia has a commanding lead of 4 to 1 in terms of units sold to customers. If AMD leaves the GPU business, Nvidia will dictate pricing and GPU advancement will slow to a crawl. We as consumers don’t want that, as we’ve seen what happens in the CPU business.

    Everyone, support AMD and help balance this competition or there soon will be only one major player in the PC graphic card market.

  14. Ashes of the Singularity makes use of Asynchronous Shading. Now we know that AMD have been big on advertising this feature. It is a feature which is used in quite a few Playstation 4 titles. It allows the Developer to make efficient use of the compute resources available. GCN achieves this by making use of 8 Asynchronous Compute Engines (ACE for short) found in GCN 1.1 290 series cards as well as all GCN 1.2 cards. Each ACE is capable of queuing up to 8 tasks. This means that a total of 64 tasks may be queued on GCN hardware which features 8 ACEs.

    nVIDIA can also do Asynchronous Shading through its HyperQ feature. The amount of available information, on the nVIDIA side regarding this feature, is minimal. What we do know is that nVIDIA mentioned that Maxwell 2 is capable of queuing 32 Compute or 1 Graphics and 31 Compute for Asynchronous Shading. nVIDIA has been rather quiet about this feature for the most part.

    Anandtech made a BIG mistake in their article on this topic which seems to have become the defacto standard article for this topic. Their information has been copied all over the web. This information is erroneous. Anandtech claimed that GCN 1.1 (290 series) and GCN 1.2 were Capable of 1 Graphics and 8 Compute queues per cycle. This is in fact false. The truth is that GCN 1.1 (290 series) and GCN 1.2 are capable of 1 Graphics and 64 Compute queues per cycle.

    Anandtech also had barely any information on Maxwell’s capabilities. Ryan Smith, the Graphics author over at Anandtech, assumed that Maxwell’s queues were its dedicated compute units. Therefore Anandtech published that Maxwell 2 had a total of 32 Compute Units. This information is false.

    The truth is that Maxwell 2 has only a single Asynchronous Compute Engine tied to 32 Compute Queues (or 1 Graphics and 31 Compute queues).

    I figured this out when I began to read up on Kepler/Maxwell/2 CUDA documentation and I found what I was looking for. Basically Maxwell 2 makes use of a single ACE-like unit. nVIDIA name this unit the Grid Management Unit.

    How does it work?

    The CPU’s various cores send Parallel streams to the Stream Queue Management. The Stream Queue Management sends streams to the Grid Management Unit (Parallel to Serial thus far). The Grid Management Unit can then create multiple hardware work queues (1 Graphics and 31 Compute, or 32 Compute) which are then sent in a Serial fashion, for Maxwell, and a Parallel fashion, for Maxwell 2, to the Work Distributor. The Work Distributor, in a Parallel fashion, assigns the workloads to the various SMMs. The SMMs then assign the work to a specific array of CUDA cores. nVIDIA call this entire process “HyperQ”.
    Here’s the documentation for Kepler: http://docs.nvidia.com/…/6_Adva…/simpleHyperQ/doc/HyperQ.pdf

    GCN 1.1 (290 series)/GCN 1.2, on the other hand, works in a very different manner. The CPU’s various cores send Parallel streams to the Asynchronous Compute Engines’ various queues (up to 64). The Asynchronous Compute Engines prioritize the work and then send it off, directly, to specific Compute Units based on availability. That’s it.

    Maxwell 2 HyperQ is thus potentially bottlenecked at the Grid Management and then Work Distributor (for Maxwell) segments of its pipeline. This is because both these stages of the Pipeline are “in order” for Maxwell and one stage is “in order” for Maxwell 2. In other words HyperQ contains, for the most part, a single pipeline (thus Maxwell/2 is more Serial than Parallel).

    AMDs Asynchronous Compute Engine implementation is different. It contains 8 Parallel Pipelines working independently from one another. This is why AMDs implementation can be described as being “out of order”.

    A few obvious facts come to light. AMD’s implementation incurs less latency as well as having the ability to make more efficient use of the available Compute resources.

    This explains why Maxwell 2 (GTX 980 Ti) performs so poorly under Ashes of the Singularity under DirectX 12 and when compared to a lowly R9 290x. Asynchronous Shading kills its performance compared to GCN 1.1 (290 series)/GCN 1.2. The latter’s performance is barely impacted.

    GCN 1.1 (290 series)/GCN 1.2 are clearly being limited elsewhere, and I believe it is due to their peak rasterization rate, or Gtris/s. Many objects and units permeate the screen in Ashes of the Singularity, and each one is made up of triangles (polygons). Since both the Fury X and the 290X/390X have the same number of hardware rasterization units, I believe this is the culprit. Some people have attributed this to the number of ROPs (64) that both the Fury X and 290X/390X share. I thought the same at first, but then I remembered the color compression found in the Fury/Fury X cards. The Fury/Fury X make use of color compression algorithms which have been shown to alleviate the pixel fill rate issues found in the 290/390X cards. Therefore I do not believe that the ROPs (Render Back Ends) are the issue. Rather, the Triangle Setup Engines (Raster/Hierarchical Z) are the likely culprits.
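
    As a rough back-of-the-envelope check on the triangle-setup theory (the per-clock figure and clocks below are assumptions for illustration, not figures from the post), the peak geometry rates of the two chips come out nearly identical if both set up about 4 triangles per clock:

    ```cuda
    // Rough arithmetic only: assumed 4 triangles/clock for both chips and
    // approximate reference clocks; real setup rates depend on many factors.
    #include <cstdio>

    int main() {
        const double kTrisPerClock = 4.0;   // assumption: 4 geometry/raster engines each
        const double kHawaiiGHz    = 1.00;  // ~1000 MHz R9 290X
        const double kFijiGHz      = 1.05;  // ~1050 MHz Fury X

        printf("290X   peak: ~%.1f Gtris/s\n", kTrisPerClock * kHawaiiGHz); // ~4.0
        printf("Fury X peak: ~%.1f Gtris/s\n", kTrisPerClock * kFijiGHz);   // ~4.2
        return 0;
    }
    ```

    If geometry throughput really is the wall, the Fury X’s much larger shader array buys it very little in this workload, which would fit the two cards landing close together.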

    Mahigan

  15. No, you didn’t understand: it’s not the draw calls that matter most, but the game’s use of Asynchronous Shaders.
    Both the PS4 and Xbox One have Asynchronous Shader units, and now with DX12 it’s most likely that the majority of games will incorporate these Asynchronous Shaders. It’s an important DX12 feature after all.

  16. Pascal was revealed when Maxwell was only just being introduced, so that’s plus one on the uncertainty side. You’ll never know the result until you see it for yourself.

  17. Lol, so much for the power-efficient, low-heat and overclocking-friendly architecture. They said that Maxwell is the first fully DX12-capable architecture, so what were they doing in bed with Microsoft while they were making Maxwell? Didn’t they know that DX12 is all about parallelism? I also own two GTX 980s, so I’m laughing at myself as well, but this is really funny and looks like a trick to sell their last batch of 28nm cards by calling them fully DX12 capable.

    And before someone bashes me for my comment: technically, yes, the architecture is DX12 capable, though it doesn’t exploit the benefits of parallelism offered by DX12, which means it’s built to be more friendly towards DX11 than DX12.

    1. People forget that Maxwell was released about a year ago and there are still no DX12 games even now. There is no point in going for parallelism, especially when most games are DX11 anyway. Also, AMD GPUs see a bigger gain in DX12, but it won’t be 40-80% over NVIDIA GPUs; it will just bring them on par or slightly ahead, not a massive amount.

      Maybe in 2 years we will have a decent number of DX12 games, but Maxwell will be 3 years old by then and will still be best in DX11 games, which will still be the majority.

      1. My friend… Maxwell was touted to be the first fully DX12-capable card, and it is lacking in one of the most prominent features of DX12. What is the point of bringing out a new architecture that favors old technology? And if we go by your theory, then GCN is also an old architecture, in fact even older than Maxwell, and it still gives Maxwell good competition in DX11 games (not talking about the big daddy Maxwells), and now it is gaining massive advantages in DX12 as well, so which one is the better architecture overall? The only new architecture from AMD is Fiji, which will take over big daddy in DX12. GCN goes back to the 7000 series, which means AMD was gearing up for something that Nvidia still hadn’t thought or cared about.

        And yes, in 2 years Maxwell will of course be old, but GCN will be even older and will still see good gains in DX12, so it means owners of GCN cards have better value in their hands right now. By rule Maxwell should last longer than GCN, but right now we are seeing the opposite.

        People usually just defend what they own rather than accepting what’s true. It’s difficult, but to be rational you have to stop defending your choice. We went with a short-lived architecture, and that’s the truth.

        1. What are you basing Maxwell being short-lived on? Asynchronous compute doesn’t prove it’s outdated; it just means it’s not as optimal as AMD’s design. Maxwell and other architectures will still benefit from DX12.

          AMD’s superior DX11 compute got them nowhere for 3 GPU generations; one game shows that AMD’s compute is superior. ONE GAME.

          AMD performed badly for years with DX11; now all of a sudden people are defending AMD as if they did no wrong with their architecture for DX11.

          1. No one is defending AMD’s poor DX11 performance. I own 2x Gigabyte R9 280X cards and I don’t even defend AMD. Their DX11 performance is garbage and I was hoping they’d fix it with a major driver update. What people are saying is that NVIDIA dropped the ball with Maxwell and DX12. It’s an even newer architecture than GCN, so one would expect it to be more future proof. Even GCN 1.0 cards like the HD 7970 GHz Edition and R9 280X benefit from DX12. And that’s an architecture that was first released in early 2012.

            I hope AMD wrecks NVIDIA in DX12 so it forces them to make something even better. I want both companies to always try to leapfrog one another.

          2. When NVIDIA make something better, people will moan regardless; they already do, so I’m not sure what people want other than for the underdog to win somehow. If AMD do “wreck” NVIDIA, it’s not going to magically make AMD more viable as a business; people are still going to buy NVIDIA for better software and driver support. You can’t change that in a short time.

            Trust me, NVIDIA will do something about this (they always do); it’s sad to see people making such big assumptions already.

          3. AMD has steadily been making improvements to their drivers. It’s in the best interest of PC gamers that AMD does well because it’ll push NVIDIA to do better. I don’t want to see NVIDIA die. In fact, I want them to make even better hardware but I also want AMD to make better hardware.

            With that said, it’s far too early to give the DX12 crown to AMD. We’ll have to wait until games are released with full DX12 support and see benchmarks from multiple reputable sources. Just because NVIDIA’s DX12 performance isn’t good in one benchmark doesn’t mean it’ll be poor in every single yet-to-be-made DX12 title.

          4. I didn’t say it’s outdated anywhere; I just said it’s short-lived because GCN, despite being older, is benefiting more from DX12. It should be Maxwell that gains more from DX12, not GCN.

            AMD suffered in DX11 because of their poor relations with developers and less frequently updated drivers, not because of their architecture. Their architecture is good, because sometimes even games that are developed with Nvidia (Shadow of Mordor, Far Cry 4, Watch Dogs) actually run better on AMD.

            Also, compute is an application-specific feature that not every game can benefit from, while parallelism is more of a requirement: more and more games are going open world, have tons of characters/units on screen, etc., and all of this requires parallelism.

  18. Maxwell was a joke; that’s why many gamers (Nvidia owners) still have their GTX 6XX/7XX cards. Personally, I am waiting for Pascal GPUs.

  19. For those wondering why Kepler is so bad and why Maxwell is so good.

    http://i.imgur.com/jSDBDN8.png

    Look how a GTX 960 destroys a GTX 760. The reason is Maxwell’s superior compute performance. When games become even more compute heavy, Maxwell cards will be just fine; the 600/700 series won’t be, because of Kepler’s bad compute performance.

  20. You’re missing the fact that you and I and the audience in question are buying dedicated GPUs. No one using Intel integrated graphics is trying to play The Witcher or Crysis 3.

    And no, having 30 percent doesn’t mean you’re “hammering the sh*t out of their sales”, it means they’re behind. That’s basic arithmetic. I can’t re-teach you that concept, “buddy”.

    If AMD is focusing on processing, that means they’ve effectively stopped trying to compete. That doesn’t mean “give them bonus points, they’ll come back.” That’s just ignoring the data.

    The point stands that Nvidia is currently dominating the market.

  21. I’ve been saying that since the whole thing started when The Witcher 3 and Project CARS came out. Everybody blamed GameWorks, but GameWorks does not use DirectCompute.

    The Witcher 3’s physics uses DirectCompute, so it will work on AMD cards. And Project CARS uses DirectCompute and no GameWorks effects.

    And blind fanboys who are anti-AMD think GameWorks “gimps” Kepler when it clearly doesn’t. If that were the case, then games like Titanfall, Killing Floor 2 and Dying Light would run like the graph you are showing for Civ 5 on Kepler cards.

  22. Also, if someone bought a 7970 three years ago, they would still be fine for 1080p gaming…

    You just don’t get that type of value out of the green team EVER. (by design)

    1. No, of course. I’m gonna tell that to my GTX 560 Ti, which is still able to run modern games at 1080p and medium-high detail.

      It’s just hilarious that you’re commenting as if Nvidia hasn’t been the king of performance for the last decade or so. Pathet!c xD

  23. The 780 Ti is Kepler.

    The ONLY Maxwell cards from the 7XX series are the GTX 745 and 750/750 Ti,
    and keep in mind that these are 1st-generation Maxwell chips (GM107/GM108).

    The 2nd-generation Maxwell-based GPUs are the 9XX series (GM200, GM204, GM206).

    So please at LEAST read the links you give before you post them.

    1. Then don’t enable GameWorks features on AMD cards; you have the option, so why cry about options? AMD users can feel smug about Far Cry 4 running better on AMD GPUs.

    2. With what feature? Killing Floor 2, an Unreal Engine 3 game that looks like it was designed for a 750 Ti, recommends a GTX 980 for gibs and fluids via PhysX FleX at 1080p. The idiot who runs that at 1080p instead of 1440p or higher should be laughed at anyway. You do not buy a 980 for 1080p, lol.

      Hairworks? Download the monsters-only mod and set tessellation to 16x in Catalyst. Congrats: you now have higher performance than the 9xx series in HairWorks, and the HairWorks looks the same. Nvidia does not allow users to change tessellation in their driver. Know who gets F’d over by GameWorks? Kepler owners, GTX 950 owners, 960 owners, GTX 970 owners. Why? Because any Nvidia effect will be designed to just hit 60 FPS on their highest-end card, or the one right under that. That is the whole point of GameWorks: to keep people in a perpetual upgrade cycle with the most unoptimized effect possible. Know who will be F’d over by GameWorks when the GPU after the 980 Ti releases? 980 owners. Who cares, man. Buy the best-value card at the best price. The 980 Ti? A decent value. Below that? AMD has everything pretty much on lock for bang/buck, especially when end-of-line R9 290s are considered.

  24. It isn’t. Do you even read your own links?
    From Wikipedia: GTX 745, 750/750 Ti, 850M/860M (GM107) and GTX 830M/840M (GM108).

    Weird, the 780 Ti isn’t on there.

    Just to make sure, lemme do a Ctrl+F on your other link. Weird, “Maxwell” isn’t typed a single time on that page.

    You dun goof’d. edit: “Get an education buddy.” aha..ha..hahahaa..wow. such vitriol. such oops.

    If you’re going to make personal attacks, you’d best be 100% on your fact-checking, man. Regardless, you ought to learn to participate in a debate without resorting to insults.

  25. Two possibilities for how badly they scored. 1) The game uses newer CPU extensions that the older AMD tech does not have. Dolphin Emulator has done this; games have not. Maybe now they will start. We see AMD doing better on the GPU side with DX12, and it may be that Intel’s newer CPU architecture does better on the CPU side in DX12. This is a PC exclusive, I think (Ashes), so that might be the case. Triple-A games would probably not use things that newer Intel architectures have and AMD does not, as the consoles use AMD CPUs. So we may only see this happen in a PC exclusive.
    2) We might have a core parking issue in Windows 10 like we did with 7. That was fixed with a third-party program on 7. I do not have Windows 10, but this could easily be verified if you do by looking at a game like PlanetSide 2 or BF4: open up Task Manager and the CPU thread graphs, play the game for a bit, then go back to Task Manager and see if some threads are not moving at all. That was the case on Win 7 on my i7 and my AMD CPUs.
