AMD’s Public Mantle SDK Coming This Year, NVIDIA & Intel Free To Use It

During a presentation by AMD’s Richard Huddy at PDXLAN, the Red team revealed that the public SDK for its low-level API, Mantle, will go live this year. As WCCFTECH reported, once this public SDK goes live, both Nvidia and Intel will be free to write and release drivers that enable Mantle on their hardware across all Mantle-enabled games.

The big question now is whether NVIDIA will support it or not. A couple of months ago, NVIDIA claimed that Mantle wasn’t THAT groundbreaking – or a necessary API – as DX12 will offer exactly what Mantle does, and even more.

Naturally, most of us would love to see NVIDIA supporting this low-level API, but that seems unlikely to happen. AMD claimed that the green team will be free to use Mantle’s SDK, though we are pretty sure that NVIDIA won’t bother with it.

Still, we can only hope that NVIDIA will think – at least for once – outside the box. After all, this would benefit its customers, and it would show everyone that the company actually cares about them.

As Richard Huddy claimed, there are currently over 100 game development teams who have signed up for Mantle.

Here’s hoping that the green team will surprise everyone!

98 thoughts on “AMD’s Public Mantle SDK Coming This Year, NVIDIA & Intel Free To Use It”

  1. Good to see AMD not promoting proprietary crap, unlike Nvidia. Openness is the true potential of PC gaming. Good move, AMD.

    1. Well, to be fair to NVIDIA, they have great OpenGL support and performance, even on Linux, and their GPU tech demos have used OpenGL. My guess is that NVIDIA doesn’t want to use Mantle because it’s from AMD, so they can’t buy it and claim it for themselves like PhysX.

      Mantle has a pretty impressive lineup of games, and games already on the market use it, unlike DX11.2/11.3.

      1. Yeah, they don’t want to use Mantle because it’s from AMD, but I suspect it’s also because it wasn’t open until now. Maybe now they will reconsider, though I don’t see much chance of that happening.

          1. Not sure what you’re getting at; there’s a big difference between something as insignificant as PhysX and an entire rendering API.

          2. AMD CPUs can run PhysX just as Intel CPUs can; the CPU path is just highly unoptimized. The GPGPU addition (the main code is still computed by the CPU while some parallel parts run as GPGPU) cannot be used by AMD due to closed CUDA. AMD never refused PhysX, and NVidia is making the new FLeX, which will be DirectCompute anyway!

      2. Yes, that’s the attitude I don’t understand from Nvidia: they are great to their own customers but extremely punishing to those who don’t use their hardware. Right now AMD is trying to compete on a pure performance basis while Nvidia is drumming up their own tech like PhysX, GameWorks, etc. They need to get out of this closed-minded attitude and compete purely on performance grounds.

        Right now AMD has a great opportunity to make Mantle as public as possible, and it seems they are seizing that opportunity properly.

        1. Nvidia invests millions in PhysX and GameWorks R&D and sells them, just like every company. AMD invests millions in Mantle and HSA R&D and gives them away for free. Who’s the stupid one? Nvidia’s profit and sales are rising; AMD’s sales and profit are dropping and its debt is rising.

          1. Of course Nvidia’s profits will rise when they sell graphics cards at ridiculous prices like $1,000 or $3,000, with help from the stupid customers who buy them. The reason AMD won contracts with all three console manufacturers is that their hardware is sensibly priced, and they currently have the best APUs on the market.

            Then we have the unnecessary money-making G-Sync, which should be free and open, as shown by AMD’s FreeSync.

            Maybe Nvidia is not stupid, but they are certainly making their customers stupid. As for AMD, their position is much better than it was 2–3 years ago.

          2. I wanna correct you a little bit though.
            Firstly, the only $1000+ single-GPU cards they brought out were specifically marketed as hybrid workstation/gaming cards. There’s no reason to get one if you’re a pure gamer, but if you’re a gamer and a media creator, it could seriously help if you use CUDA rendering and double-precision calculations.

            Next, FreeSync, as of right now, requires an addition to the monitor, just as G-Sync does. It also requires specific AMD GPUs to work, meaning it isn’t “free” or “open” right now. Until it is, it’s just “AMD’s G-Sync” and thus not really a fair comparison in the way you used it.

            nVidia does price their GPUs higher, though; it’s undeniable. You buy nVidia if you want their features most, and you buy AMD if you want theirs. AMD’s features include low price, which is the feature many PC gamers want more than anything. If AMD really opens up Mantle for everyone, then fantastic. nVidia said that they and Intel had been asking for access for ages and got no replies in the past, so take everything with a grain of salt.

          3. A little cross-correction for you too. From what I remember (and I remember very well, because I read a lot of reviews and visit both AMD and Nvidia websites often), the Titan, Titan Z and Titan Black were heavily advertised as gaming GPUs; even today they are advertised as the most powerful gaming GPUs.

            http://www.nvidia.com/gtx-700-graphics-cards/gtx-titan-z/

            See the description please.

            Now, after these insanely overpriced Titans, Nvidia kind of cheated their own customers by releasing the GTX 780 Ti, which performs better in gaming. Then AMD surprised us all by releasing the R9 290, 290X and 295X2, which cost significantly less and perform equal to or trade blows with them depending on the game. Now, without getting biased, tell me which is better? There is no debate on the fact that AMD offers the best price/performance hardware.

            As for FreeSync, you forgot that AMD worked with VESA (an open standards body) and that the technology will be based on DisplayPort 1.2a (a popular video standard). The thing is, Nvidia likes to complicate things in the name of “their own tech” thinking when a similar solution could easily be achieved. G-Sync will include licensing fees which FreeSync doesn’t have; of course you need an AMD GPU and a new monitor for that, but the technology itself will be open to Nvidia too, “if” they want to support it.

            Do you mean Nvidia was asking for access to Mantle? That’s the first time I’m hearing this; as far as I know, only Intel asked for access while Nvidia kept on bashing it. As for the access, you forgot that the API was still in a beta stage (a work in progress), so this SDK will end that beta phase. I don’t see anything wrong here; they took their time finalizing it and then released it, much like DirectX.

          4. NVidia said publicly that they will not comment on Mantle and will not use it! Mantle could not be offered earlier because it was not finished, which AMD said many times (they had to finish the documentation, etc.).

    2. Are you sure Mantle won’t be a proprietary API? How open will it be? Open source? I doubt it. By the way, DirectX is proprietary too.

      1. By releasing a public SDK they are making it like DirectX, which means anyone can benefit from it. It will still be proprietary in the sense that new iterations of the API will come from AMD only, just as new DirectX versions always come from Microsoft.

        I say proprietary for Nvidia because they like to keep everything to themselves. I am 100% sure that AMD GPUs are capable enough to run GameWorks features properly if AMD got access to its libraries and could write optimized drivers for them. They are also capable of processing GPU PhysX, if only Nvidia allowed it, but that’s too much to expect from Nvidia.

        1. “…which means anyone can BENEFIT from it.” That’s debatable; I would say USE it. “…new iterations of API will come from AMD only.” And there I see the problem. Nvidia spends time and money on creating a Mantle driver, and then AMD releases a new version and oops, Nvidia, your driver doesn’t work any more. That’s too bad. I, as a big fan and customer of Nvidia, would be really sad if Nvidia started supporting Mantle. I believe it’s a waste of time.

          1. Of course we can debate that AMD will benefit more from newer versions of the API, but at least they are giving others something to study by releasing an SDK, unlike Nvidia, which keeps all of its technologies completely locked away from AMD.

            It doesn’t matter which side you belong to. I, for example, am a big AMD supporter, but I know that Mantle benefits gamers; it’s what forced Microsoft to work on DX12, and if AMD is trying to make it open to Nvidia and Intel, then it can benefit a wider audience and it’s a good move. Personally, I would like to see Nvidia technologies on the AMD side too, but like I said, that’s too much to expect from a company that firmly believes in a closed-technology approach.

          2. The question is: who will set the direction of Mantle’s development in the future? Will it be AMD, with others having to adapt to their solutions? Then it doesn’t matter if it’s open or not. If NVIDIA has no say in Mantle’s future, no one can blame them for not supporting it. I doubt AMD would support an open tech if that tech were solely in the hands of their competition.

          3. Not going to turn this into an argument, but like I said, AMD is taking one positive step which Nvidia isn’t. Nvidia would have to make some compromises with Mantle, of course, but at least they would get their hands on whatever AMD releases, unlike PhysX or GameWorks, which are beyond AMD’s reach thanks to Nvidia’s closed-mindedness.

            AMD can’t optimize their drivers for GameWorks because they don’t get any access to its libraries, but in the case of Mantle, at least Nvidia can optimize their drivers for Mantle-enabled games thanks to the SDK.

            And no one is blaming Nvidia for not supporting it, but by doing so they will only harm their own customers, because Mantle-enabled games will keep performing better on AMD.

            Secondly, Mantle has a different and better approach to using dual GPUs; you can see what Firaxis (Civilization: Beyond Earth) has done with Mantle. It could lead to better CrossFire and possibly SLI support, but if Nvidia doesn’t want to get out of “our own tech” thinking, then it’s their problem.

          4. Mantle is an advantage, that’s without question. The problem is the dependence on a competitor’s decisions about how Mantle will look, without a chance to change it. I think if NV came out with their own API under the same rules as Mantle, AMD would not support it, for the same reason. But that’s my opinion. For now I don’t know how Mantle development will be run or whether Intel and NV will have a chance to cooperate, like with DX. Mantle should be in the hands of someone independent from GPU manufacturers to have a better chance of support from NV. For Intel it doesn’t matter; their GPUs are not for games, and in that area they are far behind AMD and NVIDIA.

          5. Just as NVidia cannot change how DX will look, they can make suggestions, and they can do the same with Mantle. Plus, AMD has never de-optimized code for NVidia hardware. Yet we all know that NVidia will never support it! I’m glad for that. Hopefully they make their own platform and leave the PC to be open and based on standards.

          6. “Same as NVidia cannot change how DX ”

            It’s not the same. DX is in the hands of a third party which needs to cooperate with AMD, NV and Intel. Mantle is in the hands of one company. If NV and Intel have no say in Mantle interface decisions, they will always be at a disadvantage. In a business like this, everybody tries to avoid a situation like that. But maybe I’m wrong; I would like to know how it will work. I don’t believe AMD without question like you do.

            And about de-optimized code: what is that? Do you really think NVIDIA purposely makes code branches that run when an AMD GPU is detected and cause bad performance? They don’t have to do that, and if they did, nobody would cooperate with them.

          7. You are wrong! Microsoft does not have to cooperate with anybody, and the obsolete DX is proof of that! The API is in the hands of one company just as Mantle is in the hands of another company. The point is that anyone can see the source code, so you can prepare a driver perfectly well; NVidia can also offer suggestions (and if AMD wouldn’t accept them, they could just stop using Mantle).
            Yes, they do, and for that reason they never let you see their source code. Just look at the performance and you’ll see the difference. It has also been confirmed with tessellation (they over-tessellate some games; it’s well known). You’ll find many games that NVidia supported which run very badly on AMD. On the other hand, look at AMD games: some of them often run even better on NVidia!

          8. 1. DX is used in 99% of games. How many use Mantle? Five? How can you say DX is obsolete? DX is obviously much less efficient than Mantle, but that doesn’t make the API obsolete. Maybe in the future Mantle or a Mantle-like API will replace DX (meaning the current DX; it could be DX12 too). That is a distant and unclear future. Until then, DX is relevant, and right now even more relevant than Mantle or any other API (for PC gaming).

            2. NV really did over-tessellate some games, like Crysis 2 (plain walls, not complex objects). But that doesn’t mean that everything AMD GPUs can’t handle is over-tessellated. In some cases you need high tessellation. And of course none of us knows whether HairWorks is over-tessellated, as you claimed above. AMD’s tessellation performance isn’t the reference for what is or isn’t over-tessellated.

          9. AMD is taking one step? AMD’s technologies are as open as possible while NVidia’s technologies are as proprietary and black-boxed as possible. That’s more than one step!

          10. I’m not criticizing them, mate; I meant that AMD is going in the right direction. I am a huge AMD supporter, and the major reason for that is their open-mindedness.

          11. Mantle has already shaped the future simply by showing what’s possible. DX12 or the next OpenGL would never have come this fast without it!

          12. I think that too. It’s good that Mantle is here; it put pressure on GPU API efficiency. That’s something I never questioned.

          13. The problem is that black-boxed GameWorks cannot be optimized by AMD, since AMD cannot review the code to optimize their drivers, and we see that GameWorks games all run worse on AMD (regardless of how much AMD tries to make blind optimizations). The Radeon SDK and its effects (which are often much better than NVidia’s anyway: PCSS vs CHS, TressFX vs HairWorks, etc.) run well on any hardware (source-code distribution is available to everyone, plus documentation and samples).

    3. Being proprietary isn’t the issue here; documentation, access and software are. Devs don’t give a flying f*ck if it’s proprietary, and OpenGL isn’t used much in Windows games for other, more important reasons.

      1. Yeah, that’s right, but what I meant is that right now AMD is doing more for the industry while Nvidia is mostly thinking about itself. There is nothing wrong with that either, but you have to drop your rigid attitude once in a while to serve the industry, or say consumers as a whole.

        I have seen TressFX and HairWorks on both AMD and Nvidia, and I can easily say that TressFX is less costly on both, while HairWorks is several times more costly on AMD and easy on Nvidia. Similarly we have GameWorks and PhysX, which are as closed as they can get. On the other hand, AMD opened up Mantle, which is a good thing for everyone, including AMD itself; after all, it will make Mantle adoption a more attractive option now.

        1. HairWorks has a bigger performance hit because it’s used on more than one character.

          Also, just because Mantle is open doesn’t mean it will succeed. Look at DX vs OpenGL.

          1. In Call of Duty: Ghosts only one dog (Riley) and a couple of wolves use HairWorks, and still the performance hit is much bigger on AMD than on Nvidia, while on the other end TressFX works quite well on Nvidia too. See the Tomb Raider benchmarks.

            Mantle has already succeeded and is serving its purpose: Microsoft and OpenGL are developing new APIs because of it. See the lineup of games supporting it already; support will likely increase after this public SDK.

          2. TressFX shows the same render time per strand on comparable AMD and NVidia GPUs, while HairWorks in Ghosts taxes AMD with a 900% higher render time!

          3. Yes, I have seen that in Call of Duty: Ghosts. On my R9 290 the game usually runs at 100+ fps most of the time, but when looking at the dog Riley it dips below 60 with HairWorks turned on. A 40 fps hit is simply unrealistic and makes no sense on a high-end card.

      1. With this SDK you are not forced to buy an AMD graphics card; if Nvidia uses it, then it can work on GeForce too, simple as that. The CPU is irrelevant: Mantle helps any kind of CPU, whether Intel or AMD, by putting more work on the GPU and freeing the CPU to do other tasks, though it’s most beneficial when your CPU is weak and your GPU is strong.

        As for the game, you’ll have to buy any game whether it’s DX-based or Mantle-based, so I don’t understand why you even mentioned this.

      2. The point is, if everyone took the same approach as NVidia, you would soon need three PCs to be able to run everything as intended: Intel, AMD and NVidia. In that sense the PC would stop being a PC and become more of a console-like machine. Some may be happy about that, but I hope most will not.

    1. Somewhat. That driver update was pretty awesome but Mantle makes games run crazy well. Like Battlefield 4 with everything maxed at 1080p without even a stutter or dip below 60. I pre-ordered Dragon Age Inquisition solely based on the fact that there is Mantle support. I know it is going to run like butter, no question.

    2. Shader cache is a driver-side hack that spares the CPU from recompiling shaders. It’s just a short-term workaround, not a real fix. NVIDIA’s ultimate fix would be to use their own low-level API for their own GPUs.

      1. Sean, do you have a more thorough explanation of what you’ve just said? This is the first I’ve heard of this.

        1. It’s basically taking the compiled shaders during a loading screen or during gameplay and putting them on disk as a cache; this reduces CPU load because the shaders don’t have to go through the compilation process, using CPU time, during gameplay. NVIDIA has just done this at the driver level.

          Call it an optimisation at the driver level, then, because there is no other way NVIDIA can do what Mantle can do.
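
For illustration, the caching pattern described above can be sketched in a few lines of Python. This is a toy model, not real driver code: `compile_shader` and the `shader_cache` directory are made-up stand-ins for an expensive, CPU-bound shader compile and the driver’s on-disk cache.

```python
import hashlib
import os

CACHE_DIR = "shader_cache"  # hypothetical on-disk cache location


def compile_shader(source: str) -> bytes:
    """Stand-in for an expensive, CPU-bound shader compilation."""
    return b"BYTECODE:" + source.encode()


def get_shader(source: str) -> bytes:
    """Return compiled bytecode, compiling only on a cache miss."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    key = hashlib.sha256(source.encode()).hexdigest()
    path = os.path.join(CACHE_DIR, key)
    if os.path.exists(path):  # cache hit: no CPU time spent compiling
        with open(path, "rb") as f:
            return f.read()
    bytecode = compile_shader(source)  # cache miss: compile once, store to disk
    with open(path, "wb") as f:
        f.write(bytecode)
    return bytecode
```

The second time a level loads, every `get_shader` call hits the disk cache, which is why the driver update helps load times and stutter without touching the API overhead itself.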

          1. No, AMD doesn’t need it; they have Mantle, which actually solves the fundamental issues that a shader cache only works around by reducing CPU load.

    3. Nvidia’s so-called driver improvements are getting worse by the month! I see zero difference… in fact I saw a 2–3 fps drop in the past 6 months!
      And shader cache is just a hack, like others said… nothing else! Mantle is a brand new API with a new architecture…

  2. I don’t think Nvidia will adopt this. I think Nvidia WANTS bad performance on older computers so that you’ll upgrade your computer with a fancy new graphics card. Console ports now list a GTX 680 as a minimum spec, even though a 680 is easily twice as powerful as a console. That has to make Nvidia smile big.

    1. It doesn’t make sense to me. Mantle helps mostly when you have a weak CPU, so it won’t lower GPU requirements by any significant percentage.

      1. I’m not sure exactly how it works, but it’s something that must be implemented at the graphics driver level. It works with all CPUs, but you’re right, it has the biggest benefit with older CPUs (up to 40% performance increases in some cases).

    2. You mean how Mantle doesn’t even work on anything AMD has besides GCN, while DirectX 12 will work on the 400 series and up from Nvidia? Yeah, they are so greedy.

          1. The features of a graphics API need to be implemented in hardware; you can’t always do it via software. Also, using an older DX version hurts the game, e.g. having a DX9 path in a DX11 game.

          2. DX12 isn’t a hardware change, it’s a pure software change; 12 doesn’t add any new features to improve graphics, meaning hardware isn’t the problem as much as driver support from manufacturers, and in this case Nvidia creamed AMD. However, I agree that games which use 9 and 11 have to make compromises.

          3. Those old cards will not be able to support DX12 fully; it’s not pure software. The Maxwell 900 series and the R9 285 already have DX12 support.

          4. Well, I guess we will know in a year. If Nvidia offers more support, I will call AMD idiots and probably never recommend them again for the long term; if not, I will call Nvidia trolls.

          5. Again, we will see whether Nvidia supports it on more older cards than AMD; my guess is yes, since Nvidia actually told us so and AMD didn’t say a damn thing.

          6. No, I’m quite sure you’re not clearly knowledgeable when it comes to the facts. Simple as that: you may believe in conspiracy theories, but others don’t, and Nvidia claims they can support 12 on more hardware than AMD. Again, you are either blind or don’t understand basic definitions.

      1. Most likely DX12 won’t work with the 400 series at its fullest; I’m not even sure it will work fully with Kepler, as Kepler doesn’t have native support for DX11.1, let alone 11.2 or 11.3.

    3. But the API and driver are the overhead, or the bottleneck. Have a look at what I posted above about draw calls with ACU, DX and Mantle. NVIDIA can’t get around the problem with a shader cache driver hack; it’s just a knee-jerk reaction to Mantle with some marketing spin.

      1. “NVIDIA can’t get around the problem with shader cache driver hack”

        Yes, it’s not the same, but it’s really helpful. After that driver, performance in many games is significantly better and framerates are smoother. But of course it’s not comparable to an API like Mantle.

      1. That test was with unoptimized drivers.
        Learn to read.

        Also look at the minimums, they are quite OK compared to AC for example.

  3. As much as I want it to happen, I don’t think it will… Not to mention I believe DX12 is going to be a joke on PC. So it looks like I am going back to AMD.

  4. Of course Mantle doesn’t offer many differences compared to DirectX, but Microsoft today refuses to deploy the latest DirectX except on its latest Windows product; even Windows 7 doesn’t get DirectX 11.2.

  5. Here’s why Mantle should have been used in games like Assassins Creed Unity.

    “The game (in its current state) is issuing approximately 50,000 draw calls on the DirectX 11 API. Problem is, DX11 is only equipped to handle ~10,000 peak draw calls. What happens after that is a severe bottleneck with most draw calls culled or incorrectly rendered, resulting in texture/NPCs popping all over the place. On the other hand, consoles have to-the-metal access and almost non-existent API Overhead but significantly underpowered hardware which is not able to cope with the stress of the multitude of polygons.”

    Mantle and draw calls.

    “This won’t be the case for the Mantle API. AMD have boasted that they are targeting a rather astounding figure of 100,000 draw calls, at what they consider to be “reasonable frame rates”. They believe that the figure of 100,000 draw calls can also be improved upon with more optimization! That’s an astounding figure, and the increase in graphical detail in PC games would be astounding.”
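
The gap the two quotes describe can be illustrated with a toy Python sketch. This is not real graphics code; the mesh names and object counts are made up. It shows why the standard remedy, instanced drawing (batching identical meshes into one call), collapses the call count so dramatically:

```python
from collections import defaultdict


def naive_draw_calls(objects):
    """One draw call per object: how a 50,000-call scene happens."""
    return len(objects)


def instanced_draw_calls(objects):
    """Group identical meshes and issue one instanced call per group."""
    groups = defaultdict(list)
    for mesh_id, transform in objects:
        groups[mesh_id].append(transform)
    return len(groups)  # one call per unique mesh, however many instances


# A hypothetical crowd scene: 5,000 identical NPCs plus 200 identical buildings.
crowd = [("npc_mesh", i) for i in range(5000)] + [("building_mesh", i) for i in range(200)]
print(naive_draw_calls(crowd))      # 5200 separate draw calls
print(instanced_draw_calls(crowd))  # 2 instanced draw calls
```

A low-overhead API like Mantle attacks the other side of the problem: rather than reducing the number of calls, it makes each call cheap enough that tens of thousands per frame become feasible.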

    1. Issuing 50K draw calls is completely idiotic in the first place, that’s a new hallmark of inefficiency.

      1. It kinda makes me wonder why they would need to put such large crowds of people into their game. It seems they weren’t very concerned with the performance hit.

        I smell a rat…

        1. Large crowds were their selling point: “we have large crowds”. Hell, there’s a DSO interview where they said they had reduced the draw calls… APPARENTLY NOT.

          50,000 draw calls? That’s just atrocious. They just don’t know what to optimize. You simply cannot have that many draw calls in a properly optimized environment. Even hitting 8K is a huge hit, but 50K? Actually, thinking about it, that’s almost impressive: sub-20–30 frames with 50,000 draw calls. I had 9–10K in one of the levels I made (before optimization), along with other things, and that alone tanked the fps to single digits.

          1. Yeah, but I’m not sure AC:U can be considered ‘working’ – have you played the game or watched videos? The NPC interactions are wacky – clipping through them, animations initializing half a second after passing an NPC. It’s a mess.

            I’m not even sure large crowds are much of a selling point. I’ve not heard a single AC fan say, “You know what, I love the game but I wish there were more people getting in my way on the street.”

            When you were having 9-10K draw calls, were you working in 64bit? I bet that helps a bunch.

          2. Yeah it was just a joke about how they managed to get that fps from INSANE drawcalls. A feat but for all the wrong reasons.

            Yes it was in a 64bit environment but that doesn’t matter all that much because the drawcalls depend upon the API(DirectX11 in this case), 64bit would only matter in terms of available memory.

          3. Well, I’m not sure if 64 bit helped with your program, but I’m sure it helps with the consoles. The new consoles do have 8GB of RAM, and I’d guess they’re stressing all of it on AC:U

        2. The rat is Ubisoft throwing brute-force hardware at their game while it’s fundamentally broken at the API level. Intel and NVIDIA fanboys may say it runs fine on their systems, but that doesn’t solve the glitches you see because of the draw call overload; Intel’s 12-thread CPUs and SLI 980s won’t get around that either.

          1. Assuming that’s where most of the draw calls come from, then yes, and it probably is. I mean, seriously, that is a crazy amount of draw call reduction, but Ubisoft wanted their 5,000 NPCs. This is not the Nitrous engine benchmark, which can throw over 10,000 NPCs on screen and still keep a playable frame rate with Mantle.

          2. Perhaps they will retroactively patch the game with Mantle support. I’ve read it’s not all that difficult to implement.

  6. GameWorks is a completely different tech. It’s not an API but a set of features, so it can’t be compared with Mantle.

  7. The Titan and Titan Black were definitely called the strongest GPUs at the time they released… because they were. And to some degree still are, considering the higher vRAM buffer and memory bandwidth compared even to the 980s.

    The Titan Z is, was and always should be regarded as a joke, though; I’m with you that it didn’t need to come out, and there’s no denying it was overpriced.

    There is video proof of nVidia talking about AMD’s Mantle, I think at one of the PAX events, stating that Mantle was said to be open to all and that both nVidia and Intel had reached out and gotten nothing back. I really can’t go hunting for that video now, but it IS around somewhere. Shortly afterward they said they had been working on DX12 this whole time and yadda yadda yadda, PR, whatever, etc. But they DID mention it, I’m certain.

    I haven’t forgotten anything about FreeSync, but the point is that until it comes out full-fledged and is confirmed usable on both vendors’ cards (unless nVidia specifically decides NOT to use it or allow support for it on their own cards, at which point the blame is on them and hurrah AMD), it’s all PR. Everything is PR until it releases and we see some facts.

    Also, please remember that DisplayPort 1.2a isn’t really “popular” and you would need a card with that specific connector; DP 1.1a and Thunderbolt 1 ports (on laptops) won’t work if that’s the case (and yes, laptops can use G-Sync). G-Sync, to my knowledge, will work over HDMI, DVI, DisplayPort, etc., which is an advantage.

    Anyway, my objective wasn’t really to say that nVidia is right or wrong or that AMD is right or wrong… your post looked heavily AMD-bravado-ish. And about the Titan chips: nobody was buying them over 780 Tis after doing like 5 minutes of research… which should have been done before buying a $1000 GPU.
