AMD has shared the first official gaming benchmarks for the AMD Radeon VII. The red team has benchmarked the Radeon VII against the Radeon RX Vega 64 in 25 games, and the results overall fall in line with what most of us expected from this new AMD GPU.
Now while AMD claimed that this new GPU targets 4K gaming, it’s obvious that in some titles it cannot come close to a smooth 4K experience. For example, in Assassin’s Creed Odyssey, Monster Hunter World and Tom Clancy’s Ghost Recon Wildlands, the AMD Radeon VII pushes an average framerate of 36fps (in 4K on max settings), and in Total War: Warhammer 2 it pushes 35fps.
There are some games in which the new AMD Radeon VII can offer a 60fps experience; however, you should keep in mind that these are average and not minimum framerates (meaning that there are most likely dips below 60fps in these games).
Perhaps the most promising benchmarks are those for Battlefield 1 DX12, Call of Duty Black Ops 4, Doom, Star Control Origins, Strange Brigade and Wolfenstein 2, in which the AMD Radeon VII pushes more than 80fps.
For these benchmarks, AMD used an Intel i7 7700K and 16GB of DDR4-3000, and ran the games at max settings in 4K.
The AMD Radeon VII releases on February 7th and will be priced at $699. We are pretty sure that there will be some third-party benchmarks prior to its launch, so stay tuned for more!

John is the founder and Editor in Chief at DSOGaming. He is a PC gaming fan and highly supports the modding and indie communities. Before creating DSOGaming, John worked on numerous gaming websites. While he is a die-hard PC gamer, his gaming roots can be found on consoles. John loved the 16-bit consoles, and still does, and considers the SNES to be one of the best consoles ever made. Still, the PC platform won him over. That was mainly due to 3DFX and its iconic dedicated 3D accelerator graphics card, the Voodoo 2. John has also written a higher degree thesis on “The Evolution of PC Graphics Cards.”

https://uploads.disquscdn.com/images/efe21ebdc14c066fa4fb3be2176c269eb1b94fe71269dd7dff34b8edd6b28b1c.jpg lol
Inappropriate pic, but then you’re probably just another fanboy roaming around.
Intel i7 7700K
16 GB VRAM
1 TB/s
Still can’t do 60 fps in demanding games.
Salty as fck.
In all fairness, neither can the RTX 2080 (non-Ti).
From TPU’s review of the 2080 Founders Edition ($800) with an 8700K CPU:
https://uploads.disquscdn.com/images/9dfe45bbe241e5e00822d084807ba71d1aaab1a7012ef3a64ebe5df98534b888.png
https://uploads.disquscdn.com/images/7ebc7691b84b93da77941ca57aa27aa9c83e0e8a7f2411481ab8526ee139341a.png
https://uploads.disquscdn.com/images/a90b8ca0b2ae7e25d403c108390c3cda6bba6f9a4e01890af7c4e897cd454ffe.png
https://uploads.disquscdn.com/images/653338a511f13c48a5946287daa353863e81cc0ac902410547034de9478b611d.png
https://uploads.disquscdn.com/images/537221b9e5f47f17d8355b9a854357614debd9ee650e3acd559c93ee14140b7a.png
https://uploads.disquscdn.com/images/0bd9e6afb44a5a9ed1ad299092dbb2210ab10f032e1527bd2e2d24adb46c845e.png
https://uploads.disquscdn.com/images/7c9623e1446f5cabe6a98c584a3591442da55daaf1e31b4a77ee6c510af3074f.png
True, but the RTX 2080 doesn’t have overhyped specs.
AMD fanboys bought the Vega 64 (8 GB/483 GB/s) over the 1080 Ti (11 GB/484 GB/s). But now it’s “it has 16 GB VRAM” and “I’m a content creator”…
You are biased, as always…
https://uploads.disquscdn.com/images/e1c679656176df3371b0165d94162055b8ebd10c105b9ccae9439e94a73bdb7d.jpg
“gamers” will not think twice and just buy the 2080.
AMD themselves were talking about it as a GPU for content creators who also play games.
If I am in the market for a decent 4K monitor and GPU, why would I pick the RTX 2080?
You are not the average consumer, especially if you spend any time reading and commenting on hardware articles.
It’s better because 16 is more than 8, see?
The Xbox 360 was 359 more than the Xbox One and was more popular, so the answer must be yes!
I remember back in the day HIS released a 1 GB version of the Radeon X1800 or something. Clearly it should’ve been twice as fast as a 512 MB 8800 GTS! And certainly not the other way around.
And because HBM is ridiculously fast compared to GDDR5X
The Fury X must be obliterating the 1080 Ti then.
Lmfao, nice comparison. A GPU from 2015 vs a GPU from 2017.
Love is blind?
I’ve heard HBM is ridiculously fast compared to GDDR5X. Surely it must make some kind of difference?
But hey, soon we’re gonna learn that memory bandwidth is the least important part of a video card. Even less important than 16 GB in 2019, when only a chosen few games even reach 6 GB, and that’s in 4K.
*Talks about bandwidth*
*Cites VRAM size*
Lmao this is too funny
Pretty smart to compare 4 GB to 11 GB.
AMD GPU – Intel CPU!
Well, it’s progress, and that’s good. But when you compare it to Nvidia’s counterpart it’s so-so.
As consumers we have a choice. Curious to see AMD’s stock in the next 2-3 months. Nvidia took such a nosedive lmao. Blizzard and Activision too. Good stuff.
With the combination of price, performance and RAM size, this looks more like a professional GPU rather than a consumer one.
“AMD used an Intel i7 7700K”
Now there’s a show of faith in their own CPUs! /s
If they had the Ryzen 3000 chips, they would’ve used those, but they’re not final. I’m just confused why they didn’t straight up use a 9900K at 5.2 GHz to remove all CPU bottlenecks.
They want to be as realistic as possible, because not everyone has a 9900K (very expensive), whereas the 7700K is almost equal to the 6700K (a popular CPU)!
Hard to tell if that’s sarcasm or pure stupidity. When you test a video card, you need to show it in the best possible light, so there shouldn’t be any CPU limitations; that way the GPU is shown at its full potential. You can’t account for the vast range of hardware configurations people have out there, and many people do have some stupid bottlenecks. And anyone buying a $700 GPU would be a moron not to pair it with one of the fastest CPUs made.
Hard to tell if you’re a troll or just plain stupid. Jokes aside, you aren’t going to bottleneck a GPU just because you don’t buy one of the fastest CPUs. GPUs are fairly far behind other components, and some CPUs that are cheaper and “slower” actually produce better results for gaming.
You have to remember that we are not launching a spaceship to the moon; we are going to play games. It is ridiculous to buy the most expensive hardware just to play (after a few months your very pricey hardware will be outdated by new budget midrange parts), so a balanced PC will suffice for playing or benching.
Haha, that’s an AMD response. They used a weak proc from Intel 😉
Probably chosen based on the target audience of the product; I think they nailed this aspect.
The Witcher 3… no 4K60fps???
Lisa summed it up: Vega 7 is a GPU for content creation first and gamers second.
The whole Vega architecture is a compute-focused arch, and lackluster in gaming.
AMD doesn’t care about gaming anymore, and Nvidia is cashing in more and more. If Navi is also lackluster in gaming, then expect the RTX 3050 to cost $300 and the 3060 to cost $450.
That’s AMD-speak for when they fail to deliver a competitive product. It’s been like that since Piledriver: “It’s not for gaming, it has 8 cores so it’s for content creators!”. Then, when they finally manage to get their sh*t together (a little), e.g. with Ryzen, suddenly it’s “for gaming”, at least until Intel answered with 6/8-core consumer CPUs. Now suddenly Ryzen is for “content creators / servers” again. It’s them basically admitting that they wasted that hyped-as-hell move to 7nm, and it can barely keep up with the 12nm RTX 2080.
Ryzen was not competitive? Threadripper was not competitive?
This is why Vega 7 is miles better. And for it to compete with the RTX 2080 while being on the same older Vega arch as the Vega 64 is pretty amazing in itself. https://uploads.disquscdn.com/images/4f369032613b33c6330d8d972c2f3dcc8170d30302bd564c2591f5d536c1dc44.png
That is one point of view. From a gaming point of view you could say that AMD needs a 7nm GPU with 16GB VRAM, a 4096-bit memory bus, 128 ROPs and 3840 cores to compete with a 12nm NVIDIA GPU with 8GB VRAM, a 256-bit bus, 64 ROPs and 2944 cores. And Vega doesn’t even have RT or Tensor cores. 🙂 Don’t get me wrong, I am not criticizing the new Vega. I am just trying to say that pure numbers are not everything.
AMD’s next GPU should be named “Ketchup”, because they’re not catching up and there’s nothing new: Vega on 7nm.
Hahahahahaha! Nice one!
So this pretty much proves that all the Vega arch needed was faster clocks, just like the RX 590 gaining over the RX 580 through faster clocks. But with Vega 7 you are also getting 16 GB of HBM2 on a 4096-bit bus over the Vega 64’s 2048-bit, as well as 128 ROPs over 64 ROPs.
HBM2 confirmed? Su used the term “high bandwidth memory” at CES, no “2”, IIRC.
First-gen HBM was only made in 1 GB stacks.
This uses four 4 GB stacks, which is what’s required to get the 1 TB/s bandwidth.
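For what it’s worth, here’s a quick Python sanity check of those numbers. It assumes the commonly reported layout of four HBM2 stacks, each 4 GB with a 1024-bit interface at an effective 2 Gbps per pin; those per-stack figures are assumptions, not an official AMD breakdown.

# Sanity check: capacity and bandwidth of four assumed HBM2 stacks.
stacks = 4
gb_per_stack = 4          # 4 GB per stack -> 16 GB total
bits_per_stack = 1024     # assumed interface width per stack
gbps_per_pin = 2          # assumed effective data rate per pin

capacity_gb = stacks * gb_per_stack                  # 16 GB
bus_width_bits = stacks * bits_per_stack             # 4096-bit bus
bandwidth_gbs = bus_width_bits * gbps_per_pin / 8    # 1024 GB/s, i.e. ~1 TB/s
print(capacity_gb, bus_width_bits, bandwidth_gbs)    # 16 4096 1024.0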
It also proves how ridiculously bad AMD is at leveraging more cores in their GPUs, and that has been proven time and time again. The HD 3000 series had **fewer** cores than the HD 2900 XT, yet it performed the same or faster while being more power efficient. The HD 6000 series had *FEWER* cores than the HD 5000 series, yet it performed faster, since the extra cores weren’t being properly used anyway. The Fury X and Vega 64 had a similar number of cores, yet Vega performed faster, mostly due to higher clocks. The VII again has fewer cores than the Vega 64, yet higher clocks, and again it performs faster.
They need better tech and higher clocks, not more cores in their GPUs.
Even more underwhelming.
The fact that they’re comparing it to the Vega 64 says a lot.
Comparing their new card to their former one is pretty obvious. Nvidia does the same thing.
Nvidia has basically no competition but itself, the same cannot be said for AMD.
How can you even think that? The only card AMD doesn’t have an answer to is the 2080 Ti, and at that price point it is basically irrelevant.
7nm vs 12nm, more power hungry (probably), same price, and it still has to come out on the market; how is that an answer? Also, they only showed one card. They still don’t have even a hint of an answer to the 2070 and 2060, and no 2080 Ti, so what do you want to talk about?
You seem to forget that their old Vega cards still perform as well as or better than the 2070 and 2060. They don’t need a second answer to them. Even a 300-watt video card uses about as much power as a PS4. How is that realistically a concern?
Their OLD Vega cards perform similar to the 1070 and 1080, and that has nothing to do with the 2070 or 2060; besides, they are even more power hungry. It’s not about concern, it’s about tech quality and optimization.
You people need to freaking understand that AMD isn’t even close to Nvidia’s level of tech. They had to go to 7nm and it wasn’t enough to get very good performance, or even a less power-hungry card; it’s completely crazy, and they even dared to price it the same as Nvidia priced their card. That’s not even a discussion, seriously. Just deal with the fact that AMD is way behind when it comes to GPUs, and stop denying it or trying to pull out more excuses like “power isn’t a concern” or “AMD are nicer” or other BS like that. Vega is utter garbage; hopefully Navi won’t be, but we’ll just have to wait, and by then there might be a third player disturbing everyone.
If only they had priced it at $500-600, it would have been much better.
The Vega 64’s problem is that it costs 500 bucks and is available in few places.
The VII’s problem is that it’ll cost 700 bucks and be available in few places.
If the performance increase is less than 30% like these benchmarks suggest, and the price is 40% higher, then what’s even the point? It really doesn’t matter if Nvidia’s cards are priced EVEN WORSE, because they are high end. According to these benchmarks the VII will be inferior to the 2080, 2080 Ti, Titan RTX and Titan V CEO Edition. Let’s be generous and assume it beats the 2070, which is not a given. Chasing that high-end crown is completely futile for AMD. People who pay absurd amounts for a GPU clearly want “the best”, not a 5th-place loser who arrives a year late.
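As a rough check of those percentages, here’s a small Python sketch. It assumes the Vega 64’s $499 launch price (the “500 bucks” mentioned above) against the VII’s announced $699, and the sub-30% performance gain is the commenter’s estimate, not a measured figure.

# Price vs performance delta, using the figures assumed above.
vega64_price, vii_price = 499, 699              # USD, Vega 64 MSRP assumed
perf_gain = 0.30                                # upper-bound gain suggested above
price_increase = vii_price / vega64_price - 1
print(round(price_increase * 100))              # ~40 (% higher price)
perf_per_dollar = (1 + perf_gain) / (1 + price_increase) - 1
print(round(perf_per_dollar * 100))             # ~-7 (% worse perf per dollar)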
I know chasing the low-to-midrange hasn’t been working out for AMD either, but they still have to pick their fight, and the high-performance segment makes no sense. Announcing “the first 7nm GPU” at a ludicrous price, only for it to be neither high end nor affordable: what a way to start off the year. If they had handed Nvidia the high-end win without an attempt like this, we could have speculated that AMD could compete in that segment if they wanted to, and that they were just being smart by letting Nvidia claim pointless victories with thousand-dollar cards that few people buy. Instead, the VII proves beyond the shadow of a doubt that in 2019 AMD cannot keep up with Nvidia’s architecture even when they reach a smaller manufacturing process first. And they announce this in January, setting the tone for the whole year. Baffling.
The VII beats the 2070 and is on par with the 2080.
That’s not a fair assessment based on existing benchmarks. Biased?
It’s an assessment based on AMD’s provided benchmarks, which are what we have at the moment.
The GAMING BUNDLE of 3 very anticipated games adds value and somewhat deters the crypto miners; think about it for a second.
The 590 with the gaming bundle was a very good deal to me; I didn’t really pay the full price of $280, because on top of the card I got 3 titles probably worth between $40 and $60 each. Assuming each is worth $50, you are getting $150 worth of games, so by the same logic the true price of this card is $700 - $150 = $550.
Hmmm, let’s see…
A sequel to a dead MMORPG and two console ports. Well, that’s not something I really want to play on the latest high-performance GPU.
Can I get a $150 discount instead of these “games”?
AMD used an Intel processor to run the benchmarks? Hmmmm…
Some people claimed that Vega was held back by having just 64 ROPs. These benchmarks pretty much annihilate that theory: doubling them clearly isn’t helping all that much. Underwhelming increases in some games, but WTF is going on in Fallout 76? From 45 to ~75.
AMD just doesn’t care about PC gaming anymore. They are focusing on consoles and their partnerships with Microsoft and Sony. The custom silicon they have created will deliver 4K60 in the console space with the PS5 and the Xbox Scarlett (Anaconda). Developers could of course choose to use that power to push visual fidelity instead of higher native resolutions and framerates.
No doubt 90% of titles will not be native 4K. I doubt they can squeeze a powerhouse console out the door next year for less than £399 that can challenge these newly released cards.
They should be able to do native easily. The Xbox One X already does native 4K30 in many games, and that is with a 6 TFLOPS GPU and an old Jaguar CPU. Next-gen consoles will be using Zen 2 CPUs and much more powerful GPUs, so native 4K30 should be very easily attainable, and potentially 4K60 for some games. Although, as I’ve said, many developers could choose to push visual fidelity and opt for a lower native resolution, using upscaling techniques to reach 4K, as well as going for 30fps instead of 60. This is of course for the PS5 and the high-end Xbox Scarlett model (Anaconda), not for the more budget-friendly model (Lockhart).
I play most games in 4K (high settings) with an R9 390. Three or four years and £400-500 later, and there’s barely a difference in FPS. I think it’s about time AMD/Nvidia lowered their prices a little…
Last year’s performance at today’s prices!
It’s true that AMD GPUs aren’t as efficient, but whether it really matters depends on what you pay for electricity. If an AMD GPU used 100 more watts than its Nvidia counterpart, then for me it’s insignificant: I game an average of 20 hours a week and I pay 11 cents per kWh, so that extra 100 watts would add about 95 cents a month to my electricity bill.
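If you want to plug in your own numbers, here’s a minimal Python sketch of that calculation; the 100 W, 20 hours/week and $0.11/kWh figures are the ones from the comment above, not universal values.

# Extra monthly electricity cost of a GPU drawing 100 W more.
extra_watts = 100
hours_per_week = 20
price_per_kwh = 0.11  # USD

hours_per_month = hours_per_week * 52 / 12           # ~86.7 hours
extra_kwh = extra_watts / 1000 * hours_per_month     # ~8.7 kWh
cost_per_month = extra_kwh * price_per_kwh
print(round(cost_per_month, 2))                      # ~0.95 USD per month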
Don’t forget the associated thermal issues that’ll cause cooling fans to spin up more quickly and stay spinning for longer, which all results in a noisier PC.
I know that is distracting to some people, but it’s just never bothered me. In my present rig I have 4 case fans, 2 GPU fans and the PSU fan. I can hear the fans, especially when the GPU is under load and they are spooling up and down, but not really very much over my speakers. Also, somehow I’m able to filter out background noise like fans when I’m immersed in a game, even in its quiet parts.
I can’t remember even a single time when I was distracted and pulled out of immersion due to fan noise.
Hello JOHN,
I just found something interesting and thought of sharing it with you. Kindly give it a read. It looks like AMD will have its own alternative to Nvidia’s DLSS technology, based on DirectML.
This is interesting. They plan to create an open version of DLSS.
“In a recent interview with 4 Gamers (source in Japanese), AMD’s Adam Kozak confirmed that their upcoming Radeon VII graphics card will support DirectML, a Machine Learning (ML) extension to DirectX.
Think of DirectML as the Machine Learning equivalent of DXR (DirectX Raytracing), allowing DirectX 12 to support advanced features and utilise AI to improve future games.”
https://www.overclock3d.net/news/gpu_displays/amd_s_radeon_vii_supports_directml_-_an_alternative_to_dlss/1
Hello JOHN,
Some interesting info on the upcoming Turing GTX card from Nvidia has been released.
It’s a rumor, though it could contain accurate info as well. The GeForce GTX 1660 Ti is to become NVIDIA’s first Turing-based card under the GTX brand. Essentially, this card lacks the ray tracing features of the RTX series, which should (theoretically) result in a lower price. It will feature 1536 CUDA cores.
GTX 1660 Ti… it should have been named GTX 1160 Ti though, in my opinion.
https://videocardz.com/newz/rumor-nvidia-geforce-gtx-1660-ti-to-feature-1536-cuda-cores