2K Games has just released Tiny Tina’s Wonderlands on PC. The game is powered by Unreal Engine 4, so it’s time to benchmark it and see how it performs on the PC platform.
For this PC Performance Analysis, we used an Intel i9 9900K with 16GB of DDR4 at 3800MHz, AMD’s Radeon RX580, RX Vega 64 and RX 6900XT, and NVIDIA’s GTX980Ti, RTX 2080Ti and RTX 3080. We also used Windows 10 64-bit, the GeForce 512.15 and the Radeon Software Adrenalin 2020 Edition 22.3.2 drivers.
Gearbox has added a respectable number of graphics settings to tweak. PC gamers can adjust the quality of Anti-aliasing, Texture Streaming, Materials, Shadows, Draw Distance, Clutter, Terrain Detail, Foliage, Volumetric Fog, Screen Space Reflections, Characters and Ambient Occlusion. The game also has a Field of View slider, a Resolution Scaler, and a Frame Rate Limiter.
Tiny Tina’s Wonderlands features a built-in benchmark tool, which is what we used for both our GPU and CPU benchmarks.
In order to find out how the game scales on multiple CPU threads, we simulated a dual-core, a quad-core and a hexa-core CPU. Without Hyper-Threading, our simulated dual-core system was unable to run the game. With Hyper-Threading, that particular system was able to push more than 60fps at all times at 720p/Ultra settings. There were some rare stutters here and there, though they are nowhere close to what we’ve experienced in other recent games. Furthermore, the game compiles its shaders at launch, which eliminates the shader-cache stutters that have plagued a lot of recent triple-A PC games.
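For those who want to reproduce this kind of core-scaling test, one common approach (a sketch only, not necessarily the exact methodology used for this article) is to restrict the game’s process to a subset of logical processors via the Windows affinity API. The hypothetical `pin_cores` tool below assumes you already know the game’s process ID:

```cpp
// Minimal sketch: restrict an already-running process to the first N logical
// processors, approximating a lower core count. Illustrative only; the
// article's simulated dual/quad/hexa-core tests may have been done differently
// (e.g. by disabling cores in the BIOS or via msconfig).
#include <windows.h>
#include <cstdio>
#include <cstdlib>

int main(int argc, char** argv) {
    if (argc < 3) {
        std::printf("usage: pin_cores <pid> <logical_core_count>\n");
        return 1;
    }
    const DWORD pid   = static_cast<DWORD>(std::strtoul(argv[1], nullptr, 10));
    const int   cores = std::atoi(argv[2]);
    if (cores < 1 || cores > 64) {
        std::printf("logical_core_count must be between 1 and 64\n");
        return 1;
    }

    // Build an affinity mask with the lowest <cores> bits set (e.g. 4 -> 0b1111).
    const DWORD_PTR mask = (cores == 64)
        ? ~static_cast<DWORD_PTR>(0)
        : ((static_cast<DWORD_PTR>(1) << cores) - 1);

    HANDLE process = OpenProcess(PROCESS_SET_INFORMATION, FALSE, pid);
    if (!process) {
        std::printf("OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }
    if (!SetProcessAffinityMask(process, mask)) {
        std::printf("SetProcessAffinityMask failed: %lu\n", GetLastError());
        CloseHandle(process);
        return 1;
    }
    CloseHandle(process);
    std::printf("Pinned process %lu to %d logical processor(s).\n", pid, cores);
    return 0;
}
```

Keep in mind that pinning a process to fewer logical processors only approximates a real lower-core-count CPU, since cache and clock behaviour stay the same.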
Tiny Tina’s Wonderlands does not require a high-end GPU for gaming at 1080p/Ultra. As we can see, even the AMD Radeon RX Vega 64 was able to provide a constant 60fps experience. At this point, we should note that the game performs significantly better on AMD’s hardware.
At 1440p/Ultra, our top three GPUs were able to run the game smoothly. And as for 4K/Ultra, the only GPU that was able to provide a constant 60fps experience was the RX 6900XT.
Graphics-wise, Tiny Tina’s Wonderlands looks similar to Borderlands 3. So, if you enjoyed that title, you’ll love this game’s art style. Tiny Tina’s Wonderlands also packs some destructible objects. However, don’t expect any kind of next-gen-ish destructibility or interactivity. Tiny Tina’s Wonderlands looks and feels like a cross-generation title.
All in all, Tiny Tina’s Wonderlands can run smoothly on a wide range of PC configurations. The game does not require a high-end CPU, though it does require high-end GPUs for resolutions higher than 1080p. The game also does not suffer from any shader cache stutters, though we did notice some rare traversal stutters. And, surprisingly enough, it runs noticeably better than the launch version of Borderlands 3.
Enjoy!

John is the founder and Editor in Chief at DSOGaming. He is a PC gaming fan and highly supports the modding and indie communities. Before creating DSOGaming, John worked on numerous gaming websites. While he is a die-hard PC gamer, his gaming roots can be found on consoles. John loved – and still does – the 16-bit consoles, and considers the SNES to be one of the best consoles. Still, the PC platform eventually won him over. That was mainly due to 3DFX and its iconic dedicated 3D accelerator graphics card, the Voodoo 2. John has also written a higher degree thesis on “The Evolution of PC Graphics Cards.”
Contact: Email

It uses Unreal Engine 4 and doesn’t have FSR or DLSS. It’s ridiculous in 2022.
Odd. Computerbase had the 3080 outperforming the 6900 XT at Badass settings. I believe they tested in DirectX 12, though DirectX 11 is generally faster, albeit with wider frame times.
Both Tiny Tina’s Wonderlands and Borderlands 3 favour AMD’s hardware. Don’t know why Computerbase claims that the RTX3080 is faster than the 6900XT (especially in DX12). That’s inaccurate. They also… complained about the shader caching at the beginning of the game in DX12? So now it’s better to have shader cache stutters while playing the game? Hell, they complained about these stutters in Elden Ring ( https://www.computerbase.de/2022-03/elden-ring-benchmark-test/2/#abschnitt_elden_ring_hat_grosse_probleme_beim_framepacing ) and now they complain when a game has a solution for it? Talk about double standards 😛
https://uploads.disquscdn.com/images/232dea750c8a45418544824412bb89f4cf20ae6dd4b06759722715cb6a3f0526.png
A 6900XT is basically a tie with a vanilla 3080, and the 3080 12GB, Ti and 3090 are almost universally faster at 4K. It was the case at launch, it was the case throughout 2021 and it’s the case now. Looking at this pic, Borderlands 3 is 3% ahead on the 6900XT. How did you manage 17% faster in this game at 4K? How about those absurd 1080p results that will 100% never show up in any other benchmark? Every single result you put up with the 6900XT makes that card look faster than an overclocked 3090, when in reality it’s basically a tie with the 3080. The percentage difference is margin-of-error stuff, not 20%.
They use an AMD Ryzen system, we use an Intel system. They also have DDR4 at 3200MHz whereas we use DDR4 at 3800MHz.
Also, the RX 6900XT is faster than the RTX3080 in numerous games (and can match even the RTX3090) so you basically don’t really know what you’re talking about -> https://www.pcworld.com/article/393807/amd-radeon-rx-6900-xt-review.html
Paul’s Hardware also shows the RX 6900XT surpassing the RTX3080 in numerous games -> https://www.youtube.com/watch?v=FxoPz1DO0Sg
As for Borderlands 3 specifically, Hexus reported that the RTX3080 is slower by 24fps at 1080p (and keep in mind that Tiny Tina’s Wonderlands got some optimization improvements which is why it runs way better than Borderlands 3) -> https://hexus.net/tech/reviews/graphics/147043-amd-radeon-rx-6900-xt/?page=5
So after I’ve given you a 50-game benchmark that was done a month ago, your response is cherry-picking a 2020 review? You are looking at a 50-game benchmark where the 6900XT is slower than a 12-gig 3080. It’s in front of your eyes. At 4K, the minimal differences between RAM and CPU don’t matter. Computerbase also has B3 faster on AMD, but it seems Tina runs differently. They have an entire testing laboratory there, with multiple people. You’re a guy swapping cards in his personal rig.
The difference between the 3080 and the 6900XT was a tie at launch as well, if you want to go back to 2020: 1.3% faster at 4K, over 4500 averaged benchmarks.
https://old.reddit.com/r/nvidia/comments/kgov0g/final_performance_summary_of_2021_4550_benchmarks/
Look at the current state of GPUs from PC Games Hardware. A 20-game benchmark. At 4K, the stock 6900XT is 3% ahead. A tie, basically. And the fastest liquid-cooled and OC’d 6900XT is 10%. Like I said, your weirdly inflated AMD results and lower-than-average Nvidia results are not to be found anywhere else.
https://www.pcgameshardware.de/Grafikkarten-Grafikkarte-97980/Specials/Rangliste-GPU-Grafikchip-Benchmark-1174201/
And again, they are using an AMD Ryzen CPU with slower memory. We have a completely different setup. We also used the latest drivers which according to AMD offer “up to 23% increase in performance in Tiny Tina’s Wonderlands” (all other publications used the 22.3.1 driver).
According to PCGamesHardware, the 6900XT is faster than the RTX3080 by 10% (we use the 10GB version). There isn’t any tie. DON’T use the 3080Ti, which is 2% slower; use the regular 3080 GPU. Add to this the up-to-23% boost of 22.3.2 and, compounded, you get roughly a 35% maximum difference for this particular game (1.10 × 1.23 ≈ 1.35).
At 1080p, the RX 6900XT is faster by 24% (reasonable figures, given the performance boost of the latest AMD driver). At 4K, the difference is 19%. Like it or not, these are accurate figures.
3200 memory is the sweet spot for Ryzen though, isn’t it? Let’s see when other sites benchmark this game. But still, a card that comes in at 1 to 3% faster than a vanilla 3080 on every site somehow consistently ends up 15-20% faster here. The 6900XT is just not that fast. It’s extremely weird.
I’ve edited the comment so re-read it. The 22.3.2 driver brings a MAJOR performance boost for the RX 6900XT -> https://www.amd.com/en/support/kb/release-notes/rn-rad-win-22-3-2
6900XT is definitely a competitor to the 3080 Ti / 3090….not the base 3080….when will people understand?
Given you guys have a 6900 XT that’s faster than a 3080, and you like to test CPU performance scaling, have you considered switching the chart (or at least adding to it) to show how the 6900 XT scales with CPU power/threads/clocks?
Nvidia has higher CPU overhead than AMD in pre-DX12-Vulkan titles. And probably more haywire usage in DX12/Vulkan as well.
The problem with AMD is that they have no market share, while preying on the ignorant with legacy tech and console GPUs. They have no support for hardware-accelerated AI, while providing last-gen RT performance which is up to 50% slower than competing Nvidia products. Even on the subject of the new DxIO, I’ve not seen AMD confirm true storage-to-GPU transfers with GPU decompression. Nvidia have been advertising this for well over a year.
It’s true AMD can compete in rasterisation, but only due to using a smaller node, 7nm, than Nvidia’s 8nm, while also running clock speeds up to 50% higher. Now imagine where AMD’s performance would be if Nvidia had also gone down the 7nm route.
I’ve no idea if this is another AMD sponsored title, but perhaps we should also ask John to use a 12900k, or even a 12900ks when testing?
Hardware-accelerated AI is unnecessary now that FSR 2.0 is going to be a thing, DXIO is far from being implemented in PC games yet, and what does ray tracing performance have to do with CPU performance scaling?
FSR 1/2.0 is based on 30+ year old ideas. As such it’s great for consoles and mobile, while both Intel and Nvidia appreciate the more advanced image reconstruction available from AI, e.g. search YT for Nvidia image reconstruction. DXIO is already being implemented in games. Stop living in the past.
Uhm… FSR 2.x isn’t based on old ideas. Hell, neither is FSR V1. V1 just wasn’t based on the latest ideas because their hardware didn’t support a temporal, AI-based solution.
RDNA 3 will, but even then they found a way around it, so the RT hardware can focus on RT.
RT performance has to do with CPU performance because of draw calls and other things, but the CPU isn’t a bottleneck even at 4K.
Why Nvidia went 8nm is irrelevant. The point is that on a less efficient larger node, Nvidia are still making the better GPUs.
RT is great, stop living in the past. Get a job if you can’t afford a decent GPU. I use a 3080 at 1440p providing 60-105 FPS in RT titles.
We do not know the final requirements for AD, but they will be higher than AMD due to dedicated RT and AI cores.
RTXIO fully supports GPU decompression and is waiting for Microsoft to complete the API.
Sounds like you’re one of the ignorant fools who bought a console GPU for his PC?
I very much doubt you have, given your inability to communicate, never mind understand the subject.
First of all the tech is good. I said I own a 3080 as it’s a card that performs very well with RT workloads, no shilling involved.
I’m a 53 year old software developer who started at the age of 12, professionally at 16 and my own company at 21. Understanding tech is part of the job.
The tech is good, it’s just expensive, so they have to limit the hardware responsible for doing the ray calculations.
The hardware also takes up half the GPU die. Over time that hardware gets cheaper and smaller. That’s why nVidia was able to basically double the tensor cores going from the 2xxx series to the 3xxx and they will again for the 4xxx.
LOL. RTX is bullshit marketing tech. 3D vision was way more immersive and they got rid of that. Fck Nvidia.
Wait wait wait …
Hardware-accelerated AI as in DLSS? AMD doesn’t need it. Neither does Intel. nVidia does, though. And considering AMD is adding plenty of RT hardware, beats nVidia in rasterization, low-level APIs, and generally in compute and price… the launch of RDNA3 could be a real eye-opener for nVidia, and it’s about time. This is coming from someone who owned a 1080ti, a 2080ti FTW3 Ultra Gaming, and currently a 3080ti FTW3 Ultra Gaming.
We need competition to bring prices down. nVidia has been slowly raising them for 10 years… Basically ever since fiji stomped all over them with the 290x with 8 gigs of VRAM.
Plus Intel is entering the market at the low end so it’s going to be a fight for the midrange and high range this year.
On top of that we have new Ryzen launching too
You have issues. I love AMD and always have, but RT is literally the future of gaming. DLSS 2.x can look better than native 4K because of the excellent anti-aliasing without the blur.
Put your fanboy away… It’s showing and it’s sad
Wtf is wrong with you? In 20-30 years games won’t have ANY rasterization. They will be fully path traced aka ray traced.
I’m a little confused … will you buy the 12900k for him?
Do you think that just because this is a video game and PC hardware site, John has access to all the CPUs and/or GPUs he wants ???
And magically Intel, AMD or Nvidia give him everything he desires???
???????????????
Btw he already owns an Intel CPU, at this point it would be more interesting to see comparisons with Ryzen CPUs, don’t you think?
Just a little?
You have that backwards. Nvidia has SIGNIFICANTLY lower overhead in DX11 and prior APIs.
AMD started focusing on low-level APIs and compute about 8 years ago with Fiji aka the 200 series. They created Mantle to show what a low-level API could do. In Thief, using DX11, I got 4K60 with settings lowered and even then I had dips. With Mantle I maxed it out at 4K and had no issue passing 120fps.
I had a 290x when Doom 2016 came out, then shortly after upgraded to a GTX 1080. The 290x ran Doom in Vulkan maxed out at 4K only 10 fps slower than the much newer 1080. Vulkan is actually based on Mantle.
nVidia doesn’t run these low-level APIs as well as they should, and it’s the hardware, not the drivers. It also doesn’t help that devs don’t properly use the APIs.
Also, the 6900xt is FASTER than the 3090 in rasterization, and Borderlands is pretty much just that… simple rasterization.
Next gen, the 7000 series from AMD should really put a dent in nVidia’s sales. It will still be better at rasterization and low-level APIs, and they are adding dedicated RT hardware plus releasing their true DLSS 2.x alternative.
Going to be very interesting later this year when they unveil and start battling both nVidia and Intel at the same time with new GPUs & CPUs.
I can’t wait.
I don’t have that backwards. Nvidia has “special driver sauce” in DX9/10/11 that splits tasks across threads more than the game engine already does. AMD doesn’t do this.
In CPU-bound games, this can sometimes help alleviate CPU-bound scenarios on Nvidia vs AMD.
At the same time, the “special driver sauce” is actual code executed by the driver, and that takes up CPU time. So if you have an older CPU with few cores/threads that is already maxed out by a game, that extra code run by the driver INCREASES the CPU-bound nature of the game compared to an AMD GPU that doesn’t have that extra workload.
Check out Hardware Unboxed videos on the Nvidia driver overhead.
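For what it’s worth, the specific capability being argued over here — whether the driver itself handles multithreaded DX11 command submission — is something D3D11 lets you query directly. A minimal sketch, assuming an ID3D11Device has already been created elsewhere (the helper name is just for illustration):

```cpp
// Minimal sketch: ask the D3D11 runtime whether the installed driver supports
// driver command lists, i.e. multithreaded command submission handled by the
// driver itself rather than emulated by the runtime. Assumes a valid
// ID3D11Device* created elsewhere (e.g. via D3D11CreateDevice).
#include <d3d11.h>

bool DriverSupportsCommandLists(ID3D11Device* device) {
    D3D11_FEATURE_DATA_THREADING threading = {};
    const HRESULT hr = device->CheckFeatureSupport(
        D3D11_FEATURE_THREADING, &threading, sizeof(threading));
    if (FAILED(hr)) {
        return false;  // The capability query itself failed.
    }
    // DriverCommandLists == TRUE means the driver natively consumes deferred-
    // context command lists; FALSE means the D3D11 runtime emulates them.
    return threading.DriverCommandLists == TRUE;
}
```

Historically, NVIDIA’s DX11 driver has reported support for driver command lists while AMD’s generally has not, which is a large part of the DX11 overhead difference being described here.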
That’s not true, at least not in the way you understood it. First, nVidia has significantly more efficient DX11 drivers, but for DX10 and lower it’s the same.
AMD gains DX11 performance by taking certain shader code (code running on CUDA cores/stream processors) and rewriting it to both look and perform better. That’s where the AMD fine wine effect comes from.
AMD also has hardware better designed for low-level APIs than nVidia, along with much more mature drivers for DX12U & Vulkan. Hopefully nVidia fixes the hardware aspect with the 4xxx series.
Now… if you take the same games, the same hardware except for the GPUs, and benchmark a DX11 game, the CPU usage will be nearly the same. If nVidia’s CPU usage is higher, it’s only because it’s running at a higher frame rate and making more draw calls, or moving data between system memory and VRAM more often.
It’s not because their driver is doing more special things.
Hardware unboxed aren’t game devs. They also have thousands of videos. Do you happen to have a link? A timestamp of where they talk about what you’re talking about would also be helpful.
Then I can get you a white paper on why they are wrong or better explain what they are actually saying.
“It’s not because their driver is doing more special things.”
It literally is.
1: https://www.youtube.com/watch?v=JLEIJhunaW8
2: https://www.youtube.com/watch?v=G03fzsYUNDU
3: https://www.youtube.com/watch?v=TD6-Y2a1XbE&t=383s
https://uploads.disquscdn.com/images/f448c81bb891fd4e89eef13f0bfe7b8b3631611f87591bcacddfa908cfbcdc79.jpg
https://www.computerbase.de/2022-03/tiny-tinas-wonderlands-benchmark-test/2/#abschnitt_benchmarks_in_full_hd_wqhd_und_ultra_hd
You must’ve gotten the most magical 6900XT in existence, because zero of your benchmark results with it compare with pretty much any other website in the world.
Hardware Unboxed did a 50-game benchmark a month ago between the 3080 12GB and the 6900XT. The result? 2% for the 6900XT. Add another 2 or 3% against the 10-gig model. How you manage to consistently get results to the tune of 15-20% for the 6900XT, I don’t understand.
You’re confusing averages across a stack with single examples.
The 6900 XT can also be faster or slower than a 3090. It depends on the game, API and resolution.
People also test different sections of games.
Shouldn’t we be looking at averages for the best idea of how the GPU performs?
https://www.moddb.com/games/apocalyptic-vibes/news/apocalyptic-vibes-sprites-shoegaze-and-immersivesim doom stalker
i am too afraid to click
Ah, it’s a Worth A Buy on Ghostwire Tokyo.
Watched this morning. He ripped it a new a$shole.
imagine being so fragile that the colors preclude you from playing a game
“only tough colors in my games, and none of that “woke” garbage – i need a muscled male protagonist or i can’t get hard into it”
Tiny Tina’s was a wonderland. Probably was the best of the DLC from Borderlands 3.
If the 6900xt is best for 4K, then this game is using simple tricks to achieve its visuals. The 6900xt has excellent compute, but it’s also the fastest GPU for pure rasterization and the kind of effects Borderlands uses.
That being said, anything from the 3080 up should run it maxed at 4K60 just fine. Hell, the 3080ti, 6900xt and 3090 should all be pushing past 4K100.
I wonder if it’s the ram used. Perhaps more and something faster with lower latency would do the trick.
Yes….”should”
My 3080ti FTW3 Ultra paired with an AMD 5800X has no issue with it at 4K120.
I think RAM is the issue here. I’m using 4000MHz CL14-14-14-34, which is super-low-latency memory at a high clock speed.
I grabbed the game a couple hours after posting.
Got the title, it’s not bad… unless you’re allergic to woke stuff.
What, Ghostwire is amazing, get out of here.