Intel will soon announce its upcoming 14th-generation CPUs, and according to a leaked slide, the Intel Core i9-14900K will be, on average, 2% faster than the AMD Ryzen 9 7950X3D in gaming.
The Intel Core i9-14900K will be the flagship CPU, featuring 24 cores: 8 P-cores and 16 E-cores, with support for up to 32 threads (the hyper-threaded P-cores provide 16 threads, and the E-cores one each).
What’s shocking here is that Intel has included some games in which the Core i9-14900K underperforms. For instance, in DOTA 2, the Ryzen 9 7950X3D is 23% faster than the i9-14900K. Similarly, in Fortnite and Cyberpunk 2077, the AMD CPU is faster by 18% and 11%, respectively.
Here is the full list of games (from left to right):
- DOTA 2
- Fortnite
- Cyberpunk 2077
- League of Legends
- Horizon Zero Dawn
- Final Fantasy 14
- Red Dead Redemption 2
- Forza Horizon 5
- Assassin’s Creed Valhalla
- Galaxy Breaker
- Company of Heroes 3
- Far Cry 6
- Mount & Blade II: Bannerlord
- COD: MW2
- GTA5
- Hitman 3
- Rainbow Six Siege
- World of Tanks Encore RT
- Serious Sam 4
- Civilization 6
- Counter Strike Global Offensive
- Ashes of the Singularity
- Total War Warhammer 3
- Starfield
- Metro Exodus
It’s also crucial to note that the second CCD of the AMD Ryzen 9 7950X3D can harm its performance in some games. This is one of the reasons we’ve disabled it in our PC performance analyses, and I’m certain that Intel has not disabled it in its own tests.
What this basically means is that with the second CCD disabled, the gap between the Intel Core i9-14900K and the AMD Ryzen 9 7950X3D may be even narrower. And since I can’t imagine a narrower performance difference than 2%, the Ryzen 9 7950X3D might actually end up faster without its second CCD.
Still, if these initial performance benchmarks are to be believed, Intel’s new CPUs will not be that impressive for gaming.
Stay tuned for more!
Thanks HXL

John is the founder and Editor-in-Chief at DSOGaming. He is a PC gaming fan and highly supports the modding and indie communities. Before creating DSOGaming, John worked on numerous gaming websites. While he is a die-hard PC gamer, his gaming roots can be found on consoles. John loved – and still does – the 16-bit consoles, and considers the SNES to be one of the best consoles ever. Still, the PC platform won him over. That was mainly due to 3DFX and its iconic dedicated 3D accelerator graphics card, the Voodoo 2. John has also written a higher-degree thesis on “The Evolution of PC Graphics Cards.”

0.1-1.9% perhaps? I’m no expert though. /s
Hah, well at 2% you’re basically down to test variance/error at that point.
So in a nutshell, wait for Intel 15th gen or the AMD 8000 series. I am running a 5900X, so I will be waiting for the 8800/8900X3D. I passed on the 7900X/7800X3D because the 7900X is not enough of a jump, the 7800X3D is slower in some non-gaming tasks, and the increased power draw is significant.
>the 7800X3D is slower in some non-gaming tasks, and the increased power draw is significant
That can’t be right, the 7800X3D uses significantly less power than the 5900X in pretty much all cases. Did you mean the power consumption of the 7900X is high? If so, there’s the non-X variant (i.e. the 7900) that’s very power efficient (also surpasses the 5900X’s efficiency) and is only like 8% slower than the X version on average in non-gaming applications, and only about 3.5% slower in gaming on average.
“Non-Gaming” tasks.
Basically, for me, I don’t see the logic in spending £720 to get no improvement, or only a marginal one, in non-gaming workflows.
I take it you’re counting the cost of a new motherboard and DDR5 RAM since you’d have to go from AM4 to AM5. Yeah don’t bother since you’re already running a 5900X. You’re indeed better off waiting for Zen 5 as hopefully by then we also get a more mature AM5 platform via 700-series motherboards that will hopefully come with better RAM compatibility too (especially with support for higher RAM speeds). I’m personally running a 5700X and don’t plan on building a new PC until 2025. It’s not like there are a whole wad of new games worth getting top-end hardware for anyway.
In gaming the 7800X3D blows away my 5900X, but in non-gaming it’s comparable or slower. For example, in tasks where multi-core is king (e.g. compression/decompression, Blender, Cinebench), the 5900X is faster. Regarding power, the 7800X3D is 120W and my 5900X is 105W, albeit with PBO enabled my 5900X spikes above 120W.
gotta be srsly rtrded if buying 10nm intel in 2023
If the price is right… But it won’t be.
Who cares? My Ryzen 7 1700 that I got in 2017 has been running all games maxed at 1440p/60fps since 2021, when I got the RX 6700 XT 12 GB Red Devil. You don’t need a very good CPU for games.
MY MONSTER PC IS:
RYZEN 7 1700
RX 6700 XT 12 GB RED DEVIL
16 GB DDR4-3200 CL15
MP600 2TB (READ: 4,950 MB/s, WRITE: 4,250 MB/s)
DELL P2416D 24″ 2560x1440 60Hz IPS
Will you shut the hell up about your mediocre PC already? You keep spamming this EVERYWHERE you go, and you’ve been at it for YEARS. Nobody cares about your outdated chihuahua-tier “monster” PC. Go outside.
Go the fuck back to (((Wccftech))) and spam your comments there.
We care because of performance stagnation and increases in price and power consumption, and not only at the top of the line-up.
And uses 800W in turbo mode….
And runs @ 150C temp
The 7950X3D is $650, so you would think the 14900K would be a good bit cheaper then. There’s also rumors of it having 20% better power efficiency than 13th gen. It will be interesting to see user benchmarks.
No chance; it is the same 10nm Raptor Lake node with higher clocks. If anything, it is going to use more power.
It’ll be $599 just like the 13900K
No way will it have 20% better power efficiency since they are on the exact same node… More likely it will be a 20% INCREASE because of the higher clock speeds and the non-linear relationship between clock speed and current draw that all silicon has.
You need a node shrink to get that kind of power efficiency gain… Either that, or your first device was poorly designed to begin with.
John, your math is off. The 7950X3D is 29.9% faster in Dota2, 22% faster in Fortnite and 12.4% faster in Cyberpunk 2077, not 23, 18 and 11% respectively.
When you’re given a number that shows A is not as good as B (i.e. the number is less than 1), you take the inverse of that number to figure out how much better B is than A. In the case of Dota2, the 14900K is 0.77X as fast as the 7950X3D, which means the 7950X3D is 1.299X as fast, i.e. 29.9% faster. To get the percentage, you subtract 1 and then multiply by 100.
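If you want to sanity-check the arithmetic yourself, here’s a minimal Python sketch of that inversion (`percent_faster` is just an illustrative helper name; 0.77 is the DOTA 2 figure from the leaked slide):

```python
# Invert a relative-performance figure to see it from the other side:
# if A runs at `relative_speed` times B's speed (e.g. 0.77), then B is
# (1 / relative_speed - 1) * 100 percent faster than A.
def percent_faster(relative_speed: float) -> float:
    return (1.0 / relative_speed - 1.0) * 100.0

# DOTA 2: the 14900K is 0.77x as fast as the 7950X3D per the slide,
# so the 7950X3D comes out ~29.9% faster (not 23%).
print(f"{percent_faster(0.77):.1f}%")  # -> 29.9%
```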
Your math is indeed correct.
For anyone not convinced by what Shieftan is saying: multiply 1.23 × 77 and you get 94.71, which ≠ 100 (the baseline), but if you multiply 1.299 × 77 you do get ~100, so the 7950X3D is 29.9% faster than the 14900K in DOTA 2 in this benchmark. We need more benchmarks from other sites for a clearer picture, but those should come soon.
Like all percentages, it just depends on which number comes first in the equation.
For instance, 100 to 110 is a 10% increase… however, 110 to 100 is a 9% decrease.
https://www.youtube.com/watch?v=6OA-PANaPPY
Both of these CPUs are overpriced, and an average gamer simply doesn’t need an oven like these two. The OC community, who play benchmark simulators instead of actual games, are a bunch of weird fools. I don’t care about top performance; just run my games and browser without me noticing you.
Mind you, buying Raptor Lake Refresh is kind of a fool’s errand on top of that, since the next gen is coming in a year and will need DDR5 anyway. You might as well buy mature DDR5 and the rest of the rig then.
These likely won’t overclock any better than the 13900K does; there is just no headroom left in them, and the manufacturing process has gotten so good that there is little variation between units anymore, unlike 10 years ago, when you could often find chips that ran well above their specs because specs had to be set lower to account for a much wider range of variation.
The “Silicon Lottery” is dying right along with Moore’s Law
Personally, the wattage Intel uses scared me away. If it were 150W I’d have no problem, but 100W more is just too much. Also, I’d much rather just have an 8-P-core system, as I don’t need the additional multi-core performance.
This raises the question even more…
What’s next for more CPU performance for gaming?
I guess cache amounts only?…
– Higher core counts complicate threading and can even be counterproductive,
– E-cores aren’t doing much,
– Clock speeds are hitting a ceiling and already cause too much power consumption and heat,
– Process shrinks at this point are making little difference in power, heat and clock speeds,
– Architectures aren’t doing much either.
We can squeeze a couple more gens of very minor improvements out of all those together, but really, what’s next?
The death of x86 and a move to ARM, requiring all software and games to transition too? I don’t quite think so.
Maybe the outcome will be either devs really starting to thread their games, or UE doing something there, or devs just adapting to CPU performance stagnation.
I don’t follow the subject very much at all, but I think the next big leap in CPUs will come when silicon is replaced with some other material. I recall reading about research into carbon nanotubes as a possibility years ago on Ars Technica. Scientists estimated that they could make transistors 5 times faster while using much less energy. There were some hurdles to overcome first, but the research did show some promise.
That we don’t hear much about research progress into alternate materials doesn’t mean that the research isn’t in full swing in some large companies. They aren’t likely to say much about their progress until they have the solution and can patent it. Talking too much beforehand could end up hurting them by helping the competition.
In any case, whoever does patent a process for making transistors out of new materials is going to make an incredible fortune with it, so there is a strong incentive to be first.
As the saying goes: “Necessity is the mother of invention.” We are going to need to replace silicon eventually, once we hit the wall where shrinking transistors further becomes prohibitively expensive.
Before doing something as radical as changing the underlying materials powering today’s CPUs, there is a far easier route to take, namely expanding the ISA and enabling that via software.
Which is exactly what Intel is doing already:
Recently they introduced APX (Advanced Performance Extensions) and started early work on enabling that feature via patches for open-source compilers and the Linux kernel, which makes sense, because Linux is at the heart of data centers around the world and is therefore seeing massive investments from all major players, including Microsoft.
Here’s a quote from Intel about APX:
If a tech advance relies on Linux in any way then it’s a failure for mainstream consumers. Linux isn’t the future for mainstream users and not for the vast majority of gamers. You must surely know that even though you wish it were not so.
I don’t dispute your logic for Data Centers though and there is a lot of money to be made in that.
I didn’t say that APX relies on Linux, just that it’s the first OS that is getting support for it, which once again makes sense, because Linux has been at the forefront of technological innovation in the IT sector for many years now.
Plus you fail to see the bigger picture:
Eventually, these will probably trickle down to Windows 12+ many years from now, but in the meantime Linux-based operating systems such as Valve’s SteamOS will be able to take advantage of it earlier than that.
Plus, it will be interesting to see how many users will upgrade to the next Windows iteration in 2025, when Microsoft officially ends support for Windows 10.
Like I already said in the past, interesting times ahead, even within Microsoft itself, as I have heard from fellow Linux experts who are working at that company right now and are getting royally paid for further improving Linux on all fronts…
Oh, silicon! Yeah, I also see that as one of the most dramatic/fundamental last-resort steps, like going all-ARM, or the “new battery/storage/fuel/nanotube” tech that will be pushed back for as long as possible because of costs, standardization, patents, dominant parties, disputes and such.
Also, this applies to GPUs too, which are already massive in size and weight, consume even crazier amounts of power, and may be even more expensive to make than CPUs.
So, after all… I suspect we should already be hearing a bit more buzz about where tech is converging for the next steps, as is always the case. The question is: why aren’t we?
PS: Honestly, thanks for always engaging in discussions in an enriching manner. I’d love to see more of that here. 👏🏻
Welcome to the Post-Moore’s Law Era ……
So the 7800X3D will still be the gaming champ if you weigh all the factors: performance, price and efficiency. I can understand why AMD chose to “gimp” the 7800X3D’s max boost vs the 7950X3D’s, considering they almost break even.
One can only imagine if the more gaming-tuned 7800X3D had that additional 0.7 GHz of max boost.
A 2% increase over your competitor (unnoticeable in gameplay) is nothing to brag about when you are using 50% more power. All they have done is tighten up the chip fabrication so they can clock it a couple hundred MHz higher consistently… But that comes with a power penalty, because a 10% increase in frequency requires a much greater than 10% increase in current; clock speed and power do not scale linearly.
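As a rough sketch of why that is: dynamic power is commonly approximated as P ≈ C·V²·f, and squeezing out higher clocks usually also requires more voltage, which enters squared. Here’s a toy Python example (the voltage and frequency figures below are made up for illustration, not measured 14900K values):

```python
# Toy model of dynamic CPU power: P ~ C * V^2 * f.
# The voltage/frequency numbers are illustrative only, not real chip data.
def dynamic_power(capacitance: float, voltage: float, freq_ghz: float) -> float:
    return capacitance * voltage ** 2 * freq_ghz

base = dynamic_power(1.0, 1.25, 5.5)     # baseline clock at baseline voltage
boosted = dynamic_power(1.0, 1.35, 6.0)  # ~9% higher clock, but more voltage too

# ~9% more frequency ends up costing ~27% more power in this toy model.
print(f"{(boosted / base - 1) * 100:.0f}% more power")
```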