Intel will release its new CPUs based on the Kaby Lake architecture in January; however, things are not looking good. Expressview has tested this brand new CPU and compared it with the previous i7 CPU, the i7-6700K. And while the i7-7700K appears to be slightly faster than the i7-6700K at default settings, that's mostly due to its increased Turbo Boost clocks and not due to its new architecture.
Expressview initially put both the Core i7-7700K and the Core i7-6700K through 11 different single and multi-threaded CPU tests at default settings. According to the results, the i7-7700K was around 7.40% faster in single-threaded and 8.88% faster in multi-threaded tests.
Here is the catch, however: the i7-7700K is clocked around 7% higher than the i7-6700K.
When Expressview clocked both of these chips at 4.0 GHz, it got some really disappointing results. According to the tests, the i7-7700K was actually 0.86% slower in single-threaded and 0.02% slower in multi-threaded tests.
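For the curious, the clock-normalization arithmetic behind that conclusion can be sketched in a few lines of Python. The Turbo clocks below are Intel's published specs for the two chips; the speedup figure is the single-threaded average quoted above, so treat this as a rough check rather than a reproduction of Expressview's per-test data:

```python
# Rough check: how much of the 7700K's stock-settings win is just clock speed?
turbo_6700k = 4.2   # GHz, i7-6700K max Turbo (published spec)
turbo_7700k = 4.5   # GHz, i7-7700K max Turbo (published spec)

clock_ratio = turbo_7700k / turbo_6700k   # ~1.071, the "~7%" mentioned above
stock_speedup = 1.0740                    # +7.40% single-threaded at defaults

# Residual per-clock (IPC) gain once the clock advantage is divided out:
ipc_gain = stock_speedup / clock_ratio - 1.0
print(f"clock advantage:   {clock_ratio - 1.0:+.2%}")   # ~ +7.14%
print(f"residual IPC gain: {ipc_gain:+.2%}")            # ~ +0.24%, within noise
```

A residual per-clock gain that small is consistent with the matched-clock result, where the 7700K actually came out 0.86% slower.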
Needless to say, these results will disappoint everyone who was looking forward to this brand new CPU.

John is the founder and Editor in Chief at DSOGaming. He is a PC gaming fan and highly supports the modding and indie communities. Before creating DSOGaming, John worked on numerous gaming websites. While he is a die-hard PC gamer, his gaming roots can be found on consoles. John loved – and still does – the 16-bit consoles, and considers the SNES to be one of the best consoles. Still, the PC platform won him over consoles. That was mainly due to 3DFX and its iconic dedicated 3D accelerator graphics card, Voodoo 2. John has also written a higher degree thesis on “The Evolution of PC Graphics Cards.”

title and a few parts of the first paragraph says 770k, i think you meant 7700k
Obviously ;). Already fixed
cool, just looking out for you, i love this site so any help i can provide, ill be more than happy to do so
D’awwww
https://uploads.disquscdn.com/images/fd2125dc276966cabd13215b0567fc604889bb36ce4bbd0c5d7ee1b408625636.jpg
THOSE CHEEKS!
AWWWWWWW.
i love that pic dude lol
But hey, it's probably more power efficient, 'cause that's what we all care about in a desktop PC, right guys? Right?
Jokes aside I would like to know about power efficiency.
You seem not to understand how overclocking works in a fundamental way.
My comment had nothing to do with overclocking; it had to do with Intel's recent trend of focusing on power efficiency over raw performance, which is great for mobile devices but basically worthless for anything like a PC that is constantly hooked up to a power source.
Haha, that's hilarious.
Why would they produce a boundary-pushing CPU when the only thing their competitor is good at is spreading conspiracies about Intel and NVidia forcing every developer on planet Earth to put in crippling code that prevents their products from working well?
That’s not a conspiracy, ya know. It’s stuff that actually happens and you can test / observe it yourself.
Yeah, yeah, like that one time AMD officially accused the Project CARS devs of gutting Radeons by using PhysX for the game's physics – and then it got revealed that the game uses an entirely in-house physics engine for almost everything bar CPU-bound box pushing. And the reason for the subpar performance was AMD ignoring the devs' requests for improved drivers for 6 months prior to release. So yeah, keep believing it's all an evil conspiracy and not complete AMD impotency for the past 8 years.
You’re denying that games running PhysX run poorly on AMD cards now? Based on one game that didn’t actually use it? I’m not sure what your point is other than pointing out a poor assumption from a company.
Generally, AMD chips perform fine either way. Not amazing, certainly not at the top of the list, but they do a good job for most users.
“You’re denying that games running PhysX run poorly on AMD cards now?”
Since when is GPU PhysX capable of running on an AMD GPU? And I see many people have this misunderstanding that a game with PhysX will be forced to run all the physics calculations on the CPU, causing slower performance for systems with an AMD GPU. If PhysX caused poor performance on AMD cards, then why do Hitman (2016) and Deus Ex: Mankind Divided run faster on AMD hardware (to the point that even a 390X can match 980 Ti performance in Hitman) when both game engines use Nvidia PhysX for their physics?
Um, what the hell are you talking about? Hitman 2016 and Deus Ex: Mankind Divided do not use PhysX for the physics engine, you clown.
The Dawn Engine uses PhysX, so maybe you wanna check your intel before you start talking like that.
https://uploads.disquscdn.com/images/ddd3e19999a9f54ede90551584dab4b80fd45be8caa53b983cf73f295382878c.jpg
Well, I had searched, but obviously not well enough. So does Hitman use it too???
Just because the engine features it, doesn’t mean it’s in the game.
Then what physics engine would they use? If they used Havok, then Havok would have been mentioned. And unlike Unreal/CryEngine/Unity etc., these game engines are not meant to be licensed to many outside SE. If they did not intend to use PhysX in their games, then there would be no reason to license PhysX at all. The fact that we see PhysX listed means the games using these engines are definitely going to use Nvidia PhysX for their physics calculations.
But in both links that I've posted there is a clear word, and it is "Used". You could claim the same thing about Bink, but as you can easily check, the game uses that tech.
As a matter of fact, PhysX is used in both titles, like it or not. No AMD or Nvidia fanboyism.
PhysX doesn't run natively on AMD cards, but is emulated via software. This is the origin of the "poorer" performance on the AMD side, but nobody told AMD not to develop a physics middleware of its own if the Nvidia one's not good enough.
That's the thing: lately AMD has been developing some open source software for things like this (think G-Sync and all that). It's really what the community needs, not hardware-specific software.
AMD did try to make a vendor-neutral solution for GPU-accelerated physics back in 2009-2010 (they worked with Bullet Physics to make it happen), but developers in general had no interest in GPU-accelerated physics effects in their games.
http://www.xbitlabs.com/news/multimedia/display/20110324174038_AMD_Game_Developers_Not_Exactly_Interested_in_Hardware_Accelerated_Physics.html
Most GPU PhysX is disabled entirely when the system has an AMD GPU. Games with GPU PhysX always have an option to enable it, but with an AMD GPU in the system this option cannot be enabled at all. The only game with GPU PhysX that I know of that forces the calculation to run on the CPU is Borderlands 2.
AMD can’t get an erection? WAT?
There is no serious competitor to Intel, so they can afford to warm up the same hotdog in new wrappers for you, without any consequences…
10 years ago AMD competed so well they had the better CPU in every way. The result was that they kept losing market share anyway, due to the unfair business practices Intel used. You're right, there is no competition, even when competitors made an actually better product in every way.
True dat, but AMD CPUs are long dead. RIP.
Also because Intel's CPUs have had higher IPC for decades compared to AMD. It looks like AMD is finally moving away from the "MOAR CORES" mentality with Zen, though.
But are they? People have been saying this ever since Zen was first rumored, but is there evidence? There’s been word of a 32 core CPU in the Zen lineup. It seems like they are absolutely still chasing MOAR CORES and I’m worried there won’t be anything gaming-capable on offer yet again. Zen needs to deliver at least one model with strong single-core performance that rivals recent i5s in real games, NOT in DX12. I’m building a new computer for Quake Champions some time in mid or late 2017 and if Zen doesn’t deliver my money goes to Intel just like it did in 2011 when AMD was already asleep at the wheel. It is absolutely infuriating to see a company losing market share for so many years and doing nothing to correct their disastrous course.
If they’re still chasing the DX12-Vulkan dream, they’ll most likely already have piggybacked the entire Zen train onto its multi-core capabilities, unfortunately.
If not, then perhaps they may have invested some resources into figuring out the "fewer cores, more power" problem of recent years, though I wouldn't hold my breath over it, sadly 🙁
I’m all for the Vulkan dream. Vulkan (and Mantle before it) has actually proven itself to be useful, at least when you have an AMD GPU. DX12 though has been a complete joke, and there’s no way MS will release it from the death grip of Win10 so it’s just a non-starter. It’ll continue to get sloppy DX11 ports.
This, pretty much, agreed.
It’s apparent that not a single person here has done any research whatsoever on Zen. So I will help out:
Jim Keller designed it along with a team of engineers. He also designed Apple's A8 processor and the Athlon 64.
It’s a complete redesign from the ground up and has nothing to do with Bulldozer or any previous design.
It employs SMT (Simultaneous Multi-Threading), basically the same thing as Hyper-Threading but AMD's version of it.
It will be on the 14nm FinFET process, so power draw should be significantly improved. Bye bye, planar.
read more here:
http://www.redditdotcom/r/Amd/comments/4ibfdo/lets_talk_zen/
It’s also being made by AMD, which in recent years has been facing considerable R&D difficulties, so even if it is made by the almighty Apple Gods, *pinch of salt* mate 😛
Thanks for your opinion. Oops, pressed enter by accident. Anyway, I understand that you're taking precedent into account, but the launch of the 480 was well done and it is a great card. Their last couple of releases have been good. And referring to Keller as a pinch of salt shows how little you understand microprocessors.
What. You misunderstand, the “pinch of salt” expression was being used in the frame of “I’m taking everything I hear with a pinch of salt”, not “I’m taking Keller is a pinch of salt” or whatnot. Regardless, even God can make a dud, especially when working at a new company, with a different way of doing things, with considerably less R&D resources, & far different partnerships to go with those.
Yes, the 480 is a great card, but that’s the point; it’s a great card, it’s not a legendary card, which is exactly what AMD needs right now. Rivaling is great, but what they need isn’t “2 Frames Per Second higher than a GTX 1080”, especially now that Nvidia hasn’t even unveiled the 1080 Ti yet. What they need are “20 Frames Per Second higher than a GTX 1080, at lower-than &/or equal-to power consumption levels, & a lower-than &/or equal-to price tag” & that, seems extremely unlikely to happen, Keller-sama or no Keller-sama.
A few things:
1. AMD never advertised Jim Keller as a selling point. This was found out before Zen was announced. I can’t remember if it was from his Linkedin account or elsewhere, but the internet found out that he was working on an unnamed project. Later we found out he was working on Zen with AMD. I personally feel that you’re not giving the guy much credit considering his accomplishments.
2. What legendary card does Nvidia have besides the 980 Ti (which is maybe one of the best graphics cards of all time, imo)? The 1080 is a mid-range card sold as top tier, and Nvidia knows the hivemind will buy it.
No, no, I recognize his talent, but I also recognise that he’s not working at Apple anymore. AMD has less resources, different partnerships, a different way of doing things, etc. etc. etc.
There is a distinct possibility he will succeed in creating something amazing, & there is a distinct possibility that without the rest of the Apple engineering team he'll fail; it could go either way for me.
I agree, the 1080 is fine but nothing glorious, but it doesn't change the fact that that's exactly what AMD needs: glorious, not fine, because, as you put it yourself, "the hivemind."
blogs.barronsdotcom/techtraderdaily/2016/12/05/intel-morgan-stanley-encouraged-by-renduchintalas-pragmatic-influence/
Interesting article. It makes me think that Zen will perform at parity with the current gen, so Intel is readying 10nm as a response. I personally think AMD will hit Ivy Bridge-Haswell levels of IPC and parity with everything else for multi-threading.
We can hope ^^
For decades? For one decade, precisely. :p AMD chips used to outperform Intel because AMD had higher IPC before the Core 2 Duos.
Back then Intel believed they could cover the lack of IPC with moar gigahertz. They believed they could get to 10 GHz with NetBurst. Forget about 10 GHz, they couldn't even reach half of that once power and heat started becoming a problem above the 3 GHz mark. The funny thing is, when Intel ditched the idea, AMD actually picked it up for their Bulldozer.
“unfair business practices intel used”
So what, for example? Could you elaborate, please?
betanewsdotcom/2009/12/16/ftc-intel-fell-behind-against-amd-used-unfair-tactics-to-catch-up/
I won’t go into the unfair business practices by Intel but the main thing that brought AMD down was Intel’s Core 2 architecture. AMD had nothing to compete with that and they have been behind Intel ever since.
Indeed.
Aha, Zen will be a very serious competitor and Kaby Lake will be only a huge failure!! Only fools will buy the 7700K and pay the same for it as for an 8/16 c/t Zen.
Well, they did say their new CPUs may not be faster, but more efficient.
My next comment may not be as impactful, but it's more efficient.
This comment, am i right?
TAKE MY MONEY
Yeah well, how much power did it need to accomplish the same task? That's a good question, I think. If this gen turns out to be an "efficiency" gen, then so be it. The next one will pack more punch.
LMAO, Intel dropping the ball with an obvious rehash gen right before Zen launches? This is your shot, AMD. If you can’t score this one there’s really no hope.
Zen is going to be comparable to current Intel anyway. Who cares? Zen's no gamechanger like was promised.
Also, le Pepe Reddit meme? Grow up, kiddo.
LOL @ liking your own comment.
The math so far from overclock dot net threads should put Zen around Haswell levels of single-core performance, with multi-core on par.
Maybe what you say is true, but pricing can make all the difference. If they actually price the Zen CPUs lower than their Intel equivalents while offering the same performance, we might see the gamechanger. And Intel will be forced to lower their prices too. Everybody wins.
Most people who buy PC parts go for the best value depending on their budget.
It will be a game changer if they perform as well and cost much less
0.86% and 0.02%
Are those big numbers?
Can someone moar tech savvy bring me some enlightenment please? 🙂
Or am I allowed to throw in an "Intel is DOOMED" joke somewhere?
It's tiny and insignificant. And it only applies if you would, for some reason, downclock your CPU. There is nothing to take home here.
That means I can throw my joke then! 😀
*ahem*
“0.86% and 0.02% less performance”
OH GOSH, INTEL IS DOOMED!
Yeah, why didn't the person running this overclock the 6700 instead of downclocking the Kaby Lake? Or better yet, why isn't there a head-to-head of overclocking vs. temps? Anyway, Kaby Lake was always touted as a 'no big deal' performance upgrade and was all about getting the 10 nm chipset up and running with enough minor improvements to attract performance freaks and bleeding-edge enthusiasts.
I do hope Intel gets to a point where they settle on a chipset and stick with it for a bit. That kind of 'future proofing' will do a lot more for people thinking an upgrade is worth it than incremental improvements, where the real gains are in better standards and video cards.
Also, the Xeon server processors are clearly the push this time around. With 32 cores each and larger caches, they are clearly hoping to get into the AI and simulation processing that has been dominated by high-end GPUs thus far. Given that CPU strength is hardly a bottleneck for existing games, that seems like a smart shift of focus.
It does sort of negate the whole purpose of the article, doesn't it? Why does every processor need to be faster anyway? What's wrong with pushing a processor that's just smaller (cheaper to produce), more efficient, or provides better yields? It's not always (if ever) about the consumer with Intel. I feel like the article doesn't realize this.
Well, at least they made a hardware-level DRM. Pathetic.
it’s time for amd to shine.
I'm waiting for something that is significantly faster than my 4.5 GHz 2500K, like when I made the jump from my 3.6 GHz E7500 to the 2500K.
Meanwhile, I own a Core i7 2600 while I'm sitting here reading this article. Bought it almost 3 years ago for half the price of a Core i7 4770. Even then I thought I was paying a bit too much for that CPU, but now I have no regrets. You can also overclock a non-K Core i7 Sandy or Ivy Bridge CPU to around 4-4.2 GHz, too. Still, even at stock it does gaming just fine, and those extra threads help a ton in rendering videos.
Seriously, after Sandy Bridge, Intel barely made any progress with each new CPU line-up. Now they’re actually regressing?
I’d still be using that ungodly X58 Hexa-Core i7 Extreme I had if it hadn’t burned out on me 🙁
X58 was Intel’s pinnacle, in my opinion. Sure, the early Bridge series had some technological improvements to them, but meh.
Forgot about Nehalem and all of its variants. Those were a big leap too from the Core 2 Quad 9XXX series, mainly in terms of IPC. And I also forgot about the Core i7 Extreme series; those were beasts, with more memory bandwidth available and more PCI-E lanes, but freaking expensive too, holy hell. How much do these CPUs sell for today? Because finding an X58 motherboard for them ain't cheap…
I mention Sandy Bridge because it was another large leap (compared to what we get today…) in performance, not only in IPC but in power consumption and base clock speeds too. From the Core 2 Quad Q9XXX to Nehalem and Westmere it was about a 20% IPC improvement, and from Nehalem and Westmere to Sandy it was again another 20% IPC improvement. And that's the last time we ever saw this kind of leap, because Intel decided to milk customers for all they're worth by barely improving CPUs each generation… no thanks to AMD and their disastrous Bulldozer launch…
Speaking of AMD, apparently the fate of Bulldozer was sealed by signing the agreement with GlobalFoundries (see WSA, short for wafer supply agreement). AMD bought wafers from GlobalFoundries, while Intel and Nvidia bought from TSMC.
For more info on that, watch the video “AMD – The Master Plan” by AdoredTV. A cool tech channel that I recommend. 🙂
Oh, yeah, TSMC has been crippling the GPU market for years; every time they need to switch to a smaller die size, bam! production problems. Thank f*ck everyone’s moving on from them now, & good riddance, really. Ugh.
Yeah, Sandy Bridge was a performance leap, but I always saw the new architecture as a step back compared to Nehalem & Westmere, so I've always looked down on it by comparison ever since.
Ugh, Intel. Like the other guy said, ever since they reshaped the CPU industry with the Core 2 Duo, they think they’re gods.
Not just the base cost of buying an X58 Chip, Motherboard + triple-stick RAM, but also cooling for an X58; if you’re going for an i7 Extreme & you want to overclock (manual overclock on an old-school BIOS, not one of those fancy new UEFI setups we get these days with the “does-it-all-for-you” buttons), you’ll need a full water cooling loop to get real numbers out of it, etc.
Ah, the good old days when you could overclock any Intel CPU, before the BCLK lockdown and only K processors being allowed to overclock via multiplier. Intel, you greedy bastards, you… Suddenly I feel nostalgic…
I got my first PC in December 2007, when I was finishing junior high. It had an Intel E4500, 1 GB of RAM and a GeForce 8400 GS, all housed on a Gigabyte GA-965P-S3 mobo (I remember the freaking serial number, wooo). Pretty terrible even for those times… but hey, I never had a PC until then, nor a console, unless you count the Terminator 2 famiclone as a console…
Still loved my PC back then. Then 2 years later I wanted to play better games, realized they kinda ran crappy and went "why do my games run bad?" Then I watched some YouTube and found out about hardware configurations. My parents didn't know a goddamn thing about PCs, but I saved some money and bought my first GPU (an HD 5570 1 GB DDR3; should've gotten the GDDR5 version, but that's what my local store had at the time and within my budget) and another 1 GB stick of DDR2 RAM. Finally I could play games, and I did: Need for Speed Carbon, Assassin's Creed, even freaking Crysis (if you consider 1024×768 at High settings with about 30 to 40 FPS and drops to 25 playable… I sure did at the time), though not as playable as I'd like. Also keep in mind I was like 15-16 years old and didn't know what Steam or paysafecard was (if it even existed), not to mention there weren't even any PC game stores (but there was a hardware store, WTF) in the shanty town I lived in at the time… Let's say I got the games via other methods…
Then I found out about overclocking. I was actually afraid at first, but did it anyway, with my GPU first, and was shocked that I could actually gain free performance. Then I learned about CPU bottlenecking. I overclocked my E4500 too, to around 2.75-2.80 GHz (stock was 2.2 GHz). I was really monitoring the temps carefully, but the performance was worth it. I felt really proud of myself. I even talked with somebody much older than me who was really good with PCs, who pretty much begged me not to overclock, but I did it anyway because I could. 😛
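For reference, the numbers in that overclock line up with the Core 2 era formula: core clock = FSB x multiplier. A minimal sketch, assuming the E4500's published locked 11x multiplier and 200 MHz stock FSB; the raised FSB values below are just the ones that reproduce the quoted 2.75-2.80 GHz, not the commenter's actual settings:

```python
# Core 2 era overclocking arithmetic: core clock = FSB x multiplier.
MULTIPLIER = 11        # E4500's locked multiplier (published spec)
STOCK_FSB_MHZ = 200    # stock FSB, quad-pumped to 800 MT/s (published spec)

print(f"stock: {MULTIPLIER * STOCK_FSB_MHZ / 1000:.2f} GHz")   # 2.20 GHz

# Hypothetical raised FSB values that land on the clocks quoted above:
for fsb_mhz in (250, 255):
    print(f"FSB {fsb_mhz} MHz -> {MULTIPLIER * fsb_mhz / 1000:.2f} GHz")
# FSB 250 MHz -> 2.75 GHz; FSB 255 MHz -> ~2.80 GHz
```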
Just finished high school, held on to that PC as long as I could, went to college and also moved to a better town (though not by much…). Then I upgraded my system to an Intel Pentium G2020 with 4 GB of DDR3, the same HD 5570 GPU and an ASRock H61M-DGS mobo. It was a rather cheap system, but hey, it was still a really good upgrade from my old PC. Had very good grades in college and entered the "tax free" branch of students because of it. Now I was really making some savings and decided to go big with my next GPU. So I got a Sapphire HD 7870 and another stick of 4 GB DDR3 RAM. Hell, some classmates were saying I was an idiot for spending that much on PC hardware. I also learned about paysafecard and how you can pay with it on Steam. Yeah, I was called stupid for that too… even though I was always on the lookout for sales and whatnot. I have a no-more-than-20-euros-on-a-game policy… My parents would freak out if they knew I was paying full price for just 1 game, and I would die on the inside too…
Anyway, I realized that the CPU was bottlenecking, so I ended up saving some more and got myself an i7 2600… best buy of my life. And I'm still using the same setup now: i7 2600, 8 GB of RAM, HD 7870 and an ASRock H61M-DGS. The HD 5570 was given to a friend of mine who plays DOTA and DOTA 2. He thanked me for that.
Wow, this is the most stupidly long comment I've ever written in my whole life, and probably the most stupidly long comment you'll ever read on this website. Don't know if I should feel proud or sad about this… Well, I wrote it all the way through, might as well post it. If you read this through, you're awesome and you earn a medal and a cookie from me (imaginary at least). 😀
Just wanted to share my experience of being a PC gamer. :3 All because you mentioned the word BCLK. The heck is wrong with me. :))
Ah, the old 7870…… And a Sapphire 7870 too! 😀
😛
Your specs are a bit sad, but I’ve seen worse. Other than that, eh, we all started somewhere – most of us somewhere extremely stupid, even. Then we got better…. maybe fried a couple of important parts trying to overclock on an air cooler…..
….. Maybe spent a ridiculous sum on an X58 Hexa Core, with associated super-Mobo, because they came into some extra cash……….
…… etc.
P.S. COOKIEZ.
That’s why we have to wait for at least 300% boost to buy a cpu/gpu 😀
So, wait, let me get this right;
Everyone makes a big fuss when Microsoft announces they're going to cut support for all non-Windows 10 OSes for the Intel 2017 lineup, & then some 3 months or so later, it turns out the Intel 2017 lineup is a joke anyway?
Wow….. Ouch.
What do you expect from a company that supports scam artist Anita Sarkeesian and Feminist Frequency with money…
Unfortunately we have no choice atm if we have a decent GPU.
This is really discouraging… Intel hasn't made any strides in performance in years, just better power efficiency, which is great, but we want both.
Because they already dominate every x86 player in the desktop space, Intel turned to mobile to prey on ARM instead. But so far ARM still holds its own when it comes to mobile. In Intel's wildest dreams they're probably hoping to dominate both desktop and mobile with their x86.
If the Kaby Lake 7700K is the best Intel will offer in 2017, then AMD's Zen will have a clear path to a win!!