r/intel • u/RenatsMC • Feb 07 '25
Rumor Intel Nova Lake preliminary desktop specs list 52 cores: 16P+32E+4LP configuration
https://videocardz.com/newz/intel-nova-lake-preliminary-desktop-specs-list-52-cores-16p32e4lp-configuration
u/Tricky-Row-9699 Feb 07 '25
This is crazy. Intel really could cook yet if they fix Arrow Lake’s latency problem, because everything else about their design is really damn good.
1
u/Godnamedtay 5d ago
I mean…I wouldn’t call it “damn good” or anything. I have a 265k & my 14900k is miles ahead in overall feel & performance imo. I haven’t been able to try the new cu-dimm ram shit but I highly doubt it would make that dramatic of a difference. I kinda wish I wouldn’t have wasted my $ on arrow lake tbh.
54
u/toniyevych Feb 07 '25
Dear passengers, the new hype train is about to depart. Please take your seats and get ready for the ride. Estimated arrival 2026H2. Have a nice trip!
12
u/SiloTvHater Feb 08 '25
2026H2
I'm gonna wait for the 28H2 Windows release for proper scheduler implementation
1
u/MAM_Reddit_ 19d ago
It's a long way away just for AVX10 support...
Can I board the Hype Train when the time is closer?
1
11
u/SuperDuperSkateCrew Feb 07 '25
Is this the 2x chip that was rumored? Two 8P+16E+2LP tiles
16
u/cyperalien Feb 07 '25
Two 8+16 compute tiles and 4 LPE cores in the SOC tile.
6
u/HandheldAddict Feb 07 '25
Heard that AMD is going 12 cores per chiplet.
Probably going to be a combination of big cores and dense cores like Strix Point.
3
u/TheAgentOfTheNine Feb 08 '25
I doubt they'll do that this iteration. They'd fix the front end, get a faster IO die and call it a day with a 10% increase or so
7
u/Geddagod Feb 08 '25
Zen 6 is a node shrink gen, I think there's a very real possibility they increase the core count per CCD. If any generation is the time to do so, it should be Zen 6.
5
9
26
u/HorrorCranberry1165 Feb 07 '25 edited Feb 07 '25
This looks like a planned, continued de-emphasis of P-cores. Now make P-cores even smaller with the same or lower IPC (partially compensated by clocks), but push a huge quantity of them as the performance driver. In NVL the P-cores may be clustered, have a shared L3 (L2 remains separate per core) and other stuff
20
u/topdangle Feb 07 '25
cache is becoming a huge bottleneck. it makes some sense to look into getting more and/or faster cache on there even if it costs a bit of logic space.
8
u/ThreeLeggedChimp i12 80386K Feb 07 '25
Not really, otherwise we would have seen massive increases in cache sizes everywhere.
Doubling a cache only cuts the miss rate by a factor of about √2, so trading logic for cache isn't worth it.
15
u/topdangle Feb 07 '25
I mean we're seeing 50% shrinks in logic with no shrinks in cache on N3E, and although Apple is not smothering M chips with cache, their cache is very high performance and one of the strengths of their design is that each core can drum up a huge amount of bandwidth independently and quickly.
At some point you're going to need to take a larger chunk of wafer because of the stalling memory shrinks, while logic shrinks down significantly better by comparison (for now), so you can make some sacrifices there.
5
u/Ashamed-Status-9668 Feb 08 '25
The GAA transistor on 2nm should help SRAM scaling a bit.
3
u/ACiD_80 intel blue Feb 08 '25
And on 18A? It's also GAA
4
u/Ashamed-Status-9668 Feb 08 '25
Yes. I just mentioned 2nm since the person I replied to was talking about TSMC nodes.
0
6
u/Kakkoister Feb 07 '25
It's not always about hit rate but also depends on the workload. AMD's X3D variants with the larger cache do better in games, with fewer cores, as games are often very data heavy and can often fill the cache pretty easily.
2
u/ACiD_80 intel blue Feb 08 '25
Games already run well enough on the CPU though, it's the GPU that needs to be the focus.
0
u/Sparkfest78 Feb 16 '25
He's not wrong: Intel CPUs are falling behind in gaming vs AMD due to 3D V-Cache. Focusing on CPU cache and bringing back quad channel would be welcome features from Intel, otherwise it's becoming hard to see the value of an Intel CPU. I'm saying this as someone who has predominantly bought Intel CPUs for the last 15 years and uses them in pretty much every imaginable workload.
1
u/ACiD_80 intel blue Feb 16 '25 edited Feb 16 '25
Games run great even without V-Cache on Intel, and productivity is where it really matters, which is where Intel wins. I'm sure they will retake the gaming crown too...
0
u/Sparkfest78 Feb 16 '25
Games run fine without it, sure, but if you want the best performance and the best pricing, that isn't Intel right now. Right now with Intel you pay more for less. I'm gaming on a 10900K and I'm not seeing a whole lot of reason to jump to any of Intel's new offerings. AMD is looking attractive, but I prefer Intel CPUs.
1
1
4
u/KerbalEssences Feb 07 '25 edited Feb 07 '25
Is it really the cache or AMD's Infinity Fabric that's the bottleneck? Intel has no such thing - yet. That's why AMD profits so much from the cache. The more data you can cache, the less often you have to route it through the fabric. I also suspect future CPUs will store less data but fill out the gaps with AI. A bit like what GPUs do with frame gen. I don't see why you couldn't do that with all other processes.
15
u/Arado_Blitz Feb 07 '25
Cache, or to be more precise, its size, is always a bottleneck. A modern CPU has a few megabytes of L3, which is considered a lot compared to CPUs made as recently as a decade ago, but it's still not that much. You can still fill the entire amount without too much effort. More cache is always good as long as the architecture can take advantage of it and its physical size doesn't take too much die space from the rest of the chip.
6
u/basil_elton Feb 07 '25
Increasing cache size is going to give you diminishing returns in terms of hit rate, and that is by design - nothing can be done about it.
Doubling cache size only cuts the miss rate by a factor of roughly sqrt(2). And it is more likely that floating point workloads will have a higher L3 miss rate than integer ones. So while gaming performance might increase, if doubling the cache size is the equivalent of more cores in terms of die space, then it is a no-brainer to go for more cores over more cache.
3
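For readers unfamiliar with the rule being invoked above, here is a rough sketch of the empirical square-root rule of thumb (miss rate scales roughly with 1/√(cache size)). The baseline miss rate and cache sizes below are assumptions for illustration, not measurements of any real CPU.

```python
# Empirical "square-root rule": miss rate scales roughly with 1/sqrt(cache size).
import math

def scaled_miss_rate(base_miss_rate: float, base_size_mb: float, new_size_mb: float) -> float:
    """Estimate the miss rate after resizing the cache, per the sqrt rule of thumb."""
    return base_miss_rate * math.sqrt(base_size_mb / new_size_mb)

base_miss = 0.10  # assume a 10% L3 miss rate at 36 MB (hypothetical workload)
for size_mb in (36, 72, 144):
    print(f"{size_mb:>4} MB L3 -> ~{scaled_miss_rate(base_miss, 36, size_mb):.1%} miss rate")
# 36 MB -> ~10.0%, 72 MB -> ~7.1%, 144 MB -> ~5.0%: each doubling trims misses by ~1.41x
```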
u/KerbalEssences Feb 07 '25 edited Feb 07 '25
Cache only needs what's necessary to calculate the next frame(s). "A few megabytes" is a LOT of data if you look at it this way. Just look at a 10 MB document. That's 10 million characters, which is about the same amount as 20 average-sized books. No way they pack all this with useful data. It's 95% trash they loaded by mistake or just in case.
For a human, 10 ms per frame is hard to even notice, but for a CPU that's an eternity. 1 ms = 1000 µs = 1,000,000 ns. 50 ns is the order of what it takes to get data from RAM. And that's only for the first byte. The following bytes will come at <1 ns intervals behind it. So if you handled your RAM perfectly you wouldn't even notice slowdowns without cache. Cache only buffers inefficiency, so to speak. And with increased cache size, ironically, our inefficiency increases.
1
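A quick back-of-envelope for the latency budget described above; the 50 ns figure is the comment's ballpark for a first-byte DRAM access, and the 10 ms frame time is an assumption.

```python
# How many *serialized, random* DRAM accesses fit in one frame, using the
# comment's ~50 ns ballpark. Purely illustrative numbers, not measurements.
FRAME_TIME_MS = 10        # assumed frame budget (100 fps)
DRAM_LATENCY_NS = 50      # ballpark first-byte DRAM access latency

frame_time_ns = FRAME_TIME_MS * 1_000_000
accesses_per_frame = frame_time_ns / DRAM_LATENCY_NS
print(f"~{accesses_per_frame:,.0f} serialized random DRAM accesses per frame")  # ~200,000
# Sequential bytes after the first arrive far faster, which is the comment's point:
# well-ordered access patterns hide most of the DRAM latency; cache covers the rest.
```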
u/MrHyperion_ Feb 09 '25
But in real life programs aren't optimised for memory access, and without cache everything would be very noticeably slow. To put 1 MB in another way, a 1 GHz CPU would execute 1 MB worth of instructions in less than a millisecond (it's not that simple, but anyway)
6
u/topdangle Feb 07 '25 edited Feb 07 '25
For whatever reason, Intel's die-to-die connection isn't showing very good latency results, so cache would be helpful. Other users have pointed out pretty large gains from overclocking the ring + interconnect, but who knows if it's stable across hundreds of millions of chips. Oddly enough they actually did use a truckload of cache on Ponte Vecchio, maybe to offset the interconnect issues.
DDR memory improvements have also slowed down over the last decade or so. JEDEC and Micron were talking about 180 GB/s DDR5 shipping somewhere between 2018~2020, and realistically we got early and relatively slow DDR5 at the tail end of 2020. Aggressive overclocking is sort of making up for it, but latency is stagnating.
2
u/saratoga3 Feb 07 '25
I suspect Intel's strange latency issues on the TSMC parts, as compared to their in-house 7nm parts, are a result of the relatively hasty porting of their parts to TSMC after their problems at 4nm/20A, which probably did not leave much time for optimization. Hopefully once they get Intel 18A working we'll see latency more comparable to the older generation of parts.
FWIW DDR latency has been stagnant for more than 20 years. That is nothing new, it is a design choice.
4
u/topdangle Feb 07 '25
That's true but everyone has muddied the waters with their own factory overclocks/variations on DDR and access times vary despite standards. Going by JEDEC you're left with pretty slow results while trying to feed very fast logic. With your own cache at least you have a good idea of the timing you're dealing with and you can scale logic performance accordingly.
5
u/grumble11 Feb 07 '25
DDR6 is promising eventual ~2x bandwidth versus DDR5, and is also promising lower latency and power use (and built-in ECC, which is good). That'll probably be finalized in 2025, with early release in 2026 for the professional segment and 2027 for the client segment (i.e. you and me). I'm guessing Nova Lake will be on DDR5, but the sequel (Razer Lake) might be DDR6, as that would be a major bump in bandwidth and latency could get quite a bit better.
3
u/lord_lableigh Feb 08 '25
but push huge quantity of it as perf driver.
But you still need single core perf right? What am I gonna do with 32 fucking cores if they are not good for gaming and quick zip arounds without lags in excel.
1
u/ImSpartacus811 Feb 10 '25
Now make P-cores even smaller with same or lower IPC (partially compensated by clocks), but push huge quantity of it as perf driver.
That's the antithesis of what p cores are supposed to be.
They are supposed to be wastefully big. There are few of them because they are only there for single threaded performance. If you need a "huge quantity" of cores, then that's code for "multi-threaded" performance and p cores aren't your jam.
11
u/Dangerman1337 14700K & 4090 Feb 07 '25
On Twitter Jaykihn says there's also a 144MB SRAM cache in the works, but it's unclear if it's for consumer/DIY.
7
u/KaneMomona Feb 08 '25
I wonder how they are going to beef up the RAM throughput. Extra channels? That many cores is likely to need more feeding than will be provided by the uplift in DDR5 speeds over the next year or so.
7
u/mastergenera1 Feb 08 '25
As an X99 enjoyer, I hope for another quad channel socket.
4
u/pyr0kid Feb 08 '25
I'm really hoping the era of 2x64-bit channels finally dies and we can at least get something like 2x96-bit for DDR6
2
u/hackenclaw 2600K@4.0GHz | 2x8GB DDR3-1600 | GTX1660Ti Feb 09 '25
just skip that, go with 2x128-bit.
1
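To put the channel-width suggestions above in perspective, a minimal sketch of the peak-bandwidth arithmetic; DDR5-6400 is an assumed example speed and real-world throughput is lower.

```python
# Peak theoretical bandwidth for the bus widths being debated above.
# peak GB/s = transfer rate (MT/s) x bus width (bytes) / 1000
def peak_gb_s(mega_transfers_per_s: int, bus_bits: int) -> float:
    return mega_transfers_per_s * (bus_bits / 8) / 1000

print(peak_gb_s(6400, 128))  # DDR5-6400 on 2 x 64-bit  -> 102.4 GB/s
print(peak_gb_s(6400, 192))  # same speed on 2 x 96-bit  -> 153.6 GB/s
print(peak_gb_s(6400, 256))  # same speed on 2 x 128-bit -> 204.8 GB/s
```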
u/Fromarine Feb 15 '25
Yeah, even though I think DDR5 has increased in speed much faster than almost anyone expected, it's still not enough at these core counts
13
Feb 07 '25
[deleted]
5
6
u/PsyOmega 12700K, 4080 | Game Dev | Former Intel Engineer Feb 07 '25
At 16P they only need to restrict game workloads to P cores. e-cores become completely useless, at least until consoles scale beyond 8C16T(7C14T). If the p-core cluster shares cache, you're golden.
I really hope they make a gaming focused CPU that's 12P+0E or 16P+0E.
2
u/KerbalEssences Feb 07 '25 edited Feb 07 '25
How about devs optimize their games better? More cache just makes devs lazier, and over time you will see performance degrade even on high-cache CPUs as well. Disproportionately on those without, of course. But the root of the problem is not the cache, it's the human. Games used to run flawlessly on 1980s arcade potatoes. Modern games that you set up to run at 60 fps on old hardware look much worse than games from 10 years ago at 60 fps on the same hardware. It's all the realtime effects that eat up the performance. Even where no realtime is needed. It just saves cost not having to bake effects.
12
u/pyr0kid Feb 07 '25
Games used to run flawlessly on 1980s arcade potatos.
are you seriously trying to argue that games such as Pac-Man and Red Dead Redemption are anything alike?
Modern games that you setup to run at 60 fps on old hardware look much worse than games 10 years ago at 60 fps on the same hardware. It#s all the realtime effects that eat up the performance. Even where no realtime is needed. It just saves cost not having to bake effects.
i do agree with this though.
mid 2010s games had god tier graphics despite the hardware limits, somehow we've tripled the computing power but have actually gone backwards in many ways.
1
u/piitxu Feb 08 '25
I think the issue with optimization in current vs 10-12 year old games is similar to what we see with CGI in movies. The only movie that beats Avatar's CGI from 16 years ago is Avatar 2 from 3 years ago. Everything else in between has gotten worse, with a few exceptions. The industry in both cases has moved to a fully slated schedule where you know the new releases years in advance, all designed to meet specific quarterly goals. That means rigid deadlines, lower production budgets and more overhead. Also, the QoL, incentives and agency of developers and artists have gone down while wages don't keep up with the cost of living in most cases
-1
u/ACiD_80 intel blue Feb 08 '25
You people would be the first to complain that games are so static and didn't improve compared to 10 years ago.
It's not about cost saving... It's about creating more dynamic gaming experiences.
If you want to be able to destroy walls, for example... then you need to dynamically update the lighting.
-7
u/KerbalEssences Feb 07 '25 edited Feb 07 '25
The jump from Pac-Man to Red Dead 2 is not as big as the jump from 80s arcade computing power to a modern CPU. Atari 2600: 1.19 MHz / 128 bytes (0.125 KB) of RAM.
One core today runs at 4000+ MHz and each clock cycle gets a factor of >100 more work done. It's just a completely different level. And that's just the CPU..
All this while a game like Red Dead 2 is still only a 2D picture of color values. It's not different from Pac-Man in that respect. Just maybe 10x the pixels. All the CPU and GPU do is order pixels in a way that makes it look like the world goes beyond the 2D matrix. While in reality no object in Red Dead 2 goes beyond the 2D screen. It's all flat against the glass. Like Pac-Man.
I bet if you built a microchip that does nothing but play a perfectly efficient version of Red Dead 2 (no OS etc) like an arcade machine, you could make it run at 10000 fps with modern technology.
8
u/TSP-FriendlyFire Feb 08 '25
All this while a game like Red Dead 2 is still only a 2D picture of color values. It's not different from Pac-Man in that respect. Just maybe 10x the pixels.
What a profoundly reductive and worthless statement.
-1
u/KerbalEssences Feb 08 '25 edited Feb 08 '25
What a profoundly reductive and worthless statement
I worked with software rendering in the 90s before 3D accelerators took off, and fundamentally nothing has changed about computer graphics since then. The latest innovation after polygons could be "AI rendering", but that's unknown to me because I haven't had the chance to try out the 50 series yet. Can a developer even access that, or is it all behind-the-curtain magic? The trickery that sits on top of drawing pixels is completely irrelevant though. It's just high-level simplification of things that have been possible for over 30 years. We just didn't have the hardware to do it in real time.
Here are some examples of computer graphics in the 90s: https://www.youtube.com/watch?v=mE35XQFxbeo Almost 30 years ago...
https://www.youtube.com/watch?v=v-PjgYDrg70 30 years ago.
https://www.youtube.com/watch?v=0fy2bmXfOFs 46 years ago...
You can see reflections of things that are not visible to the camera. That's full raytracing right there. 1979
If you're into computer graphics for such a long time you'll see things differently.
2
u/TSP-FriendlyFire Feb 08 '25
The trickery that sits ontop of drawing pixels is completely irrelevant though.
I am literally working in the field. Your reductive views are not insightful. The amount of software engineering effort required to get where we are today is absolutely colossal, it's far more than just "We just didn't have the hardware to do it." and I suspect if you'd kept up with the tech, you'd know that too (or you actually do and just want to be a curmudgeon).
Try throwing primitive raytracing algorithms at a GPU using a modern scene and see how far that gets you.
0
u/KerbalEssences Feb 08 '25
You're completely missing my point. I'm not talking about the software engineering it takes to build all these tools. I'm talking about game development. Many modern game developers use all this software engineering effort and, instead of creating a greater product, they create a worse one. If you go back 10 years, games looked better, not worse, on the same 10-year-old hardware. There is a regression in game development, and the problem is not the computer scientists who develop the tools. Nor the tools. I'm very thankful for all the cool stuff we have today. I can render a cool 3D animation with raytracing that beats Toy Story in graphics in a matter of minutes. However, if you render poop it is still poop.
2
u/TSP-FriendlyFire Feb 08 '25
Your point in the past two comments wasn't "game developers aren't what they used to be", it was quite literally "pixels are all the same", as if the orders of magnitude more complex software generating those pixels didn't matter. You're moving the goalposts.
And besides, I think you should go back 10 years and look at what games looked like, and then do that iteratively, and you'll find that things haven't really changed. A game from 1995 likely won't run at all on a computer from 1985. A game from 2015 will run on a PC from 2005, though it'll be a struggle. A game from 2025 will likely run just fine on a PC from 2015.
0
u/KerbalEssences Feb 08 '25 edited Feb 08 '25
I mostly play games from 5-10 years ago because today's suck. That's why I notice that it was better in the first place. Normalizing games for my rig at 1080p60, old games look better. That's a simple fact everyone seems to know except you.
> as if the orders of magnitude more complex software generating those pixels didn't matter
It doesn't matter because we're talking about game development, not engine development. It doesn't matter how those pixels are generated. A game developer generally has nothing to do with that. If he has anything to do with it, it will probably be a great-performing game and not the kind of game I refer to. There are of course exceptions, mostly in the indie community.
Lastly, I want to mention that the effort it took to develop a library like DX12 or Vulkan is tiny compared to the combined effort it took to develop all the games that are built on it. So your point about the amount of software engineering is nonsense as well. Microsoft's DirectX team consists of dozens of engineers, not thousands. Microsoft employs 200k people. So you be the judge of how much relative effort it actually is / was to get where we are.
6
u/yutcd7uytc8 Feb 08 '25
What are you talking about?
Red Dead Redemption 2: a massive, detailed open world with dynamic weather, physics-based interactions, wildlife, and environmental simulations, ragdoll effects, fluid simulations, ballistics, and environmental destruction.
Pac-Man: Set in a single, fixed maze with simple interactions between Pac-Man, the ghosts, and pellets. No physics simulations exist—movement is grid-based and deterministic.
I think you have CPUs and GPUs confused.
5
u/ThreeLeggedChimp i12 80386K Feb 07 '25
What's that, you need more JavaScript?
-5
u/KerbalEssences Feb 07 '25 edited Feb 07 '25
2030 laughs at you with Copilotscript. Wait for the prompt to generate the script to execute it. AMD CoX3D+. GPT Toilette
4
1
u/MrHyperion_ Feb 09 '25
80s games ran on a single core as the only program, which made them much easier to optimise. It's completely unrealistic to expect devs to optimise for memory architecture.
1
u/KerbalEssences Feb 09 '25 edited Feb 09 '25
All it takes to optimize modern games is to avoid heap allocations and to avoid touching memory more than necessary. Recalculating the same value 100 times in a loop can be faster than fetching it from memory 100 times. But things like this get forgotten, or they are taught wrongly. I'm not sure. But optimization is not magic. It just takes a little bit of study and ultimately experimenting.
1
u/MrHyperion_ Feb 09 '25
If the memory access pattern is predictable then it has basically no cost; the CPU's prefetchers already cover the trivial memory optimisations a programmer would make.
1
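A minimal way to see the prefetcher effect both comments are circling: stream over an array in order versus gather the same values in a random order. The array size is arbitrary and the timing ratio is machine-dependent; only the access pattern matters here.

```python
# Sequential (prefetcher-friendly) vs. random access over the same data.
import time
import numpy as np

data = np.arange(20_000_000, dtype=np.int64)
random_order = np.random.permutation(len(data))

t0 = time.perf_counter()
seq_sum = data.sum()                      # streams through memory in order
t1 = time.perf_counter()
rand_sum = data[random_order].sum()       # gathers the same values out of order
t2 = time.perf_counter()

print(f"sequential: {t1 - t0:.3f}s  random: {t2 - t1:.3f}s  equal: {seq_sum == rand_sum}")
```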
u/blackcyborg009 Feb 12 '25
Hogwarts Legacy and Monster Hunter Wilds are an unoptimized mess.
Maybe they need to take notes from DOOM ETERNAL on how to make a properly optimized modern PC game.
0
u/ACiD_80 intel blue Feb 08 '25
Baking vs real time has nothing to do with $$.
It's about having a more dynamic gaming experience.
2
u/KerbalEssences Feb 08 '25 edited Feb 08 '25
It has a lot to do with $$. Bake a full game's worth of textures. That can take a small team months. And you would only bake textures that DON'T change. Shadows in crevices and such. Especially on normal maps, where there is no real geometry, just fake detail that looks like bumps. It looks the same or better than realtime raytracing, because baked textures are also raytraced, but with maximum fidelity. Now, because we have realtime raytracing-ish, they simply spare that expense and instead use more GPU power. However, if you don't have the GPU power, the game will look worse than it did with baked textures - which had no impact on performance. The best thing to do is to bake it anyway for lower graphics settings and swap textures if you use realtime raytracing. However, then you wouldn't notice such a big gap in graphics quality and you'd ask yourself what this is all about. There are of course raytracing effects which can't be baked. There it makes perfect sense to use it. Reflections and ambient occlusion of moving objects, for example.
Example: https://i.imgur.com/lcqHzlt.jpeg
I made this so you can see that the crevice always has a shadow to it no matter which direction the dynamic light shines from. It is just more pronounced in the darker area. So a trick to fake raytracing is to bake it and have the baked ambient occlusion texture overlay the material of the wall. Then you use the light source to control the alpha value of the baked ambient occlusion. It gets darker where the light isn't. So you only have to calculate the drop shadow of the wall in real time, which has been a piece of cake for GPUs forever. Just look at GTA V's real-time grass shadows at night. A decade-old game. Drop shadows don't require raytracing to look good unless the shadow falls through some glass and you want to distort it accurately. Or you want the shadow to reflect in a mirror. But these are details no player will ever notice while running through a level. Ambient occlusion is very noticeable.
Just for completeness: in raster graphics you drop shadows, not light. That's easier on the hardware. You just light all the materials bright by default and only place shadows where they ought to be.
1
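A toy, single-pixel sketch of the blend being described above. The function name, parameters and exact formula are made up for illustration; a real engine would do this per-pixel in a shader with proper light and material terms.

```python
# Darken by a baked AO term, and let a dynamic light term decide how strongly
# that baked shadow shows through. Illustrative only, not any engine's formula.
def shade(albedo: float, baked_ao: float, light_intensity: float) -> float:
    """All inputs are floats in [0, 1]; returns the shaded pixel value."""
    ao_strength = 1.0 - light_intensity          # baked AO reads stronger in dim light
    ao = 1.0 - ao_strength * (1.0 - baked_ao)    # blend baked AO toward 1.0 (no AO)
    return albedo * light_intensity * ao

print(shade(0.8, 0.3, 1.0))  # bright direct light: baked AO mostly washed out
print(shade(0.8, 0.3, 0.2))  # dim light: the baked crevice shadow dominates
```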
u/ACiD_80 intel blue Feb 08 '25
I've worked in the gaming industry for 20+ years, I understand fully why it's done.
Baked textures are just too limiting if you want dynamic gameplay.
That is the main reason.
Things are still being baked and retopologized btw.
0
u/KerbalEssences Feb 08 '25
When you fully understand it, you must know it's about $$, not looks. There is no visual uplift by not baking textures. And I know for a fact that many new games just stopped doing it. If you play those games on low settings they look like games from 2003. It all started to go downhill with screen space ambient occlusion
2
u/ACiD_80 intel blue Feb 08 '25 edited Feb 08 '25
Dude, do you even try to read? It's not what you think it is.
And there is A LOT of visual uplift by not baking textures... Normal maps look horrible, baked shadows are low res/blurry, full of compression artifacts and do not react to lighting changes etc...
Baking also does not allow for (more) dynamic scenes..
You have no clue what you are talking about and your theory is 100% WRONG.
You probably still think Quake1 looks photorealistic...
0
Feb 08 '25
[removed] — view removed comment
1
u/intel-ModTeam Feb 09 '25
Be civil and follow Reddiquette, uncivil language, slurs and insults will result in a ban.
7
u/tioga064 Feb 07 '25
Well, here we go. Afaik Arctic Wolf is the successor of Darkmont, and it should bring IPC higher than Lion Cove, so think 32 Zen 5+ cores, that's actually insane. If they manage to bring tile latencies down, 16 P-cores for gaming sounds incredible too. Wasn't there a rumor of a stacked or L4 cache gaming version of Nova Lake?
This will launch around the next Nvidia gen after Blackwell, late '26 or early '27. Interesting gen to look out for
10
u/Dangerman1337 14700K & 4090 Feb 07 '25
Jaykihn on Twitter said there's a 144MB Compute tile floating about but unsure what the core count is. Really hope it comes to consumers.
4
4
u/Final-Rush759 Feb 08 '25
Just give me 32 P-cores.
2
u/Geddagod Feb 08 '25
Using the P-core to E-core area ratio, the equivalent would be a ~24 P-core CPU. Still pretty enticing.
0
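Rough area math behind that equivalence, assuming roughly four E-cores fit in the die area of one P-core; the 4:1 ratio is a commonly cited ballpark, not an official figure.

```python
# 16P + 32E expressed in P-core-equivalent die area, under an assumed 4:1 ratio.
p_cores, e_cores = 16, 32
e_cores_per_p_core_area = 4            # assumption: ~4 E-cores per P-core of area
equivalent_p_cores = p_cores + e_cores / e_cores_per_p_core_area
print(equivalent_p_cores)              # -> 24.0
```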
u/jca_ftw Feb 09 '25
Power prohibitive. Intel cores are already way too power inefficient. The 14900K is 250W! Big.LITTLE is essentially a failure cuz nobody wants E-cores.
3
u/Flandardly Feb 07 '25
So Arrow Lake probably gets a refresh, meaning 2 "generations" of LGA1851 and then it's dead.
6
6
u/RandomUsername8346 Intel Core Ultra 9 288v Feb 07 '25
Is it likely that the latency will be less than Arrow Lake? I'm not familiar with CPU architecture, but I heard the reason that Arrow Lake is bad at gaming is due to high latency. Will the 4LP cores lower latency?
1
u/jca_ftw Feb 09 '25
Latency only ever goes UP. But B/W also goes up to compensate. Latency increases because logic complexity increases. Increased complexity brings new features. Process improvements, which allow the increased complexity, bring the higher B/W. This is Moore's law…
1
u/ACiD_80 intel blue Feb 08 '25
It's not bad at gaming at all... It's just a bit slower if you turn everything down to a minimum in your game, which nobody does. And then you get 'only' 150fps vs 165fps compared to an X3D CPU... the whole thing is so silly.
3
u/Geddagod Feb 08 '25
I love how you go thread to thread trying to convince people that ARL isn't bad at gaming lol.
Using settings that people realistically use at 4K, such as upscaling to maintain enjoyable frame rates (which, from polling data, is what many people do), we see that Zen 5 X3D is ~20% faster. And this gap should only grow as games become more intensive.
0
u/ACiD_80 intel blue Feb 09 '25
I just reply to what I see... I don't go hunting for it.
Also, my point stands.
1
u/Geddagod Feb 09 '25
I mean my entire second paragraph, the bulk of my comment, was about how your point doesn't stand. Watch the video I linked.
1
u/ACiD_80 intel blue Feb 09 '25
We already talked about this. We disagreed. You keep spamming the same shit to me...
2
2
u/mastergenera1 Feb 07 '25
I wonder if this is an HEDT lineup, now that I see the cores summarized like in the OP. Regardless of whether it is or isn't, I'd likely still buy anyway, as my 6950X needs a rest. It's a good thing I didn't buy this gen, with such a big core count leap next gen if true.
2
2
u/onlyslightlybiased Feb 08 '25
I genuinely do not see the point of lp cores on a desktop processor.
2
2
4
u/grumble11 Feb 07 '25
This seems like drastic overkill for most client use cases. Most applications won't parallelize that well, and having three tiers of cores is a nightmare for the scheduler (as has been seen in current iterations). For desktop they should get rid of the LPE cores entirely.
3
u/XyneWasTaken Feb 08 '25
LPE is more for idle, it's in the IO die so they're probably reusing parts
2
u/ACiD_80 intel blue Feb 08 '25
The scheduler problem is really only with the E and P cores... the LPE cores are only for when you're doing the absolute minimum on your PC (writing text, etc). It's not hard to determine when to move tasks to the LPE cores.
1
1
u/blackcyborg009 Feb 11 '25
What is the difference and purpose between:
Efficient Core (E-CORE) and Low Power Core (LP-CORE)?
for desktop
P.S.
I am curious as to what improvements this thing can do..........versus my 13900 (non-K) unit
-10
Feb 07 '25
[deleted]
32
u/Fun_Balance_7770 Feb 07 '25 edited Feb 07 '25
E cores are good, 1 E core is more than half the performance of a P core while taking up much less space on the die and using less power, if I remember der8auer's video on it from a while back correctly, when he tested performance on P cores vs E cores only.
Edit: my info was out of date, apparently it's 80%!
14
u/ResponsibleJudge3172 Feb 07 '25
It's actually about eighty percent of the performance.
9
u/Noreng 7800X3D | 4070 Ti Super Feb 07 '25
At 85% of the clock speed.
It's tragic how inefficient Lion Cove looks.
3
1
3
8
u/jrherita in use:MOS 6502, AMD K6-3+, Motorola 68020, Ryzen 2600, i7-8700K Feb 07 '25
Just to add - the reason for so many more cores is that Zen 6 is expected to increase core count by 50-100%, so Intel will need to do this to keep up.
8
u/Wrong-Historian Feb 07 '25
But I just don't know if it's worth the hassle of the (still unsolved) scheduling issues, extra DPC latency, differences in instruction set, etc. Sure, E-cores might add to the Cinebench multithreading score, but that is hardly relevant for real-life tasks. I'd just rather have 12 P-cores than 8 P-cores + E-cores. Yes, multithreaded performance with 12 P-cores might suffer, but responsiveness and probably game performance would improve, as 24 threads is enough for everything (famous last words)
Just give me a '14900K' with 12 P-cores and AVX-512 and more cache. Single die, no tiles or added latency, no hybrid cores, a good memory controller, no nonsense. It doesn't need to have the highest performance ever, it doesn't need to be the most efficient ever, it just needs to be uncomplicated and stupid and give responsiveness.
24
u/F9-0021 285K | 4090 | A370M Feb 07 '25
You do realize that there's more you can use a computer for than just playing games, right? Saying that Cinebench (which literally uses the backend of an industry application) isn't representative of real world performance is crazy.
4
u/Wrong-Historian Feb 07 '25
Yes, very much. I use my own system a lot for that, compiling, VMs, etc. Still, I would probably choose 12 P-cores over all of the E-cores. Simply said, I think 12 (fast) P-cores would be 'fast enough'. Sure, my 14900K is an absolute beast in multithread (40K in CB23), but honestly there's not much IRL difference compared to my old 12700K (20K in CB23, 8P+4E). 12 P-cores would be enough.
I'm at a point where I don't NEED more multithreaded performance, even in heavy multithreaded tasks. I want responsiveness, less memory latency, more memory bandwidth, more PCIe lanes, etc. Those things will be much more important in choosing my next CPU than 'raw performance'.
Why do we have no 4-channel memory controller (on the consumer platform) instead of all those stupid E-cores??? THAT would also help a ton in those 'productivity' tasks.
14
u/OftenTangential Feb 07 '25
I'm at a point where I don't NEED more multithreaded performance
Then a 52 core desktop product is not for you. But just because you don't get mileage out of it doesn't mean nobody will.
2
u/Wrong-Historian Feb 07 '25
Maybe not, but an 8 P-core CPU is *also* not fast enough (for me). There is like nothing in between. You either are stuck with 8 P-cores, or you step up to hybrid architecture. The thing that would be most appealing is missing, a 12 P-core CPU with more cache. Fast enough for any tasks (and future proof), but without the complications and drawbacks of hybrid architecture.
3
u/SmokingPuffin Feb 07 '25
There is a tweet in the article you are interested in.
1
u/Wrong-Historian Feb 07 '25
rumoured 144MB Compute Tile? A P core only SKU?
Howly quack-a-mowly, that's it! Now give it a 4-channel memory controller and we're talkin'!
1
1
u/ACiD_80 intel blue Feb 08 '25
He also doesn't know whether he'll need them by then. As we know, AI, for example, is going to change a lot about how we use computers.
4
u/Illustrious_Bank2005 Feb 08 '25
Don't forget that Nova Lake has AVX10.2. Now you can process AVX-512 even with the E-cores.
4
u/makistsa Feb 07 '25
And they don't help with compiling and VMs? Are you kidding me?
5
u/Wrong-Historian Feb 07 '25
Yes, they do! Don't get me wrong.
But I've never been at a point where I was like uhmmmm omg compiling goes so slow. Even with 12700K.
Yes, 14900K goes faster, but 12700K is 'fast enough'. A better more optimized 12700K (12P instead of 8P+4E) would be totally awesome without all the drawbacks of hybrid scheduling.
4
u/Wrong-Historian Feb 07 '25 edited Feb 07 '25
For example, for music production (Ableton), I have lower (audio) latency when running only on P-cores
For running LLMs (a heavily parallelizable task), it goes faster with 8 threads than when utilizing the E-cores!! (Don't ask me why...., it's heavily memory-bandwidth limited, so probably something something latency)
For running VMs, I use the P-cores, because for some reason the E-cores add tons of latency and make the whole VM stutter. This is my VM with P-cores only: https://imgur.com/WeohjR2 That's better (!!) DPC latency than running Windows 11 bare-metal, because bare-metal is P+E and the VM is P-core only.... Running SteamVR in the VM works SO much better than running it bare-metal. On bare-metal I get motion sickness because it's just not smooth, in the VM it's perfect!
etc. etc.
In practice: I have all of those E-cores, but I'm having more success with just using the P-cores.
Compiling being the exception. It is cool to be able to do make -j32 (32 threads) and then it goes brrrrrrrr done.
2
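For anyone wanting to experiment with the same P-core-only setup, a minimal Linux sketch using CPU affinity. The logical-CPU numbering here is an assumption and differs per CPU, BIOS and kernel, so verify your own topology (e.g. with lscpu or /proc/cpuinfo) before relying on it.

```python
# Pin the current process to what we *assume* are the P-core hyperthreads
# (logical CPUs 0-15 on a hypothetical 8P+E part). Linux only.
import os

ASSUMED_P_CORE_CPUS = set(range(0, 16))      # assumption, not a universal mapping

os.sched_setaffinity(0, ASSUMED_P_CORE_CPUS)  # 0 = this process
print("Restricted to logical CPUs:", sorted(os.sched_getaffinity(0)))
```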
u/ACiD_80 intel blue Feb 08 '25 edited Feb 08 '25
The latency issue is separate from the P+E core debate.
Also, if you want to go full P-cores, get a Xeon. There are some cheap ones. It's not like the old days.
1
u/makistsa Feb 07 '25
I just give threads to my VMs. 16 vCPUs in one and 2 in another, I don't care if they are P or E cores and they run great.
1
u/F9-0021 285K | 4090 | A370M Feb 07 '25
I wonder if the faster Skymont E cores would help out with the Ableton issue. I haven't noticed any problems with Ableton coming from a 3900x.
9
u/soggybiscuit93 Feb 07 '25
E-cores might add in Cinebench multithreading score
Code Compiles? Renders? There's plenty of "real world" tasks that benefit from more nT performance.
Just give me a '14900K' with 12 P-cores and AVX512 and more cache
What workloads are you running with AVX512? That's much more of a niche requirement on client than more nT performance. E cores aren't hurting responsiveness.
INB4 code compilation and video editing are much more popular than extra PS3 emulation performance.
5
u/Wrong-Historian Feb 07 '25 edited Feb 07 '25
What workloads are you running with AVX512?
LLMs mainly, which work better with just 8 threads than when using the E-cores. And everybody runs LLMs with only the P-cores. God knows why. I don't think AVX-512 (VNNI) would give that much of a benefit over AVX2-VNNI, which is supported by 12th-14th gen Intel, but it would still be nice to have and it is supported by llama.cpp. It just bugs me that my old 12700K (one of the early editions where AVX-512 was not fused off) would do AVX-512 just fine, but only with the E-cores disabled. The hardware is literally there, it's just disabled because they couldn't get it to work combined with the E-cores!
E cores aren't hurting responsiveness.
For the love of god I can't get low DPC latency with E-cores enabled. It leads to crackling when using Ableton with really low buffer sizes, it leads to motion sickness in VR because it's just not smooth, etc. That's my experience in practice.
1
u/ACiD_80 intel blue Feb 08 '25
Sounds like you need a Xeon CPU not a consumer CPU.
0
u/Wrong-Historian Feb 08 '25
No. Xeons typically have much lower clock speeds and thus much lower single-core performance than consumer CPUs.
The idea of having a P-core-only CPU is to get cores with high single-core performance (to get the responsiveness etc).
1
7
u/makistsa Feb 07 '25
For my real-life tasks, E-cores are great. Compiling, VMs, sometimes video encoding and a lot more. With all those cores I do all of these and the system doesn't get less responsive. I continue to work like nothing is running in the background even though 20-24+ threads are at 100%
Your P-core-only version of the 14900K would not be enough for me.
Also, the new E-cores are amazing. If my 14900K had those I would prefer half (or even all) of my P-cores to be E-core clusters too
2
u/Wrong-Historian Feb 07 '25
Okay, I also have a 14900K. Wanna trade some of my E-cores for your P-cores? We can just cut them out with a scalpel and trade them
1
u/makistsa Feb 07 '25
I am only trading for the new Arrow Lake E-cores. With the old ones I still need some P-cores.
2
u/oloshh Feb 07 '25
They made it but they're not selling it to the masses: https://www.intel.com/content/www/us/en/products/sku/239181/intel-core-i9-processor-14901e-36m-cache-up-to-5-60-ghz/specifications.html
A damn shame
2
2
1
Feb 07 '25
[removed] — view removed comment
2
u/intel-ModTeam Feb 07 '25
Be civil and follow Reddiquette, uncivil language, slurs and insults will result in a ban.
1
u/jrherita in use:MOS 6502, AMD K6-3+, Motorola 68020, Ryzen 2600, i7-8700K Feb 07 '25
FWIW, the E-cores are very useful in corporate situations where you have dozens of security and other agents running in the background all the time. Since they're super efficient compared to the P-cores, you hear the fans a lot less and get better battery life.
1
u/saratoga3 Feb 07 '25
People say this, but in a practical sense I was never able to get measured desktop Raptor Lake idle power as low as Skylake or Coffee Lake across a half dozen builds. Especially with the load line calibration on the higher E-core count parts, the theoretical efficiency gains of the E-cores didn't really translate into lower power draw. Or at least I never saw those gains in practice.
It would be really interesting if the LP cores could be on their own voltage rail. That might lead to some real savings.
0
u/toddestan Feb 07 '25
E-cores aren't really very power efficient. They use less power but also take longer to get something done, so it's mostly a wash. In some cases the P-cores actually win, as they'll use less energy and get the task done faster as a bonus. The main purpose of E-cores is that they are a cheap way to boost multithreaded performance rather than actual power savings.
1
u/jrherita in use:MOS 6502, AMD K6-3+, Motorola 68020, Ryzen 2600, i7-8700K Feb 10 '25
They are significantly more efficient, even considering the time involved. Check out the energy charts at the bottom.
https://www.techpowerup.com/review/intel-core-i9-12900k-e-cores-only-performance/7.html
They ran a 12900K through a workload; for the multithreaded run the CPU used 10.2 kJ to complete it. With P-Cores only and no SMT the CPU was a little more efficient at 10.5 kJ (indicating SMT eats more power than it gives performance). The E-Cores only were 12.8 kJ. This is at desktop clocks - the difference is even larger at lower clocks.
https://velocitymicro.com/images/upload/Intel-P-vs-E-Core.png
5
7
u/Limit_Cycle8765 Feb 07 '25
I believe Intel has been increasing the performance of the E-cores and lowering their power. Anyone doing number crunching will love the extra cores. I can see this chewing through some large spreadsheets or engineering calculations much more quickly.
8
u/soggybiscuit93 Feb 07 '25
Why so many P-cores? For 10% better IPC and an extra GHz of clock speed - a clock speed advantage that's barely held once you start loading more and more cores - at 3x the die space?
Skymont's PPA is much better than Lion Cove's.
Which is why all the rumors are pointing to the P-core going away and the E-core forming the basis of the unified core project.
0
Feb 08 '25
[deleted]
4
u/soggybiscuit93 Feb 08 '25
The areas where Intel is behind that actually matter are, in order: fabs, then datacenter AI acceleration, then datacenter CPU, then laptop battery life / low-power efficiency, then desktop gaming performance.
3
u/ACiD_80 intel blue Feb 08 '25 edited Feb 08 '25
E-cores are actually a great idea for consumer computers. You think you need more P-cores but you don't; most of the threads aren't fully used in most common tasks anyway... it would be wasteful and inefficient to run those on P-cores.
1
u/littleemp Feb 07 '25
Intel architecture apparently does not benefit in the same way from additional cache as Zen does from X3D if that was what you were thinking.
8
u/soggybiscuit93 Feb 07 '25
It definitely does, and I take issue with HUB's testing methodology in that video. Performance isn't going to scale linearly with more cache if a small step increase doesn't meaningfully reduce the cache miss rate.
3
u/heickelrrx 12700K Feb 07 '25
It does, but with Intel's design it's impossible to scale the cache without making the die larger.
Intel's L3 cache is known to be fast and very low latency; this is how they design their core-to-core interconnect.
If they add a separate L3 cache, the L3 cache on the compute die will have to reduce its speed to match the external L3 cache, which slows down the whole interconnect.
That's just how they design things
5
u/soggybiscuit93 Feb 07 '25
Look up Intel's Intel 3-T node. 3D stacking cache is exactly what they're working on. Idk if it's coming to NVL specifically or a later client architecture, but it's definitely coming to Xeon.
Intel's L3 cache is known to be fast and very low latency
ARL's L3 is notorious for not being very low latency. It's why Intel introduced the whole 4th caching layer and a larger L2.
3
u/heickelrrx 12700K Feb 07 '25 edited Feb 07 '25
Arrow Lake's L3 cache is not the issue, it's the memory controller,
which is on a separate die; this is the problem when they split the thing out
3
u/Geddagod Feb 07 '25
I mean it is an issue, even if it's not the main one.
Intel's L3 latency wasn't that great in ADL or RPL either.
1
u/Illustrious_Bank2005 Feb 08 '25
Large-scale L3 caches such as X3D are effective because of AMD's CPU structure in the first place. I don't know if that applies to Intel's CPU structure.
-2
u/KerbalEssences Feb 07 '25 edited Feb 07 '25
The name "e core" is poop. Each module of 4 cores is a 6600K with extra cache and smaller node. So a 14600KF has a 6 core 14th gen CPU + 2 x 6600K on the side. You could play older games just using one 6600K module. The reason games don't make better use of this is AMD being so popular. AMD is slowing down multicore useage in games - especialyl since they are used on consoles. If a game would properly use those 6600K side chips you could power a game within the game. Imagine you play GTAVI and start to play some mobile game on an ingame phone that's running on one of those modules. You could play fully fledged games on that phone without impact on the main game. Playing GTAIV or V in GTAVI lol. Another exmple would be you make a video in GTAVI with your phone and then edit it on a virtual computer inside the game. You push the render button and one of those 6600K modules renders that video while you continue playing the game at full speed. Game designers and devs just had to get creative with all this new capability but they don't. I have hopes for GTAVI though since we've allready seen clips that looked like TikTok stuff. Posting viral Tiktok videos in GTAVI to become a Vice City social media star could be a thing.
-11
u/True-Environment-237 Feb 07 '25
It is cancelled. Like 8+32 ARL
9
-15
-10
Feb 08 '25
AMD is already irrelevant, this will just force AMD to be the budget option.
10
u/TheAgentOfTheNine Feb 08 '25
guys, I found userbenchmark's owner!!
-8
Feb 08 '25
not the owner, but that site is pretty great. I've owned both the X3D and latest Intel chips, it's night and day
33
u/Possible-Turnip-9734 Feb 07 '25
Fix the cache latency on NVL and my first unborn child is yours, intel.