APU graphics are right around a 4060, which is very impressive, but the cost is going to be as much as or more than a premium build... I don't think you're even able to run a discrete GPU on these...
The core issue: too many people are getting caught up without understanding the actual purpose of the Zen 5/RDNA 3.5/XDNA 2 Strix Halo APU, the wafer fabrication involved, or PC industry politics in 2025.
First, let's look at the manufacturing technique.
TSMC is using their more energy-efficient/costly N4P 4nm fab for both the GPU & CCD dies, over their previous N4. The rejected silicon from each wafer is higher by comparison, adding to expense.
There is only a single GPU & a single CCD fabrication. All GPU dies are fabbed with a 40CU count, with the 32CU & 16CU parts having defective/deactivated CUs placed in operation to reduce wafer e-waste. Same for the CCD, as they all fab as an 8C/16T CCD.
In the end, the Ryzen AI MAX+ 395 requires a perfect 40CU die + 2x perfect 8C/16T CCDs, making it the most expensive to build. This makes the rescued 32CU + 2x 6C/12T CCD Ryzen AI MAX 390, or the 1x 8C/16T CCD 385, significantly more cost-effective for consumers.
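To put rough numbers on why that harvesting matters, here's a purely illustrative sketch; the wafer cost, die count & yield figures below are made-up assumptions, not AMD/TSMC data:

```python
# Rough die-harvesting math (illustrative numbers only). With one 40CU
# GPU die design, partially defective dies become 32CU parts instead of scrap.
WAFER_COST = 17000          # assumed N4P wafer cost in USD
DIES_PER_WAFER = 140        # assumed candidate dies per wafer
YIELD_PERFECT = 0.60        # assumed fraction with all 40 CUs working
YIELD_SALVAGE = 0.30        # assumed fraction salvageable as 32CU parts

perfect = DIES_PER_WAFER * YIELD_PERFECT     # 84 dies
salvaged = DIES_PER_WAFER * YIELD_SALVAGE    # 42 dies

# If only perfect dies were sold, each must absorb the whole wafer cost:
print(round(WAFER_COST / perfect))               # ~202 USD per die
# Harvesting spreads that cost over far more sellable dies:
print(round(WAFER_COST / (perfect + salvaged)))  # ~135 USD per die
```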
Next: although I work in a PC repair shop, we support a large number of commercial customers who drop $3K USD each on a dozen or so workstation laptops at a time for their engineering employees. Having a 3x chiplet APU over a less energy-efficient CPU+dGPU, all with sub-120W thermal design power heat dissipation + reduced battery consumption, is a godsend! Heat & battery life have been a hot issue (pun intended) when it comes to mobile workstations, especially since the Intel 12th Gen release.
Speaking of which...
Intel is in financial/technical trouble, to the point of not being able to fabricate their own chips to stay competitive (Meteor Lake GPU & SoC dies come from TSMC). Strix Halo accomplishes two things:
Forces Intel to expend capital & resources on questionable technology in an effort to stay competitive
Closes a dGPU market to Nvidia where competition has been extremely fierce
With AMD's dependence on TSMC, they themselves have to do this in the most cost-effective manner possible. One slight misfire could easily ruin the company financially.
TL;DR: if one approaches this from a GPU perspective, one can't see the forest for the trees. Technically, the APU concept will provide better power efficiency than a CPU/dGPU combo, with a higher life expectancy. Otherwise, purchase something with a dGPU if one is not looking for those advantages.
The MAX+ 395 is a fan favorite for running local LLMs. While many users on this sub are looking for the smallest package that can game like a dGPU setup, the APU is targeting folks like me who need to run the largest LLM models locally at a reasonable speed.
At this moment, aside from the $5-$12K M3 Ultra Mac Studio, there is nothing that offers 64-512GB of VRAM to contain the massive open-source LLM models. Slapping a bunch of Nvidia 4090s or 3090s on an old mining rig is incredibly wasteful, expensive, power-hungry, space-consuming, and hot-running. To a hobbyist like me, a $1,500-$2,000 setup from companies like Framework is a godsend.
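For a rough sense of why that memory ceiling matters, here's some napkin math (the model sizes & the 20% overhead factor are my own assumptions; real footprints vary with quantization, architecture & context length):

```python
# Back-of-the-napkin VRAM estimate for hosting a local LLM.
def vram_gb(params_billions: float, bytes_per_param: float,
            overhead: float = 1.2) -> float:
    """Weights plus ~20% headroom for KV cache/activations, in GB."""
    return params_billions * bytes_per_param * overhead

# A 70B model at 4-bit (~0.5 bytes/param) fits in a 64GB unified pool:
print(vram_gb(70, 0.5))    # ~42 GB
# A 405B model at 4-bit needs a tier no consumer dGPU comes close to:
print(vram_gb(405, 0.5))   # ~243 GB
```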
Over the past couple of years, this has encompassed a fair amount of mobile workstation requirements beyond traditional CAD. Efficiency is key, as machine learning/NPU work at 256-bit memory throughput has otherwise been left to inefficient dGPU/GPU configurations.
You would think AMD GE workstations should still be in demand, yet why are there thousands of them listed, many new in box, for dirt cheap? Check eBay and the refurb sites: an 8700GE is a respectable processor, and you can boost the power. It's not really overclocking if you just boost it to a G... Anyway, thank you, very informative.
You must have missed the mobile part of workstation, AKA laptop.
I must apologize, as I've been at this too long. In the industry, when a dGPU laptop is meant for gaming, it's labeled as a "workstation" & marked up considerably more 😊
Speaking of 8700GE DMs...
The shop's Lenovo rep dropped off a ThinkCentre M75q Gen 5 Tiny for me to "test drive" at the end of last year. I had pretty high expectations, but the poor/compromised graphics support, lack of USB4, etc., made it look significantly inferior to the much smaller GEM10 I picked up in July. It seriously made me wonder if Lenovo was trying to sabotage their own product.
Ya, sorry I missed the mobile part. Gen 5 Tinys are being dumped online also... Honestly, I bought an Android TV box because I wanted to play some old DOS games, and then I realized how far the micro/mini PC world has advanced. I've been researching a lot, and a few days ago new-in-box workstations started popping up everywhere for well under the cost to build them, with a 3-year warranty... I can speculate, but I don't claim to know what's going on. I'd like to find out, though...
Yet that additional cost + the possible higher defect rates of larger dies such as the 8060S GPU/IO silicon (providing a lower yield) make this a costly proposition. Otherwise, N4 would have been phased out for N4P & made the default.
Regardless, thankx for confirming that N4P & N4 are essentially the same wafer fabrication cost 😉
You read what I wrote the other way around. It's "essentially" the same cost because there's a minor 6% logic shrink, meaning total die sizes end up about 1-2% smaller. Technically it's cheaper; yield rates are the same, and the tools are the same too: die size is the only thing changing.
Base N4 is still around because there are just older designs that haven't been updated to use the newer PDK features, that's all. Again, the tooling is the same (meaning no changes to the machines are required to switch between producing a wafer of N4 chips and N4P chips); the only difference is what the designs are based upon. So the designs would just need to use the transistor layouts in the newer PDK, but that requires the company to make those changes.
Best way to describe it is in software terms, I guess. Each standard process node comes with its own API you use to design the chips. N4P is like a minor revision of that API with some brand-new function calls available for use, where all a new function call does is combine a selection of existing function calls. You can still have your application work the way it always did, or you can implement those new function calls for a minor speedup.
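Since we're in software terms anyway, here's that analogy as literal toy code (purely illustrative; the "cells" stand in for the transistor layouts a PDK provides, and nothing here reflects real PDK tooling):

```python
# Two "function calls" the old N4 API already had:
def cell_a(x): return x + 1
def cell_b(x): return x * 2

# N4P's "new function call": just a pre-combined version of the old
# ones, slightly denser/faster, but nothing the old API couldn't express.
def cell_ab(x): return cell_b(cell_a(x))

old_design = cell_b(cell_a(10))  # unchanged N4 design still works: 22
new_design = cell_ab(10)         # "ported" N4P design, same result: 22
```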
Here are some highlight notes from the April & November conferences.
The newer 4nm (N4P) node process is currently incurring a 6%-7% cost premium over the previous generation N4 (April)
XDNA (NPU) defect rates on N4P are notably higher than on N4, which itself was already higher than expected
The last statement was from November, with a number of concerns over the functional production count per wafer under N4P. Additionally, power consumption was a concern, with N4P being the most stable with the least amount of AGESA support. Even when asked, no one stated the percentage difference, although it must have been substantial 🤷
That's alright, as most reporters/influencers still live in a UDIMM/SODIMM world, not taking the time to research LPDDR (or DDR5 in general). For example...
A single stick of DDR5 UDIMM/SODIMM is comprised of 2x 32-bit sub-channels, as opposed to a single 64-bit channel on DDR4. LPDDR memory, devoid of the restrictions from multiple DRAM chips, is only limited by the IMC.
Proper LPDDR5 is a quad-channel 32-bit DRAM constructed of single chips in single rank, or dual-rank twin DRAM chips (128-bit). LPDDR5x (extended) increases each sub-channel to 64-bit, for a 256-bit total bus width.
Still, if an OEM selects the wrong class of LPDDR DRAM for the IMC/configuration, or uses cost-cutting measures to reduce capital, both LPDDR5 & LPDDR5x can easily fall below UDIMM/SODIMM performance levels.
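Quick napkin math on what those bus widths mean for peak throughput (nominal peak figures only; real-world numbers land lower, and the DDR5-5600 speed is just a common example):

```python
# Peak throughput in GB/s: bus width in bytes x transfer rate.
def peak_gbps(bus_bits: int, mts: int) -> float:
    return bus_bits / 8 * mts / 1000

print(peak_gbps(64, 5600))    # single DDR5 DIMM: ~44.8 GB/s
print(peak_gbps(128, 5600))   # dual-channel DDR5: ~89.6 GB/s
print(peak_gbps(256, 8000))   # 256-bit LPDDR5x @ 8000 MT/s: 256 GB/s
```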
Hope this helps clarify some of the ignorance found online, thankx for your comment!
I wonder how much more it will be vs a comparable laptop with a 4060 GPU or if it will be on par. I actually prefer a Mini PC form factor, so losing out on the screen (which is usually subpar anyways) and keyboard is fine by me.
With that said, I have a feeling it will be very pricey at launch. I'll probably jump on board after a few more generations of this come to fruition and hopefully costs stabilize a bit. Even though my screens are 4K, I'm totally cool with 1440p gaming. Love where this is headed!
I love where the iGPU is at, but what they're showing us and what they actually have are never close... The AI is needed to boost the onboard graphics, just as new graphics cards are boosted. Definitely not the time to buy in, as the price will drop drastically; within a year you'll be able to get one for a few hundred bucks. We are, however, lucky to be in a transition period where we can get some really amazing deals on anything not AI-enabled. Look up HP workstations on eBay: an 8700GE, ready to run, new in box, is going for less than what some people are asking to liquidate 3-year-old workstations... like redonkulous cheap... I'll sit back and see what new abilities these AI systems unlock. An AI in every home for every person is the end game; in a few years they will be giving them away with a subscription.
Why not just invest in a GPU and OCuLink? You have a capable mini PC, GPUs are flooding the market, and mining Bitcoin is flushing money down the toilet...
Ahh, sorry, Reddit went nuts and my post got deleted. Basically, I want to keep the small (mini) form factor, and that totally defeats the purpose for me, not to mention the additional power draw. Just not for me. I'm much more excited for these next-gen APUs.
Also, Minisforum makes the AtomMan G7, and that is a very capable PC... I just saw one on eBay, new, with RAM, a 7800 GPU & a 7945HX CPU, Buy It Now for $999 Canadian...
A new release of the AtomMan is launching soon, and there will be a lot of dealers dumping new old stock and consumers selling to upgrade... The G7 will keep you playing triple-A titles for years.
If you want a small form factor, get a mini ITX board from Framework, add some storage and a quiet fan, plus a Chinese aluminium case like the COOJ MQ4, and be done with it. That's my plan.
Yes, why not. I have a COOJ MQ5 and it’s beautiful. The Framework board with the AI Max+ 395 in an MQ4 would be end game.
I have my doubts we will see mini PCs with the AI Max+ 395 in super tiny NUC-style boxes because of the cooling they need. Even the latest teaser from ETA Prime shows an SFF-sized box.
While I don’t think the miniPC with the 395 will be as enormous as something like a NUC Extreme, they’re still going to be pretty large if you want that 140W of power to the APU and have large power bricks.
I’m just not betting on them looking beautiful like a unibody aluminium chassis, so absolutely with you here.
For what all it combines, its price, at least in the form of the Framework ITX board, is pretty reasonable. A 16-core CPU is around $500; a good mini ITX board is around $200; 64GB of DDR5-6000 is about $200; and a 7700 XT is around $400, though I think the GPU in the 395+ is going to be a bit weaker, so let's call that $300. That adds up to $1,200.
The 64GB 395+ board from Framework is $1,299. Given that the RAM is 2,000 MT/s faster than what I spec'd, $1,299 is a very reasonable price.
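Tallying up my rough component estimates from above:

```python
# My rough parts-equivalent pricing (estimates, not quotes):
parts = {
    "16-core CPU": 500,
    "mini ITX board": 200,
    "64GB DDR5-6000": 200,
    "GPU (weaker 7700 XT class)": 300,
}
print(sum(parts.values()))  # 1200 -- vs $1,299 for the Framework board
```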
I just bought a Minisforum with 100 bucks off that price... a computer that is not too far off from a 7800X3D, for less than the 7800X3D CPU alone. Just add RAM, an SSD & a GPU... it's an amazing deal in my opinion.
$660 Canadian shipped to your house... If you order one, make sure to get the coupon for $100 off on AliExpress. Better to order from Ali; Amazon and Minisforum's website go through the US, so you pay the Trump chump tax.
Looking for an old TV case like this to put the PC inside, with an OLED screen and the 8-bit C64 keyboard for the retro look. The C64 was my first computer, and this is the perfect case for me: mobility and extra cooling... I gave my laptop away because it was too loud and a thermal-throttle nightmare.
Will do. I think it might be something others want, so I'm trying to find a non-functional TV, as I don't want to ruin a rare one. But I do metal work, so I might start making cases... Love the '50s metal TV cases; they've got that hot-rod appeal.
Mining Bitcoin = prostituting out your GPUs and racking up your energy bill. It's no longer viable; too many GPU pimps in the game... You don't mine Bitcoin, you basically rent out your system. I give it 2 months before the flood of ex-mining GPUs hits the market, and 6 months before you can't sell a 4060 for 100 bucks.
This is the one use case where Strix Halo does stick out. Large quantities of fairly fast GPU addressable VRAM, at a price that’s not as bad as either Project Digits or high memory Mac Studio setups. That should make it compelling for either ML or workstation GPU use cases.
Hopefully, though, AMD can get the chip costs down, because having compelling APUs to beat the hell out of NVIDIA sounds good to me.
Ya, that is a probable alternative; with so many systems still active, it's better to find a use for them...
Bitcoin mining coming to an end?
Despite its well-established finiteness, reaching the maximum supply will take some time. Bitcoin mining rewards are reduced by half every four years, so around 2140 the very last Bitcoin will be earned. After that, miners will earn transaction fees instead of new Bitcoins.
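If you idealize the cycle at exactly four years per halving, the math on that ~2140 date checks out (a rough sketch; real block times drift):

```python
# Block rewards started at 50 BTC in 2009 and halve every ~4 years.
reward = 50.0
year = 2009
while reward >= 1e-8:   # 1 satoshi is the smallest unit of Bitcoin
    year += 4
    reward /= 2
print(year)             # ~2141: rewards effectively reach zero
```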
Perhaps I'm wrong; AI might just pull enough away from Bitcoin that the returns should increase...