Dr.W posted Sep 20, 2025 04:25 AM
GMKtec EVO-X1 MFF AI Mini PC: Ryzen AI 9 HX 370, 32GB LPDDR5x, 2TB SSD, OCulink $699.99
$699.99 (was $1,600, 56% off) at Micro Center
You'd be pretty limited as to what models you can run, though. Considering the RAM is not upgradable, I'd opt for at least 64GB if you want to run local models with the next few years in mind.
Yes, it's a very good choice for running AI models locally if you optimize: use quantization, efficient frameworks, and keep model sizes reasonable. It's especially good for inference, testing, and lighter workloads. For large models or many simultaneous sessions, a discrete GPU rig would do more, but this is solid for what many users want locally.
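For anyone wondering what "optimize" looks like in practice, here's a minimal sketch of loading a quantized model with llama-cpp-python; the model file and settings below are placeholders, not recommendations:

```python
# Minimal sketch: running a quantized GGUF model locally with llama-cpp-python.
# Assumes llama-cpp-python is installed and you've already downloaded a
# quantized GGUF file (the path below is a hypothetical placeholder).
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3.1-8b-instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,      # context window; larger windows cost more RAM
    n_threads=8,     # tune to your core count
)

out = llm("Why does quantization help on small machines?", max_tokens=128)
print(out["choices"][0]["text"])
```

A 4-bit quant of an 8B model fits in roughly 5GB, which is the kind of footprint that leaves headroom on a 32GB machine.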
Besides, I'm pretty sure the Ryzen AI 300 series comes ONLY in soldered-memory designs - unsure whether that's a technical requirement or just a choice by AMD, but it does make sense. Even in a gaming scenario, we'd probably see LPDDR5X-7500 perform 20-30% better than socketed DDR5-5600.
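For a rough sense of that gap: at equal bus width, raw bandwidth scales with transfer rate, so the ratio is easy to estimate (a back-of-the-envelope sketch, not a benchmark):

```python
# Back-of-the-envelope: at equal bus width, raw memory bandwidth scales
# with the transfer rate, so the LPDDR5X vs DDR5 gap is roughly:
lpddr5x_mts = 7500  # MT/s
ddr5_mts = 5600     # MT/s
print(f"~{(lpddr5x_mts / ddr5_mts - 1) * 100:.0f}% more raw bandwidth")  # ~34%
```

Real-world gains land below the raw ~34% once latency and timings factor in, which lines up with the 20-30% figure.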
Sure, you can technically run larger models with more socketed RAM, but they'll be so painfully slow that it's not actually useful in any sort of real-time use case.
32GB is plenty to run the sort of models that can actually run fast enough on this level of compute and bandwidth.
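To put numbers on "painfully slow": token generation is memory-bandwidth-bound, since each generated token reads roughly the whole model out of RAM. A back-of-the-envelope sketch, assuming ~120 GB/s for LPDDR5X-7500 on a 128-bit bus (7500 MT/s x 16 bytes) and typical Q4 model sizes:

```python
# Very rough upper bound on generation speed: each token reads the whole
# model from memory once, so tok/s <= bandwidth / model size.
# Assumes ~120 GB/s (LPDDR5X-7500 on a 128-bit bus: 7.5 GT/s * 16 bytes).
bandwidth_gbs = 7.5 * 16  # ~120 GB/s

for name, size_gb in [("8B @ Q4 (~5 GB)", 5),
                      ("32B @ Q4 (~19 GB)", 19),
                      ("70B @ Q4 (~40 GB)", 40)]:
    print(f"{name}: <= {bandwidth_gbs / size_gb:.1f} tok/s")
```

An 8B quant stays comfortably interactive at ~24 tok/s, while a 40GB 70B quant tops out around 3 tok/s before any compute overhead, which is why more capacity without more bandwidth doesn't buy usable speed.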
But you clearly have never run a useful model locally, as 32GB is only enough to run gimmick-level LLMs.