Slickdeals is community-supported.  We may get paid by brands for deals, including promoted items.
Dr.W posted Sep 20, 2025 04:25 AM

GMKtec EVO-X1 MFF AI Mini PC: Ryzen AI 9 HX 370, 32GB LPDDR5x, 2TB SSD, OCulink $699.99

$700 (list $1,600, 56% off) at Micro Center
12 Comments · 5,939 Views
Deal Details
Available in-store only. In stock at all stores (except AZ) at the time of posting.

SPECS:
  • AMD Ryzen AI 9 HX 370 2.0GHz Processor
  • 32GB LPDDR5x-7500 RAM
  • 2TB Solid State Drive
  • AMD Radeon 890M Graphics
  • Microsoft Windows 11 Pro
  • WiFi 6 802.11ax
  • Bluetooth 5.2


https://www.microcenter.com/produ...ai-mini-pc

Community Voting

Deal Score: +12 (Good Deal)
12 Comments

jasoncwlee · Sep 20, 2025 04:41 AM · 421 Posts · Joined Feb 2008
Is this good for running AI model locally using unified memory?
Dr.W (Original Poster, Pro) · Sep 20, 2025 05:08 AM · 11,272 Posts · Joined Nov 2020
Quote from jasoncwlee :
Is this good for running AI model locally using unified memory?
Yes, it's a very good choice to locally run AI models if you optimize (use quantization, efficient frameworks, keep models reasonable in size). It's especially good for inference, testing, and lighter workloads. For large models or many simultaneous sessions, a discrete GPU rig might do more, but this is solid for what many users want locally.
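To put rough numbers on the quantization point (a back-of-the-envelope sketch; the 7B parameter count, bits-per-weight figures, and overhead multiplier are my own illustrative assumptions, not from the post):

```python
def model_mem_gb(params_billions, bits_per_weight, overhead=1.2):
    """Approximate RAM needed to hold an LLM for inference.

    params_billions: parameter count in billions (e.g. 7 for a 7B model)
    bits_per_weight: 16 for FP16, ~4.5 for a Q4_K_M-style quantization
    overhead: rough multiplier for KV cache and runtime buffers
    """
    return params_billions * bits_per_weight / 8 * overhead

print(round(model_mem_gb(7, 16), 1))   # FP16 7B: ~16.8 GB
print(round(model_mem_gb(7, 4.5), 1))  # 4-bit 7B: ~4.7 GB
```

On a 32GB unified-memory machine, that leaves comfortable OS headroom for 4-bit models up to very roughly the 30B class, while FP16 weights become impractical much sooner.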
1
insaneXproval · Sep 20, 2025 01:08 PM · 814 Posts · Joined Apr 2014
Quote from Dr.W :
Yes, it's a very good choice to locally run AI models if you optimize (use quantization, efficient frameworks, keep models reasonable in size). It's especially good for inference, testing, and lighter workloads. For large models or many simultaneous sessions, a discrete GPU rig might do more, but this is solid for what many users want locally.
That Oculink port can help with the dedicated GPU part. And Oculink generally has better performance for eGPUs than USB4.
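A rough link-bandwidth comparison backs this up (a sketch under assumptions not stated in the post: that this OCuLink port carries PCIe 4.0 x4, and that USB4's PCIe tunneling tops out around 32 Gbps of its 40 Gbps link):

```python
def pcie4_gbytes(lanes):
    """Approximate usable PCIe 4.0 throughput in GB/s (~2 GB/s per lane)."""
    return 2.0 * lanes

oculink = pcie4_gbytes(4)  # assumed PCIe 4.0 x4 -> ~8 GB/s
usb4 = 32 / 8              # ~32 Gbps of PCIe tunneling -> ~4 GB/s
print(oculink, usb4)
```

Roughly twice the bandwidth to an eGPU, which matters most when model weights or textures stream over the link.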
1
mesowetodit · Sep 20, 2025 04:32 PM · 661 Posts · Joined Jul 2007
man, i feel so old...there's USB4 now??? i can't keep up with this...where's my ps1...
1
2
cuoreesitante · Sep 20, 2025 05:40 PM · 1,939 Posts · Joined Aug 2008
Quote from Dr.W :
Yes, it's a very good choice to locally run AI models if you optimize (use quantization, efficient frameworks, keep models reasonable in size). It's especially good for inference, testing, and lighter workloads. For large models or many simultaneous sessions, a discrete GPU rig might do more, but this is solid for what many users want locally.

You'd be pretty limited as to what models you can run though. Considering the RAM is not upgradable, I'd opt for at least 64GB if you want to run local models with the next few years in mind.
sr27 · Sep 20, 2025 11:25 PM · 491 Posts · Joined Apr 2008
Quote from cuoreesitante :
Quote from Dr.W :
Yes, it's a very good choice to locally run AI models if you optimize (use quantization, efficient frameworks, keep models reasonable in size). It's especially good for inference, testing, and lighter workloads. For large models or many simultaneous sessions, a discrete GPU rig might do more, but this is solid for what many users want locally.

You'd be pretty limited as to what models you can run though. Considering the RAM is not upgradable, I'd opt for at least 64GB if you want to run local models with the next few years in mind.
I am no expert, but this seems like a bad config to run AI. The soldered RAM is a big blocker. Plus you can only connect one GPU using OCuLink; maybe squeeze in another on USB4. Either way, this can probably run small models and even be used for tuning small models, but it's not suitable for anything above hobby-level AI.
1
RyanL · Sep 20, 2025 11:31 PM · 6,386 Posts · Joined Feb 2008
I grabbed one of these a couple of days back for my son's birthday. Seemed like the best deal I could find on a mini PC with the 890M GPU. It was the last one in Parkville, MD, though (not sure if other stores have more stock).


lolrofllol · Sep 20, 2025 11:46 PM · 533 Posts · Joined Sep 2011
Quote from sr27 :
I am no expert, but this seems like a bad config to run AI. The soldered RAM is a big blocker. Plus you can only connect one GPU using OCuLink; maybe squeeze in another on USB4. Either way, this can probably run small models and even be used for tuning small models, but it's not suitable for anything above hobby-level AI.
Soldered memory is preferable because LPDDR5X has more bandwidth than socketed DDR5. Although, LPDDR5x-7500 is not the fastest spec available. These unified memory designs benefit heavily from the additional memory bandwidth of soldered RAM, and AI inference tends to be bound by bandwidth more than compute in these situations.

Besides, I'm pretty sure the Ryzen AI 300 series comes ONLY in soldered-memory designs - unsure if that's a technical requirement or just arbitrarily set by AMD, but it does make sense. Even in a gaming scenario, we'd probably see LPDDR5X-7500 perform 20-30% better than socketed DDR5-5600.
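For what it's worth, the 20-30% figure roughly matches the theoretical peaks (a sketch; the common 128-bit total bus width for both configurations is my assumption, not stated in the post):

```python
def peak_bandwidth_gbs(megatransfers, bus_bits=128):
    """Theoretical peak memory bandwidth in GB/s.

    megatransfers: transfer rate, e.g. 7500 for LPDDR5X-7500
    bus_bits: total width; 128-bit covers both dual-channel DDR5 and
              the typical Strix Point LPDDR5X interface (assumed)
    """
    return megatransfers * (bus_bits // 8) / 1000

lp  = peak_bandwidth_gbs(7500)  # LPDDR5X-7500 -> 120.0 GB/s
ddr = peak_bandwidth_gbs(5600)  # DDR5-5600    -> 89.6 GB/s
print(round(lp / ddr - 1, 2))   # ~0.34, i.e. ~34% more peak bandwidth
```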
1
lennonst · Sep 21, 2025 10:21 AM · 2,784 Posts · Joined Jun 2012
Quote from Dr.W :
Yes, it's a very good choice to locally run AI models if you optimize (use quantization, efficient frameworks, keep models reasonable in size). It's especially good for inference, testing, and lighter workloads. For large models or many simultaneous sessions, a discrete GPU rig might do more, but this is solid for what many users want locally.
You could just say you're not sure, since clearly that is the case. 32GB is garbage for running useful models.
2
lennonst · Sep 21, 2025 10:24 AM · 2,784 Posts · Joined Jun 2012
Quote from lolrofllol :
Soldered memory is preferable because LPDDR5X has more bandwidth than socketed DDR5. Although, LPDDR5x-7500 is not the fastest spec available. These unified memory designs benefit heavily from the additional memory bandwidth of soldered RAM, and AI inference tends to be bound by bandwidth more than compute in these situations.

Besides, I'm pretty sure the Ryzen AI 300 series comes ONLY in soldered-memory designs - unsure if that's a technical requirement or just arbitrarily set by AMD, but it does make sense. Even in a gaming scenario, we'd probably see LPDDR5X-7500 perform 20-30% better than socketed DDR5-5600.
You have no idea what he was posting about, lol. A measly 32gb of soldered ram is garbage for this thing as it will be the limiting factor in running large LLM models which is the entire point of his post.
2
lolrofllol · Yesterday 04:56 PM · 533 Posts · Joined Sep 2011
Quote from lennonst :
You have no idea what he was posting about, lol. A measly 32gb of soldered ram is garbage for this thing as it will be the limiting factor in running large LLM models which is the entire point of his post.
You clearly have never tried to run any models on your own hardware. More capacity is worthless without sufficient memory bandwidth, and on Strix Point, we're severely bandwidth limited. Soldered LPDDR5x in this case will provide somewhere around 20-30% more bandwidth than socketed DDR5, hence the preference.

Sure, we can technically run larger models with more socketed RAM, but it'll be so painfully slow that it's not actually useful in any sort of real-time use case.

32GB is plenty enough to run the sort of models that can actually run fast enough on this level of compute and bandwidth.
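The bandwidth-bound argument can be made concrete: during decode, a dense model streams essentially all of its weights from memory for every generated token (a crude sketch; the ~120 GB/s peak, 60% efficiency factor, and model sizes are my assumptions):

```python
def decode_tokens_per_s(peak_gbs, model_gb, efficiency=0.6):
    """Crude upper bound on single-stream decode speed for a dense LLM.

    Each generated token reads roughly the whole weight set from memory,
    so decode speed ~ achieved bandwidth / model size in bytes.
    efficiency: fraction of peak bandwidth realistically sustained.
    """
    return peak_gbs * efficiency / model_gb

# ~120 GB/s peak (assumed), 4-bit 7B model (~4 GB of weights)
print(round(decode_tokens_per_s(120, 4), 1))   # ~18 tok/s
# a 4-bit ~30B model (~17 GB) drops to single digits
print(round(decode_tokens_per_s(120, 17), 1))  # ~4.2 tok/s
```

This is why, on a bandwidth-limited part, extra capacity beyond what fast-enough models need buys relatively little for interactive use.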
lennonst · Yesterday 05:38 PM · 2,784 Posts · Joined Jun 2012
Quote from lolrofllol :
You clearly have never tried to run any models on your own hardware. More capacity is worthless without sufficient memory bandwidth, and on Strix Point, we're severely bandwidth limited. Soldered LPDDR5x in this case will provide somewhere around 20-30% more bandwidth than socketed DDR5, hence the preference.

Sure, we can technically run larger models with more socketed RAM, but it'll be so painfully slow that it's not actually useful in any sort of real-time use case.

32GB is plenty enough to run the sort of models that can actually run fast enough on this level of compute and bandwidth.
His point has still gone over your head, no one is arguing soldered ram isn't better.

But you clearly have never run a useful model locally, as 32gb is only enough to run gimmick level LLMs.
