Dr.W posted Yesterday 04:25 AM

GMKtec EVO-X1 MFF AI Mini PC: Ryzen AI 9 HX 370, 32GB LPDDR5x, 2TB SSD, OCulink $699.99

$700 (list $1,600, 56% off) at Micro Center
8 Comments 4,176 Views
Deal Details
Available in-store only. In stock at all stores (except AZ) at the time of posting.

SPECS:
  • AMD Ryzen AI 9 HX 370 2.0GHz Processor
  • 32GB LPDDR5x-7500 RAM
  • 2TB Solid State Drive
  • AMD Radeon 890M Graphics
  • Microsoft Windows 11 Pro
  • WiFi 6 802.11ax
  • Bluetooth 5.2


https://www.microcenter.com/produ...ai-mini-pc

Community Voting

Deal Score: +6 (Good Deal)

8 Comments

jasoncwlee · Yesterday 04:41 AM · 421 Posts · Joined Feb 2008
Is this good for running AI model locally using unified memory?
Dr.W (Original Poster, Pro) · Yesterday 05:08 AM · 11,202 Posts · Joined Nov 2020
Quote from jasoncwlee :
Is this good for running AI model locally using unified memory?
Yes, it's a very good choice to locally run AI models if you optimize (use quantization, efficient frameworks, keep models reasonable in size). It's especially good for inference, testing, and lighter workloads. For large models or many simultaneous sessions, a discrete GPU rig might do more, but this is solid for what many users want locally.
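To put rough numbers on "keep models reasonable in size" (my own back-of-envelope assumptions, not figures from the thread or GMKtec): a sketch of which 4-bit-quantized models would fit in this box's 32GB of unified memory.

```python
# Back-of-envelope sketch; the 20% overhead and ~28 GB usable figures are
# assumptions, not measurements.

def model_footprint_gb(params_b: float, bits: float, overhead: float = 1.2) -> float:
    """Weights (params * bits/8 bytes) plus ~20% for KV cache and runtime."""
    return params_b * bits / 8 * overhead

USABLE_GB = 28  # assume ~4 GB of the 32 GB reserved for the OS
for params in (7, 13, 32, 70):
    need = model_footprint_gb(params, 4)  # 4-bit quantization
    verdict = "fits" if need <= USABLE_GB else "too big"
    print(f"{params}B @ 4-bit: ~{need:.1f} GB -> {verdict}")
```

By this estimate, 4-bit models up to roughly the 30B class fit, while 70B-class models do not, which matches the "inference and lighter workloads" framing above.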
insaneXproval · Yesterday 01:08 PM · 813 Posts · Joined Apr 2014
Quote from Dr.W :
Yes, it's a very good choice to locally run AI models if you optimize (use quantization, efficient frameworks, keep models reasonable in size). It's especially good for inference, testing, and lighter workloads. For large models or many simultaneous sessions, a discrete GPU rig might do more, but this is solid for what many users want locally.
That OCuLink port can help with the dedicated-GPU part, and OCuLink generally performs better for eGPUs than USB4.
mesowetodit · Yesterday 04:32 PM · 661 Posts · Joined Jul 2007
man, i feel so old... there's USB4 now??? i can't keep up with this... where's my ps1...
cuoreesitante · Yesterday 05:40 PM · 1,937 Posts · Joined Aug 2008
Quote from Dr.W :
Yes, it's a very good choice to locally run AI models if you optimize (use quantization, efficient frameworks, keep models reasonable in size). It's especially good for inference, testing, and lighter workloads. For large models or many simultaneous sessions, a discrete GPU rig might do more, but this is solid for what many users want locally.

You'd be pretty limited as to what models you can run, though. Considering the RAM is not upgradeable, I'd opt for at least 64GB if you want to run local models with the next few years in mind.
sr27 · Yesterday 11:25 PM · 488 Posts · Joined Apr 2008
Quote from cuoreesitante :
You'd be pretty limited as to what models you can run, though. Considering the RAM is not upgradeable, I'd opt for at least 64GB if you want to run local models with the next few years in mind.
I'm no expert, but this seems like a bad config to run AI. The soldered RAM is a big blocker, plus you can only connect one GPU using OCuLink; maybe squeeze in another on USB4. Either way, this can probably run small models and even be used for tuning small models, but it's not suitable for above-hobby-level AI work.
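One hedged way to quantify "small models": single-stream LLM decoding is usually memory-bandwidth bound, since generating each token streams essentially the whole model through memory once, so tokens/sec is roughly achievable bandwidth divided by model size. The 120 GB/s peak and 60% efficiency below are assumptions for LPDDR5X-7500 on a 128-bit bus, not measurements of this machine.

```python
# Rough decode-rate ceiling for bandwidth-bound LLM inference (assumed numbers).

def est_tokens_per_sec(bw_gb_s: float, model_gb: float, eff: float = 0.6) -> float:
    """tokens/s ~= achievable bandwidth / bytes streamed per token (~model size)."""
    return bw_gb_s * eff / model_gb

PEAK_BW = 120.0  # assumed: 7500 MT/s * 16 bytes (128-bit bus)
print(round(est_tokens_per_sec(PEAK_BW, 3.5), 1))   # ~7B model at 4-bit
print(round(est_tokens_per_sec(PEAK_BW, 16.0), 1))  # ~32B model at 4-bit
```

By this estimate a ~7B quant would feel responsive, while a ~32B quant would be usable but slow, consistent with the "small models, hobby level" reading above.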
RyanL · Yesterday 11:31 PM · 6,385 Posts · Joined Feb 2008
I grabbed one of these a couple of days back for my son's birthday. Seemed like the best deal I could find on a mini PC with the 890M GPU. It was the last one in Parkville, MD, though (not sure if other stores have more stock).


lolrofllol · Yesterday 11:46 PM · 532 Posts · Joined Sep 2011
Quote from sr27 :
I'm no expert, but this seems like a bad config to run AI. The soldered RAM is a big blocker, plus you can only connect one GPU using OCuLink; maybe squeeze in another on USB4. Either way, this can probably run small models and even be used for tuning small models, but it's not suitable for above-hobby-level AI work.
Soldered memory is preferable here because LPDDR5X has more bandwidth than socketed DDR5, although LPDDR5X-7500 is not the fastest spec available. These unified-memory designs benefit heavily from the extra bandwidth of soldered RAM, and AI inference tends to be bound by bandwidth more than compute in these situations.

Besides, I'm pretty sure the Ryzen AI 300 series comes ONLY in soldered-memory designs - unsure whether that's a technical requirement or just a choice by AMD, but it does make sense. Even in a gaming scenario, we'd probably see LPDDR5X-7500 perform 20-30% better than socketed DDR5-5600.
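A quick sanity check on that ratio, using theoretical peaks (transfer rate times bus width in bytes; real-world throughput is lower, but the ratio roughly holds). The 128-bit bus widths are my assumptions for a typical dual-channel configuration.

```python
# Theoretical peak memory bandwidth in GB/s: MT/s * (bus width / 8 bytes) / 1000.

def peak_bw_gb_s(mt_s: int, bus_bits: int) -> float:
    return mt_s * (bus_bits / 8) / 1000

lpddr5x = peak_bw_gb_s(7500, 128)  # soldered LPDDR5X-7500
ddr5 = peak_bw_gb_s(5600, 128)     # socketed dual-channel DDR5-5600
print(f"LPDDR5X-7500: {lpddr5x:.1f} GB/s")
print(f"DDR5-5600:    {ddr5:.1f} GB/s")
print(f"advantage: {lpddr5x / ddr5 - 1:.0%}")
```

That puts the theoretical gap at about 34%, in the same ballpark as the 20-30% real-world estimate above.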
