Slickdeals is community-supported.  We may get paid by brands for deals, including promoted items.
Forum Thread · Dr.W posted Today 04:25 AM

GMKtec EVO-X1 MFF AI Mini PC: Ryzen AI 9 HX 370, 32GB LPDDR5x, 2TB SSD, OCulink $699.99

$700 (was $1,600) · 56% off
Micro Center
3 Comments · 2,847 Views
Deal Details
Available in-store only. In stock at all stores (except AZ) at time of posting.

SPECS:
  • AMD Ryzen AI 9 HX 370 2.0GHz Processor
  • 32GB LPDDR5x-7500 RAM
  • 2TB Solid State Drive
  • AMD Radeon 890M Graphics
  • Microsoft Windows 11 Pro
  • WiFi 6 802.11ax
  • Bluetooth 5.2


https://www.microcenter.com/produ...ai-mini-pc

Community Voting

Deal Score: +5

3 Comments

jasoncwlee · Today 04:41 AM · 421 Posts · Joined Feb 2008
Is this good for running AI model locally using unified memory?
Dr.W (Original Poster, Pro) · Today 05:08 AM · 11,199 Posts · Joined Nov 2020
Quote from jasoncwlee :
Is this good for running AI model locally using unified memory?
Yes, it's a very good choice to locally run AI models if you optimize (use quantization, efficient frameworks, keep models reasonable in size). It's especially good for inference, testing, and lighter workloads. For large models or many simultaneous sessions, a discrete GPU rig might do more, but this is solid for what many users want locally.
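To put rough numbers on "keep models reasonable in size": here is a hypothetical back-of-envelope estimate of which quantized model sizes fit in a 32GB unified-memory budget. The 20% overhead factor and the 8GB reserved for the OS are assumptions for illustration, not measured figures.

```python
# Hypothetical sketch: estimate RAM needed to load a quantized LLM's
# weights, with ~20% overhead assumed for KV cache and runtime buffers.

def model_memory_gb(params_billion: float, bits_per_weight: float,
                    overhead: float = 1.2) -> float:
    """Approximate GB of memory to hold the model weights plus overhead."""
    bytes_per_weight = bits_per_weight / 8
    return params_billion * 1e9 * bytes_per_weight * overhead / 1e9

# 32 GB total unified memory; assume ~8 GB reserved for the OS and apps
budget_gb = 32 - 8

for params in (7, 13, 32, 70):
    need = model_memory_gb(params, bits_per_weight=4)  # 4-bit quantization
    verdict = "fits" if need <= budget_gb else "too big"
    print(f"{params}B @ 4-bit: ~{need:.1f} GB -> {verdict}")
```

Under these assumptions a 4-bit 32B model (~19 GB) still fits, while a 70B model (~42 GB) does not, which matches the advice to keep models reasonable in size.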
insaneXproval · Today 01:08 PM · 811 Posts · Joined Apr 2014
Quote from Dr.W :
Yes, it's a very good choice to locally run AI models if you optimize (use quantization, efficient frameworks, keep models reasonable in size). It's especially good for inference, testing, and lighter workloads. For large models or many simultaneous sessions, a discrete GPU rig might do more, but this is solid for what many users want locally.
That Oculink port can help with the dedicated GPU part. And Oculink generally has better performance for eGPUs than USB4.