Dr.W posted Sep 28, 2025 03:27 PM
GMKtec EVO-X2 AI Mini PC: Ryzen AI Max+ 395, 128GB LPDDR5X-8000, 2TB SSD, Radeon 8060S
$1,795 ($2,800 list, 35% off)
This was never $1,699 at pre-order. The price was $1,799 for the 64GB or 96GB RAM variant. Search it in the SD search bar and see for yourself.
The high RAM is for running AI locally. You can download LLMs (Large Language Models) and run your own "ChatGPT" on your computer. To generate each token, an LLM has to read essentially all of its weights from memory, over and over, so memory bandwidth is the bottleneck:
SSD read bandwidth: ~3-7 GB/s
System RAM: ~50-250 GB/s (this machine's 256-bit LPDDR5X-8000 is around 256 GB/s)
GPU VRAM: ~500-1,500 GB/s
So that's why models need to be stored in RAM (or better yet, VRAM on a GPU).
Sorry if this is TMI or you don't care, just thought I'd answer your question in case you didn't know!
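To put rough numbers on that, here's a back-of-envelope sketch assuming the weights are read roughly once per generated token. The model size and bandwidth figures below are illustrative, not benchmarks:

```python
# Back-of-envelope: tokens/sec is roughly memory bandwidth / model size,
# because generating each token reads (approximately) all the weights once.

def tokens_per_sec(model_gb: float, bandwidth_gb_s: float) -> float:
    """Theoretical upper bound on generation speed."""
    return bandwidth_gb_s / model_gb

model_gb = 40.0  # e.g. a 70B-class model quantized to ~4 bits per weight

for name, bw in [("NVMe SSD", 5.0),
                 ("LPDDR5X-8000 (this box)", 256.0),
                 ("High-end GPU VRAM", 1000.0)]:
    print(f"{name:25s} ~{tokens_per_sec(model_gb, bw):6.1f} tokens/sec")
```

Real-world speeds land below these ceilings, but the ordering is the point: off an SSD you'd get a fraction of a token per second, while in fast unified memory the same model becomes usable.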
No, that's a great answer, thank you. So would this also be good for running something like Stable Diffusion, or is that more GPU-bound than RAM-bound since it's images/video? As for the ChatGPT type, how would you train it on your desktop? Or do you not even need to?
It would probably be considered a mini desktop, yes, as it doesn't have a display. But its size is a big advantage: mounting it out of the way, even on the back of a monitor, putting it in a rack, or just carrying it around in general.
Small caveat worth noting: while it's ordinary system RAM in this computer, you'd allocate some or most of it to the iGPU, so it effectively becomes VRAM.
For LLMs, what drives memory use is model size plus context length; for Stable Diffusion, it's mostly resolution. There's a rough sizing sketch below.
As for your other question, I'm sure someone else on here will see this and give a better answer, as I am a bit of a noob.
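In case it helps, here's roughly how model size and context length turn into a memory footprint. The formulas are standard approximations (weights plus KV cache); the example model dimensions are made up for illustration:

```python
# Rough LLM memory estimate: weights + KV cache.
# weights  ~= n_params * bits_per_weight / 8
# KV cache ~= 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elem

def llm_memory_gb(n_params_b: float, bits_per_weight: int,
                  n_layers: int, n_kv_heads: int, head_dim: int,
                  context_len: int, kv_bytes: int = 2) -> float:
    weights = n_params_b * 1e9 * bits_per_weight / 8
    kv_cache = 2 * n_layers * n_kv_heads * head_dim * context_len * kv_bytes
    return (weights + kv_cache) / 1e9

# Illustrative 70B-class model at 4-bit quantization, fp16 KV cache:
for ctx in (8_192, 32_768, 131_072):
    gb = llm_memory_gb(70, 4, n_layers=80, n_kv_heads=8,
                       head_dim=128, context_len=ctx)
    print(f"context {ctx:>7,}: ~{gb:5.1f} GB")
```

Numbers like these are why the 128GB config matters: the weights of a big model alone eat most of a 32GB or 64GB box before you've allocated any context.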
As I understand it, LLMs, Stable Diffusion, and similar models come already trained. That's essentially what you're downloading when you grab a model from here:
https://ollama.com/library
But there's a caveat: LLMs can also leverage RAG (retrieval-augmented generation), which is kind of a second layer on top of the training, where you feed it your personal docs and relevant passages get pulled into the prompt. Think of it as a personalization layer that makes the model "smarter" or more specific without retraining it.
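To make the RAG idea concrete, here's a toy sketch of the loop: retrieve the most relevant snippets from your own docs, then stuff them into the prompt. The retrieval here is naive keyword overlap just to show the shape (real setups use embeddings and a vector store), and it assumes the official ollama Python package with a local server and a model already pulled:

```python
# Toy RAG loop: naive retrieval + prompt stuffing via a local ollama model.
# Assumes: `pip install ollama`, ollama server running, and a model pulled
# (e.g. `ollama pull llama3`). Keyword-overlap scoring is a stand-in for
# real embedding search. The sample docs are made up.
import ollama

docs = [
    "Our return policy allows refunds within 30 days of purchase.",
    "The EVO-X2 ships with 128GB of LPDDR5X-8000 unified memory.",
    "Support hours are 9am-5pm Pacific, Monday through Friday.",
]

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Score docs by words shared with the question; return the top k."""
    q_words = set(question.lower().split())
    scored = sorted(corpus,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

question = "How much memory does the EVO-X2 have?"
context = "\n".join(retrieve(question, docs))

response = ollama.chat(model="llama3", messages=[
    {"role": "system",
     "content": f"Answer using only this context:\n{context}"},
    {"role": "user", "content": question},
])
print(response["message"]["content"])
```

The model never gets retrained; it just sees your docs in the prompt, which is why RAG works fine on a box like this.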
Great info, thank you