Minisforum has select AtomMan G7 Ti Barebone Desktops for the prices listed after applying code BF120 at checkout. Shipping is free. Thanks to Community Member Suryasis for finding this deal.
Specs (G7 Ti):
- Intel Core i9-14900HX (8P+16E, 24C/32T), 1.6 GHz base (5.8 GHz Turbo, 36MB L3 Cache)
- RTX 4070 8GB GDDR6 Graphics, 140W TGP
- 2x 4400 RPM Fans + 5 Heat Pipes
- 2x DDR5 Slot (Up to 5600 MT/s, 96GB)
- 2x M.2 2280 PCIe Gen 4.0 Slot
- 1x M.2 2230 Slot For Wi-Fi (Populated)
- Intel BE200 Wi-Fi 7 + Bluetooth 5.4
- Ports:
- 2x USB 3.2 Gen 2 Type-A 10 Gbps
- 2x USB 3.2 Gen 1 Type-A 5 Gbps
- 1x USB4 Type-C 40 Gbps, Alt DP 1.4
- 1x HDMI 2.1 (8K@60Hz, 4K@120Hz)
- 1x SD Card Reader
- 1x Audio Combo Jack
- 1x 2.5 Gbps RJ-45
- 1x DC 19V Power Port
- 1x Power profile Switch Button
- 1x Power Button
- DC 19V 280W Power Adapter
Top Comments
The cooling is way better on this one. Despite having a 24-core i9-14900HX, under an all-core load in Cinebench R24 it runs significantly cooler than the Core Ultra 9 185H in the Asus NUC. The temps never went over 90°C, and the average was just around 80-82°C.
Unlike other RTX 4070 Laptop GPUs, which max out around 110W in 99% of cases, this device can actually push it to 125W (the default TDP of the RTX 4070 Laptop) almost all the time, and even near 140W in highly GPU-dependent games like Cyberpunk.
While I love minipcs, these prices are a joke.
My current machine was a little cheaper with slightly better specs, but in a much bigger case.
Anyways, for non-LLM usage, I used to look at their previous version that came with a 3070, but now I think a Beelink with the EX dock is probably the better way to go.
Wow that's a big ole lolacoaster.
There are a few models out there, such as Llama 3, that come in 1B, 3B, 7B, 11B, 90B, etc. sizes. For a 3B model (text chat), you need about 24GB of VRAM for a context size of 128K.
You won't be able to load that on a 4070, and while the actual inference speed is much faster on the 4070, what good does it do if you cannot even load the parameters? You're going to fall back to system RAM, and your speed will be abysmal. If you deploy the Qwen 2.5 72B model, which takes about 45GB of VRAM to run, you get less than a token per second on a 4090 because the VRAM cannot hold it, whereas on an M3 or M4 Max you can still get a reasonable 7-8 tokens per second. You need a 48GB RTX 6000 Ada from Nvidia to go above 24GB of VRAM and load larger model sizes.
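To put rough numbers on the point above, here is a minimal back-of-the-envelope VRAM estimator. The formula (params × bytes per param, plus KV cache, times an overhead fudge factor) and all the constants are my assumptions for illustration, not measured figures:

```python
# Rough VRAM estimate for loading an LLM: weights + KV cache + overhead.
# All numbers are back-of-the-envelope assumptions, not measurements.

def vram_gb(params_b, bytes_per_param=2.0, kv_cache_gb=0.0, overhead=1.2):
    """Approximate VRAM (GB) needed to load a model.

    params_b        -- parameter count in billions (e.g. 72 for Qwen 2.5 72B)
    bytes_per_param -- 2 for FP16/BF16, 1 for 8-bit, 0.5 for 4-bit quantization
    kv_cache_gb     -- extra room for the KV cache (grows with context length)
    overhead        -- fudge factor for activations and runtime buffers
    """
    weights_gb = params_b * bytes_per_param  # 1e9 params * bytes / 1e9 bytes = GB
    return (weights_gb + kv_cache_gb) * overhead

# A 72B model in FP16 is far beyond any 8GB laptop GPU:
print(round(vram_gb(72), 1))                        # ~172.8 GB
# Even 4-bit quantized it is still too big for a 24GB card:
print(round(vram_gb(72, bytes_per_param=0.5), 1))   # ~43.2 GB
```

Under these assumptions, anything that doesn't fit in the GPU's VRAM spills to system RAM, which is exactly the slowdown the comment describes.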
All mobile laptop components. This 4070 is more like a 3060 desktop.