Dr.W posted Dec 31, 2025 05:42 PM
Apple MacBook Pro (Refurb): 16.2", M1 Pro, 32GB RAM, 512GB SSD, Space Gray $779.99
$1,799 list price, 56% off, at eBay


24 Comments
This laptop is great but a lot slower for AI, or I would be all over this. It can run larger models, just much more slowly. Still a good deal IMO if you need a big laptop. Beware that it's heavy: at 4.5 lb plus the charger, it will take a toll on you for sure.
As for speed, it depends on the M-series GPU, but most of the time the recent RTX cards will be much faster. Apple has unified memory and so can run larger models more often, but small/mid-size LLMs will be much faster on RTX.
My old M1 Max with a 24-core GPU was slower than an RTX 2070 at running 5B-parameter models in Ollama. My current M4 Pro with 16 newer GPU cores still can't beat the RTX 2070 (I didn't try MLX, but for normal LLM use the older generations of RTX cards are still much, much faster).
Lots of ways to slice it, though, as there are different factors.
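For anyone wanting to reproduce the comparison above: Ollama's API response reports `eval_count` (tokens generated) and `eval_duration` (in nanoseconds), which you can turn into a tokens/sec number to compare boxes. A minimal sketch; the sample numbers below are illustrative placeholders, not measurements from these machines:

```python
# Convert Ollama's generation stats into tokens/sec for an apples-to-apples
# comparison between machines. Field names match Ollama's API response.

def tokens_per_second(eval_count: int, eval_duration_ns: int) -> float:
    """Generation throughput: tokens generated per second of eval time."""
    return eval_count / (eval_duration_ns / 1e9)

# Illustrative numbers only (not real benchmarks of these GPUs):
mac_tps = tokens_per_second(256, 16_000_000_000)  # 256 tokens in 16 s -> 16.0 t/s
rtx_tps = tokens_per_second(256, 4_000_000_000)   # 256 tokens in 4 s  -> 64.0 t/s
print(f"Mac: {mac_tps:.1f} t/s, RTX: {rtx_tps:.1f} t/s")
```

Run the same model and prompt on each machine (e.g. `ollama run <model> --verbose` prints these stats) and compare the resulting rates.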
Power faulted on the PC, so I figured I would really troubleshoot it properly since it was some weird power issue. I followed some basic troubleshooting steps and the power supply kept clicking and clicking, basically tripping.
I disconnected power from the motherboard etc. to reset it, still no go, so it's definitely the power supply...
I decided to do the last test outside since I thought it would be fun, and it was!!!
Fired it up, and then I saw this.
Everything had stock components for many years, but I was stressing the hell out of the RTX 2070 that came with the machine originally (that's why I wanted to get an RTX 5060).
I'll rent a GPU online for a while, I guess.
You can run local AI models up to 64GB (VRAM).
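A rough way to gauge what fits in a given amount of unified memory: weights alone take about parameters times bytes-per-weight (this sketch ignores KV cache and runtime overhead, so real usage is higher):

```python
# Rough weights-only memory estimate for a local LLM.
# Ignores KV cache and runtime overhead, so treat results as a lower bound.

def model_memory_gb(params_billions: float, bytes_per_weight: float) -> float:
    """Approximate weight storage in GiB."""
    return params_billions * 1e9 * bytes_per_weight / 1024**3

# 4-bit quantization stores roughly 0.5 bytes per weight:
print(round(model_memory_gb(70, 0.5), 1))  # a 70B model at 4-bit: ~32.6 GiB
```

By this estimate, a 4-bit 70B-class model's weights would roughly fit in 64GB of unified memory, with headroom left for context.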