Dr.W posted Today 05:05 PM
Apple MacBook Pro (Refurb): 16", M1 Max, 64GB RAM, 1TB SSD, Space Gray $1135
$1,135 (list price $3,000, 62% off) at eBay
33 Comments
With the 64GB of RAM, this is a great machine for local LLM & development use. Paired with a good local model like qwen3:30b-a3b-thinking-2507, it'll comfortably run 40+ tokens/s, all while having Docker containers and loads of other apps running concurrently. Without benchmarking to compare, most people would be hard-pressed to spot a performance difference between this and, say, a brand-new M4 Max that'll run 4x this price.
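For anyone wondering what running that model locally actually looks like, here's a minimal sketch that hits Ollama's REST API from Python. The model tag is the one named above; the prompt, port, and timeout are my own assumptions, and it presumes the Ollama server is already running and the model has been pulled.

```python
# Minimal sketch: send one prompt to a locally served model through Ollama's
# REST API and report decode throughput. Assumes the Ollama server is running
# on its default port and the model has already been pulled.
import requests

MODEL = "qwen3:30b-a3b-thinking-2507"  # the model tag mentioned above

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": MODEL,
        "prompt": "In two sentences, why does memory bandwidth matter for LLM inference?",
        "stream": False,  # return a single JSON object instead of a token stream
    },
    timeout=300,
)
data = resp.json()

print(data["response"])

# Ollama reports generation stats in nanoseconds.
tokens = data["eval_count"]
seconds = data["eval_duration"] / 1e9
print(f"decode speed: ~{tokens / seconds:.1f} tokens/s over {tokens} tokens")
```

Dividing generated tokens by the reported eval time is one way the 40+ tokens/s figure could be checked on this machine.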
It's got about half the memory bandwidth; that makes a pretty big difference.
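To put the bandwidth point in perspective: during token-by-token generation the active model weights are streamed from memory for every token, so decode speed is largely capped by memory bandwidth rather than raw compute. Below is a back-of-envelope sketch, not a benchmark; the 400 GB/s figure is the M1 Max's published memory bandwidth, while the active-parameter count and quantization level are illustrative assumptions about a 30B mixture-of-experts model.

```python
# Back-of-envelope ceiling for memory-bound LLM decoding.
# Only the 400 GB/s bandwidth is a published M1 Max figure; the model numbers
# below are illustrative assumptions, not measurements.

def decode_ceiling_tokens_per_s(bandwidth_gb_s: float,
                                active_params_billions: float,
                                bytes_per_param: float) -> float:
    """Upper bound on tokens/s if each token streams all active weights once."""
    bytes_per_token = active_params_billions * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / bytes_per_token

# Assumption: ~3B active parameters per token (a 30B MoE model), quantized to
# roughly 4 bits (0.5 bytes) per parameter.
ceiling = decode_ceiling_tokens_per_s(bandwidth_gb_s=400,  # M1 Max
                                      active_params_billions=3.0,
                                      bytes_per_param=0.5)
print(f"theoretical ceiling: ~{ceiling:.0f} tokens/s")
# Real throughput lands well below the ceiling (KV-cache traffic, kernel
# overhead, other apps sharing the bus), which is why 40+ tokens/s is plausible.
```

Halving the bandwidth in that formula halves the ceiling, which is why the bandwidth gap matters more for local LLM use than core counts alone would suggest.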
AI slop about making AI slop?
Just out of curiosity, do you know of any good comparison videos/sites? I'm looking for a new laptop; I mainly do simple 4K video editing that my M1 Mac mini handles fine with 8GB. That said, I'm looking to create AI images/video locally.
I don't do AI image/video, but it is a lot more GPU intensive, so for that you may wish to consider something like an RTX 5090 on a desktop PC.
for basic editing and daily use only