B&H Photo Video has the Apple MacBook Pro 14" (MRX33LL/A, Space Black) on sale for $1699. Shipping is free.
Thanks to community member Dr.Wajahat for finding this deal.
Specs:
- Apple M3 Pro 11-core CPU (5 performance cores @ 4.0GHz + 6 efficiency cores @ 2.7GHz), 14-core GPU, 16-core Neural Engine
- 14.2" (3024x1964) Liquid Retina XDR, 254ppi, 1000-nits sustained (1600-nits peak) ProMotion 120Hz Mini-LED Display
- 18GB Onboard Unified Memory
- 512GB SSD Storage
- Wi-Fi 6E 802.11ax + Bluetooth 5.3
- 1080p FaceTime HD Webcam
- Studio-Quality Three-Mic Array with High Signal-to-Noise Ratio and Directional Beamforming
- High-Fidelity Six-Speaker System with Force-Cancelling Woofers
- Backlit Magic Keyboard w/ Touch ID sensor & Force Touch Trackpad
- Ports:
- 3x Thunderbolt 4 (USB-C) ports w/ charging support
- 1x HDMI 2.1
- 1x SD card reader
- 1x Headphone jack (3.5mm)
- 1x MagSafe 3 power port (70W USB-C Power Adapter included)
- 72.4Wh Li-Po Battery
- 12.31 x 8.71 x 0.61" (3.5 lbs)
Featured Comments
Apple Silicon has had Neural Engines for a few years now. The software has yet to catch up and utilize them fully. So don't hold your breath for the next "M4".
The most prominent software that uses "AI" is local large language models, and they are all coded to use the GPU for best performance, not the NPU. I suspect it's for portability with Nvidia code: the rest of the world is running Nvidia chips, so the GPU-based approach is still king and is perhaps easier to port to Apple Silicon's GPU/Metal API. The GPU also has a lot more raw power, e.g. up to 40 GPU cores on the top M3 Max chip. I don't know how many NPU cores there are because no one cares to talk about it, and I can't remember this info either.
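For anyone curious what "coded to use the GPU" looks like in practice, here's a minimal sketch using llama-cpp-python, which ships a Metal backend on Apple Silicon. The model filename is a placeholder, not a real file; -1 for n_gpu_layers simply offloads every layer to the GPU:

```python
# Minimal sketch: run a local GGUF model on the Apple Silicon GPU via
# the Metal backend of llama-cpp-python. The model path is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-7b-instruct.Q4_K_M.gguf",  # placeholder filename
    n_gpu_layers=-1,  # -1 = offload all transformer layers to the GPU
)

out = llm("Explain unified memory in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

The same script falls back to CPU inference if the build lacks Metal support, which is part of why the GPU-first approach ports so easily.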
The next version of Siri may use some onboard NPU, like how Apple Watch Ultra 2 does local Siri language understanding so it has less lag than going to a server to figure out what you're asking, but that's very minor "AI" integration, nothing mind-blowing like LLMs. LLMs are still too big for the most part to be embedded. The smallest ones are around 1GB, but their quality may be too low for real practical use beyond a basic "Siri" type of workload. The smallest LLM I've used that was good enough has been 3.8GB in size.
Pretty sure the Adobe Creative Suite uses GPUs for its AI features rather than the NPU, since NPUs are still very low in core count, and no one benchmarks the NPU so far anyway, only the GPU via Cinebench etc.
The most important AI stuff will stay in the cloud for a long time because of the sheer processing power needed. You won't easily generate a Sora video locally without a very powerful machine, and you can't anyway, since Sora is still closed-source.
LLM-wise, the decent ChatGPT competitor is the roughly 26GB Mixtral-8x7B, and you need a machine with 32-36GB of RAM just to hold the weights in memory. I use the 3.6GB version of its baby brother, Mistral, on my 16GB machine.
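To make the memory math concrete, here's a tiny back-of-the-envelope sketch; the 85% usable-RAM fraction and 10% runtime overhead are rough assumptions, not measured values:

```python
# Rough check of the comment's numbers: does a quantized model fit in
# unified memory? Assumptions: ~10% overhead for KV cache and runtime
# buffers, and ~85% of RAM usable after the OS takes its share.
def fits_in_memory(model_gb: float, ram_gb: float, overhead: float = 0.10) -> bool:
    needed = model_gb * (1 + overhead)   # weights plus runtime overhead
    headroom = ram_gb * 0.85             # RAM left over for the model
    return needed <= headroom

print(fits_in_memory(26.0, 36.0))  # Mixtral-8x7B on a 36GB machine -> True
print(fits_in_memory(26.0, 16.0))  # Mixtral-8x7B on a 16GB machine -> False
print(fits_in_memory(3.6, 16.0))   # small quantized Mistral on 16GB -> True
```

Unified memory cuts both ways here: the GPU can address nearly all of it, but the model still has to share that pool with everything else running.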
Wow this thing is gonna FLY so fast she will be amazed. Great father.
"AI" is buzzword, pronounced Ayyyyyy -- like the Fonz. What do you plan to actually USE this machine for?