Has Apple deliberately nerfed the M3 Pro CPU? And for what reason?
From Apple’s slides starting at 10:29:
M3 = 35% faster CPU than M1; 20% faster than M2
M3 Pro = 20% faster CPU than M1 Pro; No comparison to M2 Pro was given! 🤔
M3 Max = 80% faster than M1 Max; 50% faster than M2 Max
When Apple announced the M2 Pro, they claimed it was 20% faster than the M1 Pro. So are we to assume the M3 Pro has no performance improvement this generation?
They’ve reduced the number of performance cores from eight to six, and, as per the OP, memory bandwidth is down to 150GB/s from the 200GB/s of the M1 Pro.
It seems that reducing the number of performance cores in favour of efficiency cores has wiped out whatever overall performance uplift the M3 Pro would otherwise have had over the M2 Pro. We’ll have to wait for benchmarks to be certain, but I’m sure Apple’s omission of a comparison to the M2 Pro is very telling.
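If you do the back-of-the-envelope math on those claims (taking the marketing percentages and the quoted bandwidth figures at face value), it comes out flat:

```python
# Implied M3 Pro vs M2 Pro uplift, using only Apple's published claims.
m3_pro_vs_m1_pro = 1.20   # Apple: M3 Pro is 20% faster than M1 Pro
m2_pro_vs_m1_pro = 1.20   # Apple: M2 Pro was 20% faster than M1 Pro
implied_uplift = m3_pro_vs_m1_pro / m2_pro_vs_m1_pro - 1
print(f"Implied M3 Pro over M2 Pro: {implied_uplift:+.0%}")   # +0%

# Memory bandwidth change quoted in the OP.
m1_pro_bw, m3_pro_bw = 200, 150   # GB/s
print(f"Bandwidth change: {m3_pro_bw / m1_pro_bw - 1:+.0%}")  # -25%
```

In other words, the quoted figures imply roughly zero generational uplift and a 25% bandwidth cut, which lines up with the omission on the slide.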
These machines already had incredible battery life, so I’m not sure why Apple would choose to sacrifice performance for even more of it. The people buying them, myself included, are pros who need performance, and the rest of the M3 family got CPU performance improvements, so why not the M3 Pro?
Not offensive at all. I’m currently working on neurolinguistic models in deep learning as a lecturer and researcher at a university in London. These models require real-time processing of MRI and EEG data feeds, which are integrated into deep learning models to study consonant and vowel production activity in the brain. I can’t afford any lag in this pipeline, and we’re in the process of upgrading to 9T MRIs, which will significantly increase the data rate. Previously, our fMRI scans had a resolution of 1-2 mm, while higher-resolution scans, such as diffusion tensor imaging (DTI), can reach 0.5 mm or less.
Assuming a 1 mm resolution, the prefrontal cortex, with a volume of approximately 700 cm³, would contain around 700 million pixels. Wernicke’s area, with a volume of around 5 cm³, would contain about 5 million pixels. So, with the upgrade, we are dealing with 2.8 billion and 20 million pixels, respectively. However, most of these are filtered out when no activity is present, which is typically around 90% of the time, reducing it to approximately 300 million pixels during live processing. A 4K display contains around 10 million pixels, so we essentially have the equivalent of 30 of them as inputs. However, I’m only focusing on selective areas at any given time for deep learning, which amounts to around 10 million pixels. Running this through deep learning in real time to detect patterns is extremely resource-intensive, so every upgrade is highly beneficial. Features like ray tracing and Dynamic Caching can also be used to support this; Dynamic Caching in particular is crucial for reducing memory wastage in simplex analysis, where a static memory allocation sized the same as for a compound analysis can cost you 40-50% of your GPU memory.
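For concreteness, here’s a minimal sketch of that “drop inactive voxels” step (the array shape, threshold, and names are purely illustrative, not the actual pipeline):

```python
import numpy as np

# Hypothetical volume for illustration only: one frame of per-voxel activity values.
frame = np.random.rand(256, 256, 192).astype(np.float32)
activity_threshold = 0.9  # assume ~90% of voxels fall below this and are discarded

# Boolean mask of "active" voxels for this frame.
active_mask = frame > activity_threshold

# Keep only the active voxels and their coordinates; this sparse
# representation is what would be fed to the downstream model.
active_coords = np.argwhere(active_mask)
active_values = frame[active_mask]

print(f"kept {active_values.size:,} of {frame.size:,} voxels "
      f"({100 * active_values.size / frame.size:.1f}%)")
```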
Additionally, my university’s computing resources leave much to be desired, and the only alternative is using GPU farms, which introduce significant delays. I want to proceed as autonomously as possible. I can also connect my MacBook Pro to my Mac Pro (M2 Ultra with a 24-core CPU, 76-core GPU, and 192GB RAM) in my office over a 10Gbps LAN, using it as a transcoder to share the workload, which gives much lower latency than off-campus GPU farms (~10ms versus 150-200ms round trip).
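To illustrate why the local link matters, a rough per-batch turnaround estimate (only the 10Gbps link and the ~10ms vs 150-200ms round-trip figures come from above; the payload size and the farm’s assumed 1Gbps uplink are placeholders):

```python
def round_trip_seconds(payload_mb: float, link_gbps: float, rtt_ms: float) -> float:
    """Crude estimate: payload transferred each way plus the network round trip."""
    transfer_s = 2 * (payload_mb * 8) / (link_gbps * 1000)  # MB -> megabits, Gbps -> Mbps
    return transfer_s + rtt_ms / 1000

payload_mb = 200  # hypothetical batch shipped to the other machine

lan = round_trip_seconds(payload_mb, link_gbps=10, rtt_ms=10)    # office Mac Pro over the 10Gbps LAN
farm = round_trip_seconds(payload_mb, link_gbps=1, rtt_ms=175)   # off-campus GPU farm (assumed 1Gbps uplink)
print(f"LAN: ~{lan:.2f}s per batch   vs   GPU farm: ~{farm:.2f}s per batch")
```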
I also have a semi-professional hobby of 3D modeling, which is again extremely resource-intensive when rendering motion from a complex mesh with dynamic shading. The increases in both CPU and GPU power should help speed this up, although it doesn’t have to happen in real time, just within a time budget.
I am somewhat disappointed that memory bandwidth hasn’t seen an increase with the introduction of ECC RAM, as it has been around for a while and seems like the next logical step. It’s available in the Mac Pro, offering 800 GB/s bandwidth, and there doesn’t appear to be a limitation preventing it in the MacBook Pro (although Apple’s additions to ARM are not typically open source, making it hard to know for sure until a teardown). It’s also worth noting that the M3 MacBook Pro offers nit syncing with the Studio Display but not with the Pro Display XDR, which is more commonly used in research. Sitting at a desk for 12 hours running models and switching between multiple screens with different nit settings that don’t refresh simultaneously strains the eyes. Perhaps they’ll add this feature later, or it may simply not be supported. In the meantime, I’ll run a backend like I did before, using delta time to keep them in sync, but that disables ProMotion.
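The delta-time workaround at the end is essentially a fixed-cadence presentation loop; a bare-bones sketch of the idea (the 60Hz target and present_frame stub are placeholders, and locking to one fixed rate is exactly why ProMotion ends up disabled):

```python
import time

TARGET_HZ = 60.0                   # common rate both displays can hold (placeholder)
FRAME_BUDGET = 1.0 / TARGET_HZ

def present_frame(dt: float) -> None:
    """Placeholder: update/draw whatever is shown on both screens."""
    pass

last = time.perf_counter()
for _ in range(int(TARGET_HZ) * 10):    # run ~10 seconds for the sketch
    now = time.perf_counter()
    dt = now - last                     # delta time since the previous frame
    last = now

    present_frame(dt)                   # advance by dt so both outputs stay in step

    # Sleep off the remainder of the frame budget to hold a fixed cadence.
    remaining = FRAME_BUDGET - (time.perf_counter() - now)
    if remaining > 0:
        time.sleep(remaining)
```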
I do also actually replace everything every year, basically. I’ve only skipped a few models of iPhone and Apple Watch where the updates just seemed totally insignificant, so maybe it’s a bit of an Apple tech obsession (well, tech otaku all around really, lol).
Wow, amazing, thanks for the insight.
The university must have a nice tech budget