• 0 Posts
  • 2 Comments
Joined 11 months ago
Cake day: November 1st, 2023

  • Not offensive at all. I’m currently working on neurolinguistic models in deep learning as a lecturer and researcher at a university in London. This work requires real-time processing of MRI and EEG data feeds, which are integrated into deep learning models to study consonant and vowel production activity in the brain. I can’t afford any lag in that pipeline, and we’re in the process of upgrading to 9T MRIs, which will provide a significant increase in data bandwidth. Previously, our fMRI scans had a resolution of 1-2 mm, while higher-resolution scans, such as diffusion tensor imaging (DTI), can reach 0.5 mm or less.
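
    At a high level that pipeline is just a bounded producer/consumer loop. Purely as a minimal sketch (not the actual lab setup), assuming a single acquisition thread and a single inference thread, with acquire_frame and model as hypothetical placeholders:

    ```python
    # Illustrative sketch only: a small bounded queue between the scanner feed and
    # the model keeps processing real-time by dropping stale frames rather than lagging.
    import queue

    frames: queue.Queue = queue.Queue(maxsize=4)  # small buffer => bounded latency

    def producer(acquire_frame):
        """Acquisition thread: push each new EEG/fMRI frame as it arrives."""
        while True:
            frame = acquire_frame()
            try:
                frames.put_nowait(frame)
            except queue.Full:
                try:
                    frames.get_nowait()   # drop the oldest frame instead of falling behind
                except queue.Empty:
                    pass
                frames.put_nowait(frame)

    def consumer(model):
        """Inference thread: feed live frames straight into the deep learning model."""
        while True:
            _ = model(frames.get())
    ```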

    Assuming a 1 mm resolution, the prefrontal cortex, with a volume of approximately 700 cm³, would contain around 700 million pixels, and Wernicke’s area, with a volume of around 5 cm³, about 5 million. So with the upgrade we are dealing with 2.8 billion and 20 million pixels, respectively. However, most of these are filtered out when no activity is present, which is typically around 90% of the time, reducing it to approximately 300 million pixels during live processing. A 4K display contains around 10 million pixels, so we essentially have the equivalent of 30 of them as inputs, although I’m only focusing on selective areas at any given time for deep learning, which amounts to around 10 million pixels. Running this through deep learning in real time to detect patterns is extremely resource-intensive, so every upgrade is highly beneficial. Features like ray tracing and Dynamic Caching can also be utilized to support this; the latter is crucial for reducing memory wastage in simplex analysis, as opposed to static memory allocation sized as in compound analysis, where you lose 40-50% of your GPU memory.
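
    Taking the post-upgrade counts above as given, the filtering and 4K-equivalence arithmetic works out roughly like this (a back-of-envelope sketch, nothing more):

    ```python
    # Back-of-envelope check of the figures quoted above.
    prefrontal_post_upgrade = 2.8e9   # quoted count after the 0.5 mm upgrade
    activity_fraction = 0.10          # ~90% is filtered out when no activity is present
    pixels_per_4k = 3840 * 2160       # roughly 8.3 million pixels per 4K display

    active = prefrontal_post_upgrade * activity_fraction
    print(f"active during live processing: {active:.1e}")           # 2.8e+08, i.e. ~300 million
    print(f"4K-display equivalents: {active / pixels_per_4k:.0f}")  # ~34, on the order of 30
    ```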

    Additionally, my university’s computing resources leave much to be desired, and the only alternative is using GPU farms, which introduce significant delays; I want to proceed as autonomously as possible. I can also connect my MacBook Pro to my Mac Pro (M2 Ultra with a 24-core CPU, 76-core GPU, and 192GB RAM) in my office over a 10Gbps LAN to use it as a transcoder and share the workload, which gives much lower latency than off-campus GPU banks (~10ms round trip versus 150-200ms).
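
    For a sense of why the local link wins, here is a rough round-trip comparison; the 50 MB payload and the 1 Gbps off-campus link are made-up illustrative values, while the RTTs are the ones quoted above:

    ```python
    # Rough cost of shipping one chunk of work to the Mac Pro over the 10 Gbps LAN
    # versus an off-campus GPU farm (payload size and farm link speed are assumptions).
    def round_trip_ms(payload_mb: float, link_gbps: float, rtt_ms: float) -> float:
        serialization_ms = payload_mb * 8 / (link_gbps * 1000) * 1000  # megabits over Mbps, in ms
        return serialization_ms + rtt_ms

    print(f"Mac Pro over 10 Gbps LAN: {round_trip_ms(50, 10, 10):.0f} ms")   # ~50 ms
    print(f"off-campus GPU farm:      {round_trip_ms(50, 1, 175):.0f} ms")   # ~575 ms
    ```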

    I also have a semi-professional hobby of 3D modeling, which is again extremely resource-intensive when rendering motion from a complex mesh with dynamic shading. The increases in both CPU and GPU power should help speed up this process; it doesn’t have to happen in real time, but it is time-constrained.

    I am somewhat disappointed that memory bandwidth hasn’t seen an increase with the introduction of ECC RAM, which has been around for a while and seems like the next logical step. It’s available in the Mac Pro, offering 800 GB/s bandwidth, and there doesn’t appear to be a limitation to offering it in the MacBook Pro (although Apple’s additions to ARM are not typically open source, making it hard to know for sure until a teardown). It’s also worth noting that the M3 MacBook Pro offers nit syncing with the Studio Display but not with the Pro Display XDR, which is more commonly used in research. Sitting at a desk for 12 hours running models and switching between multiple screens with different nit settings that don’t refresh simultaneously strains the eyes. Perhaps they’ll add this feature later, or it may never be supported. In the meantime, I’ll run a backend like I did before, using delta time to keep them in sync, but that disables ProMotion.
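
    The delta-time workaround amounts to stepping every screen from one shared wall-clock timeline instead of each display’s own refresh clock. A minimal sketch of that loop, with draw_to_display as a hypothetical placeholder and 60 Hz as an assumed shared rate (this is not any Apple API):

    ```python
    # Minimal delta-time loop: both displays are drawn from the same fixed-step
    # timeline, so their content stays in sync even when their refresh rates differ.
    # Pinning everything to one shared rate is also why ProMotion's variable refresh is lost.
    import time

    TARGET_HZ = 60.0
    FRAME_DT = 1.0 / TARGET_HZ

    def draw_to_display(display_id: int, sim_time: float) -> None:
        pass  # hypothetical placeholder: push the frame for sim_time to this display

    def run(displays=(0, 1)) -> None:
        sim_time = 0.0
        accumulator = 0.0
        prev = time.perf_counter()
        while True:
            now = time.perf_counter()
            accumulator += now - prev
            prev = now
            while accumulator >= FRAME_DT:   # advance the shared timeline in fixed steps
                sim_time += FRAME_DT
                accumulator -= FRAME_DT
            for d in displays:               # every display sees the same sim_time
                draw_to_display(d, sim_time)
            time.sleep(max(0.0, FRAME_DT - (time.perf_counter() - now)))
    ```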

    I do also basically replace everything every year. I’ve only skipped a few iPhone and Apple Watch models where the updates just seemed totally insignificant, so maybe a bit of an Apple tech obsession (well, tech otaku all around really lol).


  • I think what you’ve put is well-written and informative, and I agree with your analysis of the M3 Pro performance gains. It’s disappointing that Apple didn’t provide a direct comparison to the M2 Pro, and the reduced number of performance cores and lower memory bandwidth suggest that the performance gains over the M2 Pro are likely to be minimal, if any.

    They didn’t seem to post a GPU comparison between M2 Pro and M3 Pro either? Perhaps I missed that as I’m a Max user.

    However, as you mention, the M3 Pro still offers a significant performance boost over the M1 Pro, and it’s likely that the overall performance will still be excellent. For users who are upgrading from an M2 Max, the performance gains of the M3 Max should be more noticeable, especially in terms of CPU performance.

    Overall, I think your post is a fair and balanced assessment of the M3 Pro and M3 Max performance gains. It’s important to note that Apple hasn’t released any benchmarks yet, so it’s too early to say for sure what the real-world performance difference between the M2 Pro and M3 Pro will be. However, based on the information we have so far, it seems likely that the gains will be modest (I suspect around the 5% mark, which Apple doesn’t really wish to tout as an overwhelming gain), and there may be some losses where active memory is key. Perhaps Dynamic Caching will make up for some of this, but it’s disappointing to see a loss in bandwidth at this price scale.

    I’m upgrading from an M2 Max (12-core CPU, 38-core GPU, 96GB RAM) to an M3 Max (16-core CPU, 40-core GPU, 128GB RAM), and I’m hoping that will offer a significant performance boost based on Apple’s claims; however, if benchmarking indicates otherwise, I’ll cancel my order and stick with the M2 Max for another year (well, I’ve had it 9 months).