• hendrik@palaver.p3x.de · 12 hours ago

    The Apple chips also have a wide interface to the RAM. That means you can run chatbots (LLMs) and other AI workloads that are memory-bound at crazy speeds compared to an Intel (or AMD) computer.

    • JohnDClay@sh.itjust.works · 12 hours ago

      Really? How fast is the memory bus compared to x86? And did they just double the bus bandwidth by doubling the memory?

      I’m dubious because they only now went to 16 GB of RAM as the base configuration, which has been standard on x86 for almost a decade.

      • hendrik@palaver.p3x.de · edited · 11 hours ago

        Depending on the chip, they have somewhere from 100 to 400 GB/s. I’m not sure of the exact numbers for Intel processors; I think the consumer ones are around 50 to 80 GB/s (roughly Alder Lake with dual-channel DDR5). Mine seems to have way less. A recent GPU will be somewhere in the range of 400 to 1000 GB/s, but consumer graphics cards stop at 24 GB of VRAM, and those flagship models are super expensive, even compared to Apple products.
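        The back-of-the-envelope math behind "memory-bound" is simple: generating one token with a dense model streams all the weights through the memory bus once, so bandwidth divided by model size gives a rough ceiling on tokens per second. A sketch of that estimate (all bandwidth and model-size numbers below are illustrative assumptions, not measurements):

```python
# Rough upper bound for memory-bound LLM decoding:
# tokens/s ceiling ~= memory bandwidth / model size in memory.
# Ignores compute, caches, and batching; real numbers will be lower.

def tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Bandwidth-limited token rate for a dense model (upper bound)."""
    return bandwidth_gb_s / model_size_gb

MODEL_GB = 4.0  # e.g. a ~7B-parameter model quantized to ~4 bits (assumption)

# Hypothetical platform bandwidths for comparison:
for name, bw in [("dual-channel DDR5 desktop", 60.0),
                 ("Apple M-series (Pro/Max class)", 400.0),
                 ("high-end discrete GPU", 1000.0)]:
    print(f"{name}: ~{tokens_per_second(bw, MODEL_GB):.0f} tokens/s ceiling")
```

        By this estimate the gap between a DDR5 desktop and a wide-bus Apple chip or a GPU is exactly the bandwidth ratio, which is why the bus width matters more than core count here.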

        The people from the llama.cpp project did some measurements, and I believe Apple’s “Metal” backend seemed to outperform the x86 machines by an order of magnitude or so. I’m not sure, though; it’s been some time since I skimmed the discussions on their GitHub page.