• Emotet@slrpnk.net · 5 months ago

    Those are some very bold and generic claims for an accelerator chip startup that doesn’t provide any details or benchmarks beyond some basic diagrams and graphs while it’s looking for funding and partners.

    Kind of reminds me of basically every tech kickstarter ever.

  • MalReynolds@slrpnk.net · 5 months ago

    “Extraordinary claims require extraordinary evidence” (a.k.a., the Sagan standard)

    Should I even click?

  • Codex@lemmy.world · 5 months ago

    Valtonen says that this has made the CPU the weakest link in computing in recent years.

    This is contrary to everything I know as a programmer right now. The CPU is fast, and excess cores still go underutilized, because efficient parallel programming is a capital-H Hard problem.

    The weakest link in computing is RAM, which is why CPUs have three levels of cache to squeeze the most use out of the bottlenecked memory bus. Whole software architectures are modeled around optimizing cache efficiency.

    I’m not sure I understand how just adding more cores as a coprocessor (not even a floating-point-optimized unit, which GPUs already are) will boost performance so much. Unless the thing can magically schedule single-threaded apps as parallel.

    Even then, it feels like market momentum is already behind TPUs and “AI-enhancement” boards as the next required daughterboards after GPUs.

    • Emotet@slrpnk.net · 5 months ago

      Eh, as always: It depends.

      For example: memcpy, which is one of their claimed 100x performance tasks, can be I/O-bound on systems where the CPU doesn’t have many memory channels. But with a well-optimized architecture, e.g. modern server CPUs with many more memory channels available, it’s actually pretty hard to saturate the memory bandwidth completely.

    • ramble81@lemm.ee · 5 months ago

      Glad I didn’t have to scroll far to find this. That’s right where my mind went. Though if you think about it, it’s functionally no different than GPUs, upcoming NPUs, E-cores on chips or other ASICs.

  • Warl0k3@lemmy.world · 5 months ago (edited)

    Big if true. Going to need some real convincing benchmarks to believe this one, though. From a read, it seems like they’re implementing ASICs on processor dies, which is not at all a new concept.