• Toes♀@ani.social · 5 months ago

    Any information on the GPU they are pairing with it?

    Does anyone know if it’s possible to use a regular AMD or Nvidia GPU with it?

    • braindefragger@lemmy.world · 5 months ago

      This is not for someone to daily drive. You'd probably get better performance duct taping a Raspberry Pi to a Bluetooth keyboard and a 7-inch Pi display.

      • Toes♀@ani.social · 5 months ago

        haha, that doesn’t answer the question at all. But I appreciate you.

        • zelifcam@lemmy.world · 5 months ago

          It does actually.

          Edit: It's an article about how a company is going to help supply RISC-V dev boards to Framework. It's not about a consumer-ready product with a dedicated GPU.

    • morhp@lemmynsfw.com · 5 months ago

      The GPU inside the processor/SoC has the following specifications:

      • Imagination BXE-4-32 GPU with support for OpenCL 1.2, OpenGL ES 3.2, Vulkan 1.2
      • Video Decoder – H.265, H.264 4K @ 60fps or 1080p @ 30fps, MJPEG
      • Video Encoder – H.265/HEVC Encoder, 1080p @ 30fps

      I don’t think you’ll be able to use a separate/external GPU with it. Thunderbolt support is highly unlikely, and that processor has only 1 or 2 PCIe lanes (depending on how USB is connected), which are likely already used for WiFi.

    • Technus@lemmy.zip · 5 months ago

      The processor it’s using is linked in the article: https://www.cnx-software.com/2022/08/29/starfive-jh7110-risc-v-processor-specifications/

      It’s a system-on-chip (SoC) design with an embedded GPU, the Imagination BXE-4-32, which appears to be designed mainly for smart TVs and set-top boxes.

      The SoC itself only has two PCIe 2.0 lanes on separate interfaces so you can’t use both for the same device, and one is shared with the USB 3.0 interface.

      That’s not even enough bandwidth to drive an entry-level notebook GPU from over a decade ago. Seriously: the GeForce GT 520M, launched in January 2011, wants a full PCIe 2.0 x16 interface. Same with the Radeon HD 6330M. You could probably get away with just 8 lanes if you had to, but not with only one lane.
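
      To put numbers on that gap, here's a rough back-of-the-envelope sketch. The ~500 MB/s per-lane figure is the usual payload rate for PCIe 2.0 after 8b/10b encoding overhead:

```python
# PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding,
# leaving roughly 500 MB/s of payload per lane, per direction.
MB_PER_LANE_PCIE2 = 500

def pcie2_bandwidth_mb(lanes: int) -> int:
    """Approximate per-direction payload bandwidth in MB/s."""
    return lanes * MB_PER_LANE_PCIE2

print(pcie2_bandwidth_mb(1))   # one usable lane: 500 MB/s
print(pcie2_bandwidth_mb(16))  # what a GT 520M-class card expects: 8000 MB/s
```

      So a single lane delivers about 1/16th of the bandwidth those cards were designed around.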

      The other commenter wasn’t kidding when they said you could get more power out of a Raspberry Pi 4. It’s even mentioned in the article.

      • morhp@lemmynsfw.com · 5 months ago

        Seriously: the GeForce GT 520M, launched in January 2011, wants a full PCIe 2.0 x16 interface. Same with the Radeon HD 6330M. You could probably get away with just 8 lanes if you had to, but not with only one lane.

        Connecting a GPU with just one PCIe lane isn’t the biggest problem. You’ll just slow down data exchange between the CPU and GPU (mostly loading textures and vertex positions).

        If a game relies mostly on shaders and renders fairly static scenes, you’ll mainly see longer loading times; FPS shouldn’t suffer too much.
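
        As a rough illustration of why loading time, not FPS, is the main casualty (the texture sizes here are hypothetical; same ~500 MB/s-per-lane assumption for PCIe 2.0):

```python
# Time to stream a texture set into VRAM over a PCIe 2.0 link,
# assuming ~500 MB/s of payload bandwidth per lane (sizes are hypothetical).
def upload_seconds(texture_mb: float, lanes: int) -> float:
    return texture_mb / (lanes * 500)

print(upload_seconds(2048, 1))   # 2 GB of textures over one lane: ~4.1 s
print(upload_seconds(2048, 16))  # same data over x16: ~0.26 s
```

        Once the textures are resident in VRAM, per-frame rendering barely touches the link, which is why frame rates mostly survive.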

        • Technus@lemmy.zip · 5 months ago

          Given how much modern games stream data in and out of VRAM, I think it would actually be quite a significant issue. Although for modern games the 520M would probably be below minimum requirements anyway; it was just to illustrate my point.

          • morhp@lemmynsfw.com · 5 months ago

            It would obviously be “an issue” and drastically reduce performance in many cases, but compared to the built-in iGPU, you’d probably still get much better performance in lots of applications.