• kibiz0r · 6 months ago

    Article says it’s likely an OpenAI partnership.

      • AliasAKA@lemmy.world · 6 months ago

        Depends. If they get access to the code OpenAI is using, they could absolutely try to leapfrog them. They could also just be looking at ways to get near GPT-4 performance locally, on an iPhone. They’d need a lot of tricks, but succeeding there would be a pretty big win for Apple.

        • technocrit@lemmy.dbzer0.com · 6 months ago

          People are really racing to destroy the planet so their phone can make a crappy summary of what’s on Wikipedia.

          • AliasAKA@lemmy.world · 6 months ago

            Not even a summary of what’s on Wikipedia, usually a summary of the top 5 SEO crap webpages for any given query.

        • abhibeckert@lemmy.world · 6 months ago (edited)

          > near GPT-4 performance locally, on an iPhone

          Last I checked, iPhones don’t have terabytes of RAM. Nothing that runs on a small battery-powered device is ever going to be in the ballpark of ChatGPT, at least not in the foreseeable future.
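
          For scale, some napkin math (a rough sketch only: the ~1.8T parameter figure for GPT-4 is an unverified leak, and the numbers cover weights alone, ignoring activations, KV cache, and runtime overhead):

          ```python
          # Napkin math: bytes needed just to hold model weights.
          # The 1.8T parameter count for GPT-4 is an unconfirmed leak,
          # used here only as an assumption for scale.
          def weights_gb(params: float, bits_per_param: int) -> float:
              """GB needed to store the weights at a given precision."""
              return params * bits_per_param / 8 / 1e9

          for name, params in [("GPT-4 (rumored)", 1.8e12), ("7B", 7e9)]:
              for bits in (16, 4):
                  print(f"{name} @ {bits}-bit: {weights_gb(params, bits):,.0f} GB")

          # GPT-4 (rumored) @ 16-bit: 3,600 GB  <- hence "terabytes"
          # GPT-4 (rumored) @ 4-bit:    900 GB  <- still far beyond any phone
          # 7B @ 16-bit:                 14 GB
          # 7B @ 4-bit:                   4 GB  <- roughly what fits today
          ```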

          • AliasAKA@lemmy.world · 6 months ago

            They don’t, but with quantization and distillation, as well as clever use of fast SSD storage (they published a paper on this exact topic last year), you can get a really decent model working on device. People are already doing this with models like OpenHermes and Mistral (granted, those are 7B models, but I could easily see Apple doubling the RAM, optimizing models with the research paper I mentioned above, and getting 40B models running entirely locally). If the start of the network is good, a 40B model could take care of the vast majority of Siri queries without ever reaching out to a server.
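
            To make that last claim concrete: the gate could be as simple as answering locally when the on-device model is confident and escalating otherwise. A minimal sketch, where the model calls, the mean-logprob heuristic, and the threshold are all hypothetical stand-ins:

            ```python
            CONFIDENCE_THRESHOLD = -1.0  # hypothetical cutoff on mean token logprob

            def run_local_model(query: str) -> tuple[str, list[float]]:
                """Stand-in for an on-device model: returns an answer plus
                per-token logprobs (a real build would call Core ML here)."""
                return "local answer to: " + query, [-0.3, -0.8, -0.5]

            def run_server_model(query: str) -> str:
                """Stand-in for the server-side fallback."""
                return "server answer to: " + query

            def answer(query: str) -> str:
                text, logprobs = run_local_model(query)
                confidence = sum(logprobs) / len(logprobs)  # crude certainty proxy
                if confidence >= CONFIDENCE_THRESHOLD:
                    return text                   # handled entirely on device
                return run_server_model(query)    # low confidence: escalate

            print(answer("set a timer for 10 minutes"))
            ```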

            For what it’s worth, according to their WWDC note, they’re basically trying to do this.
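
            The paper referenced above is presumably Apple’s “LLM in a flash” (2023), whose core move is keeping only a hot subset of weights in DRAM and streaming the rest from flash on demand. A heavily simplified sketch of that idea; the sizes, names, and access pattern are invented for illustration, and the real paper adds sparsity prediction and bundled flash reads:

            ```python
            import random
            from collections import OrderedDict

            NUM_CHUNKS = 1_000     # weight chunks (e.g. groups of FFN neurons)
            ACTIVE_PER_TOKEN = 50  # sparsity: chunks one token actually touches
            DRAM_BUDGET = 200      # chunks that fit in RAM; the rest stay on flash

            def read_chunk_from_flash(chunk_id: int) -> bytes:
                """Stand-in for the slow flash read the paper tries to minimize."""
                return b"weights-%d" % chunk_id

            class ChunkCache:
                """LRU cache: hot weight chunks live in DRAM."""
                def __init__(self, capacity: int):
                    self.capacity = capacity
                    self.store = OrderedDict()
                    self.hits = self.misses = 0

                def get(self, chunk_id: int) -> bytes:
                    if chunk_id in self.store:
                        self.store.move_to_end(chunk_id)   # recently used
                        self.hits += 1
                        return self.store[chunk_id]
                    self.misses += 1
                    data = read_chunk_from_flash(chunk_id) # miss: hit flash
                    self.store[chunk_id] = data
                    if len(self.store) > self.capacity:
                        self.store.popitem(last=False)     # evict LRU chunk
                    return data

            # Consecutive tokens reuse similar chunks; fake that locality with
            # a slowly drifting window so caching actually pays off.
            cache = ChunkCache(DRAM_BUDGET)
            start = 0
            for _token in range(100):
                start = (start + random.randint(0, 2)) % NUM_CHUNKS
                for offset in random.sample(range(150), ACTIVE_PER_TOKEN):
                    cache.get((start + offset) % NUM_CHUNKS)

            print(f"DRAM hit rate: {cache.hits / (cache.hits + cache.misses):.0%}")
            ```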