• JustARaccoon@lemmy.world · 47 minutes ago

    I’m confused, how can a company that’s gained numerous advantages from being non-profit just switch to a for-profit model? Weren’t a lot of the advantages (like access to data and scraping) given with the stipulation that it’s for a non-profit? This sounds like it should be illegal to my brain

  • pjwestin@lemmy.world · 2 hours ago

    I really don’t understand why they’re simultaneously arguing that they need access to copyrighted works in order to train their AI while also dropping their non-profit status. If they were at least ostensibly a non-profit, they could pretend that their work was for the betterment of humanity or whatever, but now they’re basically saying, “exempt us from this law so we can maximize our earnings.” …and, honestly, our corrupt legislators wouldn’t have a problem with that were it not for the fact that bigger corporations with more lobbying power will fight against it.

  • FlashMobOfOne@lemmy.world · 2 hours ago

    Sounds like another WeWork or Theranos in the making, except we already know the product doesn’t do what it promises.

    • lando55@lemmy.world · 1 hour ago

      What does it actually promise? AI (namely generative and LLM) is definitely overhyped in my opinion, but admittedly I’m far from an expert. Is what they’re promising to deliver not actually doable?

      • naught101@lemmy.world · 1 hour ago

        It literally promises to generate content, but I think the implied promise is that it will replace parts of your workforce wholesale, with no drop in quality.

        It’s that last bit where the drama is going to happen.

      • Smokeydope@lemmy.world · 1 hour ago

        It delivers on what it promises for many people who use LLMs. They can be used for coding assistance, setting up automated customer support, tutoring, processing documents, structuring lots of complex information, generally accurate knowledge on many topics, acting as an editor for your writing, and lots more. It’s a rapidly advancing pioneer technology, like computers were in the 90s, so every 6 months to a year brings a new breakthrough in overall intelligence or a new ability. The new LLM models can now process images and audio as well as text.

        The problem for OpenAI is that they have competitors who will absolutely show up to eat their lunch if they sink as a company: Facebook/Meta with their Llama models, Mistral AI with all their models, Alibaba with Qwen, and some good smaller competition too, like the OpenHermes team. All of these big tech companies have open-sourced some models so you can tinker with and finetune them at home, while OpenAI remains closed. Most of them offer their cloud models at very competitive pricing, especially Mistral.

        The people who say AI is a trendy useless fad don’t know what they’re talking about, or are just upset at AI. I’m part of the local LLM community and have been playing around with open models for months, pushing my computer’s hardware to its limits. It’s very cool seeing just how smart they really are, and what a computer that simulates human thought processes and knows a little bit of everything can actually do to help me in daily life. Terence Tao, superstar genius mathematician, describes the newest high-end model from OpenAI as improving from an “incompetent graduate” to a “mediocre graduate”, which essentially means AI is now generally smarter than the average person in many regards. This month several competitor LLM models were released which, while much smaller in size, also beat that big OpenAI model on many benchmarks. Neural networks are here and they are only going to get better. We’re in for a wild ride.

        • Stegget@lemmy.world · 28 minutes ago

          My issue is that I have no reason to think AI will be used to improve my life. All I see is a tool that will rip, rend and tear through the tenuous social fabric we’re trying to collectively hold on to.

  • werefreeatlast@lemmy.world · 3 hours ago

    Oh shit! Here we go. At least we didn’t hand them 20 years of personal emails or direct interfamily communications.

  • Aceticon@lemmy.world · 3 hours ago

    What! A! Surprise!

    I’m shocked, I tell you, totally and utterly shocked by this turn of events!

  • ramble81@lemm.ee · 3 hours ago

    So where are they all going? I doubt everyone is gonna find another non-profit or any altruistic motives, so <insert big company here> just snatches up more AI resources to try to grow their product.

  • Kyrgizion@lemmy.world · 8 hours ago

    Canceled my sub as a means of protest. I used it for research and testing purposes, and $20 wasn’t that big of a deal. But I will not knowingly support this asshole if whatever his company produces isn’t going to benefit anyone other than him and his cronies. Voting with our wallets may be the very last vestige of freedom we have left, since money equals speech.

    I hope he gets raped by an irate Roomba with a broomstick.

  • kippinitreal@lemmy.world · 9 hours ago

    Putting my tin foil hat on… Sam Altman knows the AI train might be slowing down soon.

    The OpenAI brand is the most valuable part of the company right now, since the models from Google, Anthropic, etc. can beat or match ChatGPT, but they aren’t taking off coz they aren’t as cool as OpenAI.

    The business model of training & running these models is not sustainable. If there is any money to be made, it is NOW, while the speculation is highest. The nonprofit is just getting in the way.

    This could be wishful thinking coz fuck corporate AI, but no one can deny AI is in a speculative bubble.

    • somethingsnappy@lemmy.world · 9 hours ago

      Take the hat off. This was the goal. Whoops, gotta cash in and leave! I’m sure it’s super great, but I’m gone.

        • frunch@lemmy.world · 5 hours ago

          It honestly just never occurred to me that such a transformation was allowed/possible. A nonprofit seems to imply something charitable, though obviously that’s not the true meaning of it. Still, it would almost seem like the company benefits from the goodwill that comes with being a nonprofit but then gets to transform that goodwill into real gains when they drop the act and cease being a nonprofit.

          I don’t really understand most of this shit though, so I’m probably missing some key component that makes it make a lot more sense.

          • sunzu2@thebrainbin.org · 55 minutes ago

            A nonprofit seems to imply something charitable, though obviously that’s not the true meaning of it

            A lifetime of propaganda has got people confused lol

            Nonprofit merely means that their core income-generating activities are not subject to income tax.

            While some nonprofits are charities, many are just shelters for rich people’s bullshit behaviors: foundations, lobby groups, propaganda orgs, political campaigns, etc.

            • frunch@lemmy.world · 16 minutes ago

              Thank you! Like I said, I figured there’s something I’m missing; that would appear to be it.

  • barnaclebutt@lemmy.world · 11 hours ago

    I’m sure they were dead weight. I trust open AI completely and all tech gurus named Sam. Btw, what happened to that Crypto guy? He seemed so nice.

  • N0body@lemmy.dbzer0.com · 12 hours ago

    There’s an alternate timeline where the non-profit side of the company won, Altman the Conman was booted and exposed, and OpenAI kept developing machine learning in a way that actually benefits actual use cases.

    Cancer screenings approved by a doctor could be accurate enough to save so many lives and so much suffering through early detection.

    Instead, Altman turned a promising technology into a meme stock with a product released too early to ever fix properly.

    • Petter1@lemm.ee · 1 hour ago

      Or we get to a time where we send a reprogrammed terminator back in time to kill Altman 🤓

    • patatahooligan@lemmy.world · 7 hours ago

      No, there isn’t really any such alternate timeline. Good honest causes are not profitable enough to survive against the startup scams. Even if the non-profit side won internally, OpenAI would just be left behind, funding would go to its competitors, and OpenAI would shut down. Unless you mean a radically different alternate timeline where our economic system is fundamentally different.

      • rsuri@lemmy.world · 2 hours ago

        I mean, Wikipedia managed to do it. It just requires honest people to retain control long enough. I think it was allowed to happen in Wikipedia’s case because the wealthiest/greediest people hadn’t caught on to the potential yet.

        There’s probably an alternate timeline where Wikipedia is a social network with paid verification, where corporate interests write articles about their own companies and state-funded accounts spread conspiracy theories.

      • Petter1@lemm.ee · 1 hour ago

        There are infinite timelines, so it has to exist some(where/when/[insert w-word for additional dimension]).

      • mustbe3to20signs@feddit.org · 7 hours ago

        AI models can outmatch most oncologists and radiologists at recognizing early tumor stages in MRI and CT scans.
        Further developing this strength could lead to earlier diagnosis with less-invasive methods, not only saving countless lives and prolonging remaining quality life time for individuals, but also saving a shit ton of money.

        • msage@programming.dev · 2 hours ago

          Wasn’t it shown that one AI was getting amazing results because it noticed the cancer screens had the doctors’ signatures at the bottom? Or did they do another run with the signatures hidden?

          • mustbe3to20signs@feddit.org · 7 minutes ago

            More than one system has been shown to “cheat” via biased training material. One model told ducks and chickens apart because it was trained on pictures of ducks in the water and chickens on sandy ground, if I remember correctly.
            Since multiple medical image recognition systems are in development, I can’t imagine they’re all this faulty.
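            A toy sketch of that shortcut-learning effect (hypothetical numbers, plain Python, not any real medical system): a lazy learner that just picks whichever single feature best predicts the training labels will latch onto a “signature” marker that tracks the labels perfectly in training, then fall apart once the marker is hidden:

```python
import random

random.seed(0)

def make_data(n, spurious_leaks):
    # Each sample: (true_signal, spurious_marker, label).
    # true_signal matches the label 80% of the time (the real pathology cue);
    # spurious_marker is the "doctor's signature": it matches the label
    # perfectly in training, but is random noise once hidden at test time.
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        true_signal = label if random.random() < 0.8 else 1 - label
        spurious = label if spurious_leaks else random.randint(0, 1)
        data.append((true_signal, spurious, label))
    return data

def accuracy(data, feature_idx):
    # Fraction of samples where the chosen feature equals the label.
    return sum(1 for s in data if s[feature_idx] == s[2]) / len(data)

train = make_data(5000, spurious_leaks=True)
test = make_data(5000, spurious_leaks=False)

# A lazy learner: pick the single feature with the best training accuracy.
# It picks the spurious marker (100% on train) over the real signal (~80%).
best = max([0, 1], key=lambda i: accuracy(train, i))

print(accuracy(train, best))  # 1.0: the signature perfectly predicts training labels
print(accuracy(test, best))   # collapses to ~0.5 once the signatures are hidden
print(accuracy(test, 0))      # the real signal still holds at ~0.8
```

            Hiding the signatures at test time is exactly the kind of re-run that exposes the cheat: the honest 80% feature keeps working, while the shortcut drops to coin-flip accuracy.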

        • T156@lemmy.world · 6 hours ago

          That is a different kind of machine learning model, though.

          You can’t just plug your pathology images into their multimodal generative models and expect them to pop out something usable.

          And those image recognition models aren’t something OpenAI is currently working on, iirc.

          • Petter1@lemm.ee · 1 hour ago

            The fun thing is, most of the things AI can do now were never planned; all they were trying to build was an autocompletion tool.

          • mustbe3to20signs@feddit.org · 5 hours ago

            I’m fully aware that those are different machine learning models, but instead of focusing on LLMs, which have only limited use for mankind, advancing image recognition models would have been much better.

          • TFO Winder@lemmy.ml · 4 hours ago

            Don’t know about image recognition, but they did release DALL-E, which is an image generation and inpainting model.

  • halcyoncmdr@lemmy.world · 12 hours ago

    You know guys, I’m starting to think what we heard about Altman when he was removed a while ago might actually have been real.

    /s

  • NeoNachtwaechter@lemmy.world · 13 hours ago

    Altman downplayed the major shakeup.

    "Leadership changes are a natural part of companies."

    Is he just trying to tell us he is next?

    /s

    • xavier666@lemm.ee · 7 hours ago

      Sam: “Most of our execs have left. So I guess I’ll take the major decisions instead. And since I’m so humble, I’ll only be taking 80% of their salary. Yeah, no need to thank me”

    • Avg@lemm.ee · 12 hours ago

      The CEO at my company said that 3 years ago; we are going through execs like I go through amlodipine.