Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post. There's no quota for posting and the bar really isn't that high.

The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.

  • gerikson@awful.systems · 4 months ago

    This came up in a podcast I listen to:

    WaPo: "OpenAI illegally barred staff from airing safety risks, whistleblowers say "

    archive link: https://archive.is/E3M2p

    OpenAI whistleblowers have filed a complaint with the Securities and Exchange Commission alleging the artificial intelligence company illegally prohibited its employees from warning regulators about the grave risks its technology may pose to humanity, calling for an investigation.

    While I'm not prepared to defend OpenAI here, I suspect this is just to shut up the most hysterical employees who still actually believe they're building the P(doom) machine.

    • scruiser@awful.systems · 4 months ago

      I mean, if you play up the doom to hype yourself, dealing with employees who take that seriously feels like a deserved outcome.

    • imadabouzu@awful.systems · 4 months ago

      Short story: it's smoke and mirrors.

      Longer story: This is how software releases work now, I guess. A lot is riding on OpenAI's anticipated release of GPT-5. They have to keep promising enormous leaps in capability because everyone else has caught up and there's no more training data. So the next trick is that for their next batch of models they have "solved" various problems that people say you can't solve with LLMs, and they are going to be massively better without needing more data.

      But, as someone with insider info, it's all smoke and mirrors.

      The model that ā€œsolvedā€ structured data is emperically worse at other tasks as a result, and I imagine the solution basically just looks like polling multiple response until the parser validates on the other end (so basically itā€™s a price optimization afaik).

      The next large model, launching with the new Q* change tomorrow, is "approaching AGI because it can now reliably count letters," but actually it's still just agents (Q* looks to be just a cost optimization of agents on the backend, that's basically it), because the only way it can count letters is to invoke agents and tool use to write a Python program and feed the text into that. Basically, it is all things that already exist independently, just wrapped up together. Interestingly, they're so confident in this model that they don't run the resulting Python themselves. It's still up to you, or one of those LLM wrapper companies, to execute the (likely broken from time to time) code to, um, checks notes, count the number of letters in a sentence.
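
      To put the scale of the "breakthrough" in perspective, the tool the model is supposedly writing and handing back for you to run is on the order of a couple of lines of Python, something like this (purely illustrative, not OpenAI's actual generated code):

      ```python
      def count_letter(text: str, letter: str) -> int:
          """Count occurrences of `letter` in `text`, case-insensitively."""
          return text.lower().count(letter.lower())

      # the classic example that trips up raw LLMs working on tokens
      print(count_letter("strawberry", "r"))  # -> 3
      ```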

      But, by rearranging what already exists and claiming it solved the fundamental issues, OpenAI can claim exponential progress, terrify investors into blowing more money into the ecosystem, and make true believers lose their minds.

      Expect more of this around GPT-5, which they promise "is so scary they can't release it until after the elections." My guess? It's nothing different, but they have to create a story so that true believers will see it as something different.

      • gerikson@awful.systems · 4 months ago

        Yeah, I'm not in any doubt that the C-level and marketing team are goosing the numbers like crazy to keep the bubble from bursting, but I also think they're the ones that are most cognizant of the fact that ChatGPT is definitely not the Doom Machine. But I also believe they have employees who they cannot fire because they would spread a hella lot of doomspeak if they did, who are True Believers.

        • BlueMonday1984@awful.systems · 4 months ago

          I also believe they have employees who they cannot fire because they would spread a hella lot of doomspeak if they did, who are True Believers.

          Part of me suspects they probably also aren't the sharpest knives in OpenAI's drawer.

          • imadabouzu@awful.systems · 4 months ago

            It can be both. Like, OpenAI is probably kind of hoping that this story spreads widely and is taken seriously, and has no problem suggesting, implicitly and explicitly, that their employees' stock is tied to how scared everyone is.

            Remember when Altman almost got ousted and people got pressured not to walk? That their options were at risk?

            Strange hysteria like this doesn't need just one reason. It just needs an input dependency and ambiguity; the rest takes care of itself.

      • ShakingMyHead@awful.systems · 4 months ago

        Well, it's now yesterday's tomorrow and while there's an update I'm not seeing a Q* announcement.

        • imadabouzu@awful.systems · 4 months ago

          Q*

          My understanding is that it was renamed or rebranded to Strawberry, which is itself nebulous marketing: maybe it's the new larger model, or maybe it's GPT-5, or maybe…

          it's all smoke and mirrors. I think my point is, they made some cost optimizations and mostly moved around things that existed, and they'll keep doing that.

          • froztbyte@awful.systemsOP · 4 months ago

            OH

            I first saw this, then later saw the "openai employees tweeted 🍓" stuff and thought the latter was them being cheeky dipshits about the former. Admittedly I didn't look deeper (because ugh).

            but this is even more hilarious and dumb