OpenAI was working on advanced model so powerful it alarmed staff::Reports say new model Q* fuelled safety fears, with workers airing their concerns to the board before CEO Sam Altman’s sacking
The sensationalized headline aside, I wish people would stop being so dismissive about reports of advancement here. Nobody but those at the fringes is freaking out about sentience, and there are plenty of domains where small improvements in the models can fuck up big parts of our defense, privacy, and infrastructure if they pan out. It really doesn’t matter whether a computer has subjective experience if it’s able to decrypt AES-192 or identify keystrokes from an audio recording.
We need to be talking about what happens after AI becomes competent at even a handful of these tasks, and it really doesn’t inspire confidence when every bit of news is received with a “LOL computers aren’t conscious GTFO”.
That’s why I hate it when people retort “GPT isn’t even that smart, it’s just an LLM.” Like yeah, the machines being malevolent is not what I’m worried about; it’s the incompetent and malicious humans behind them. Everyone from scam mailers to propagandists to law enforcement is testing the waters with these “not so smart” models and getting incredible (for them) results. Misinformation is going to be an even bigger problem when it’s this hard to know what to believe.
Also: “yeah, what are people’s minds, really?” The fact that we can’t really categorize our own minds doesn’t mean we’re forever superior to any categorized AI model. The mere fact that the current bleeding edge is called an LLM doesn’t mean it can’t fuck with us — especially if an even more powerful one comes along in the future.