This says more about us than it does about the chatbots, considering the data on which they’re trained…
Yeah, it says that we write a lot of fiction about AI launching nukes and behaving unpredictably in wargames, such as the movie WarGames, where an AI nearly starts a nuclear war.
Every single one of the LLMs they tested had gone through safety fine-tuning, which means they've been trained to self-identify as a large language model and answer requests in that persona.
So if the training data is full of stereotypes about AI launching nukes, you get the model to answer as an AI, and then you ask it what it should do in a wargame, WTF did they think it was going to answer?
I’d say it does to an extent, dependent on the source material. If they were trained on actual military strategy and tactics as their source material, with proper context, I’d wager the responses would likely be different.
Totally. A properly trained AI would probably just flood a country with misinformation to trigger a civil war. After installing a puppet government, it could leverage that country’s resources against its other enemies.
Maybe… But, hear me out, what if it means you can win nuclear wars? 🤔
Lol … to an AI, humans on any and all sides can’t win a nuclear war … but AI can.
I hope it’s obvious I was being hugely tongue in cheek
What if there are no winners in any wars?
Let’s think here… I’ve always heard history is written by the victors, which logically implies historians are the most dangerous people on the planet and ought to be detained. 🧐