Archived version

The best clue might come from a 2022 paper written by the Anthropic team back when their startup was just a year old. They warned that the incentives in the AI industry — think profit and prestige — will push companies to “deploy large generative models despite high uncertainty about the full extent of what these models are capable of.” They argued that, if we want safe AI, the industry’s underlying incentive structure needs to change.

Well, at three years old, Anthropic is now the age of a toddler, and it’s experiencing many of the same growing pains that afflicted its older sibling OpenAI. In some ways, they’re the same tensions that have plagued all Silicon Valley tech startups that start out with a “don’t be evil” philosophy. Now, though, the tensions are turbocharged.

An AI company may want to build safe systems, but in such a hype-filled industry, it faces enormous pressure to be first out of the gate. The company needs to pull in investors to supply the gargantuan sums of money needed to build top AI models, and to do that, it needs to satisfy them by showing a path to huge profits. Oh, and the stakes — should the tech go wrong — are much higher than with almost any previous technology.

So a company like Anthropic has to wrestle with deep internal contradictions, and ultimately faces an existential question: Is it even possible to run an AI company that advances the state of the art while also truly prioritizing ethics and safety?

“I don’t think it’s possible,” futurist Amy Webb, the CEO of the Future Today Institute, told me a few months ago.

  • t3rmit3@beehaw.org · 5 months ago

    what possible morality can you build into the gun to prevent immoral use?

    I mean, there actually are a bunch of things you could do. There are biometric-restricted guns that attempt to ensure only authorized users can fire them. That is one means of preventing a specific immoral use: firing a stolen weapon.

    The argument for limiting magazine capacity is that it prevents using the gun to kill as many people as you otherwise could with a larger magazine, which is certainly worse, in moral terms.

    More relevant to AI, with our current tech you could have a camera on the barrel of a hunting rifle that is running an object recognition algorithm that would only allow the gun to fire if a deer or other legally authorized animal was visible, and would not allow it to fire if any unauthorized target, such as a person, was also present. Obviously hypothetical, but perfectly possible.
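
    Something like this, as a rough sketch (the detect_objects() helper, class names, and threshold are made up for illustration, not any real fire-control system):

    ```python
    # Rough sketch of the hypothetical fire-control gate described above.
    # detect_objects() stands in for any object-recognition model; the class
    # names, threshold, and trigger wiring are all made up for illustration.
    AUTHORIZED_TARGETS = {"deer", "elk"}   # legally huntable species
    FORBIDDEN_TARGETS = {"person", "dog"}  # never fire if any of these appear

    def fire_is_permitted(frame, detect_objects) -> bool:
        """Allow the trigger only if an authorized animal is in view and no
        forbidden target appears anywhere in the frame."""
        detections = detect_objects(frame, confidence_threshold=0.9)
        labels = {d.label for d in detections}
        if labels & FORBIDDEN_TARGETS:
            return False
        return bool(labels & AUTHORIZED_TARGETS)
    ```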

    There are lots of tools that include technical controls to attempt to prevent misuse, whether intentional or not.

    An object doesn’t have to have cognition that it is trying to do something moral, in order to be performing a moral function.

    • MagicShel@programming.dev · 5 months ago

      There are biometric-restricted guns that attempt to ensure only authorized users can fire them.

      This doesn’t prevent an authorized user from committing murder. It would, however, prevent someone from looting it off of your corpse and returning fire at an attacker.

      This is not a great analogy for AI, but either way it’s still effectively amoral.

      The argument for limiting magazine capacity is that it prevents using the gun to kill as many people as you otherwise could with a larger magazine, which is certainly worse, in moral terms.

      This is closer. Still not a great analogy for AI, but we can agree that, outside of military and police action, a larger magazine’s most likely use is mass murder rather than some legitimate alternative. That being said, ask a Ukrainian how moral it would be to go up against Russian soldiers with a 5-round mag.

      I feel like you’re focused too narrowly on the gun itself and not the gun as an analogy for AI.

      you could have a camera on the barrel of a hunting rifle that is running an object recognition algorithm that would only allow the gun to fire if a deer or other legally authorized animal was visible

      This isn’t bad. We can currently use AI to examine the output of an AI and infer things about the nature of what was asked and what was produced. It’s definitely effective in my experience; the trick is knowing what questions to ask about in the first place. For example, OpenAI has a moderation tool for identifying violence, hate, sexual content, child sexual content, and I think a couple of other categories. This is promising; however, it is an external tool, so I don’t have to run that filter if I don’t want to. The API is currently free to use, and a project I’m working on does use it, because it allows the use case we want to allow (describing and adjudicating violent actions in a chat-based RPG) while still allowing us to filter out more intimate roleplaying actions.
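
      As a rough sketch of how that kind of external filter can work (assuming OpenAI’s moderation endpoint via the current Python SDK; the category choices here are illustrative, not our project’s actual rules):

      ```python
      # Hedged sketch: call the moderation endpoint and decide whether a message
      # is acceptable for a combat-oriented RPG. Violence flags are deliberately
      # ignored; sexual/hate/harassment content is rejected. Illustrative only.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      def allow_rpg_message(text: str) -> bool:
          result = client.moderations.create(input=text).results[0]
          cats = result.categories
          return not (cats.sexual or cats.hate or cats.harassment)

      allow_rpg_message("The orc swings its axe at the knight.")  # likely True
      ```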

      An object doesn’t have to have cognition that it is trying to do something moral, in order to be performing a moral function.

      The object needs that cognition to differentiate between allowing moral use and denying immoral use. Otherwise you need an external tool for that, or perhaps a law. But none of that interferes with the use of the tool itself.

      • t3rmit3@beehaw.org · 5 months ago

        But none of that interferes with the use of the tool itself.

        But it literally does. If my goal is to use someone else’s gun to kill someone, and the gun has a biometric lock, that absolutely interferes with the use (for unlawful shooting) of the gun.

        Wrt AI, if someone’s goal is to use a model that e.g. OpenAI operates to build a bomb, an external control that prevents that is just as good as the AI model itself having some kind of baked-in control.

        • MagicShel@programming.dev · 5 months ago

          Again, a biometric lock neither prevents immoral use nor allows moral use outside of its very narrow conditions. It’s effectively an amoral tool: it presumes anything you do with your own gun will be moral, and that other uses are either immoral or unlikely enough not to worry about.

          AI has a lot more uses than a gun, and just because someone has an idea for using it that falls outside the preconceived parameters doesn’t mean that use should be presumed immoral and blocked.

          Further, the biometric lock analogy falls apart when you consider that an LLM is a broad-scoped tool for use by everyone, while your personal weapon can be scoped very narrowly to you.

          Consider a gun model that can only be fired by left-handed people, because most gun crimes are committed by right-handed people. Yeah, you’re ostensibly preventing 90% of immoral use of the weapon, but at the cost of it no longer being a useful tool for most people.

          • t3rmit3@beehaw.org · 5 months ago

            Not every safety control needs to solve every safety issue. Almost all safety controls are narrowly tailored to one threat model. You’re essentially just arguing that if a safety control doesn’t solve everything, it’s not worth it.

            LLMs being a tool that is so widely available is precisely why they need more built-in safety. The more dangerous a tool is, the more likely it is to be restricted to only professional or otherwise licensed users or businesses. Arguing against safety controls being built into LLMs is just going to accelerate their regulation.

            Whether you agree with that mentality or not, we live in a Statist world, and protection of its constituent people from themselves and others is the (ostensible) primary function of a State.

            • MagicShel@programming.dev · 5 months ago

              Not exactly. My argument is that the more safety controls you build into the model, the less useful the model is at anything. The more you bend the responses away from what is true (whatever that is), the less of the tool you have.

              Whether you agree with that mentality or not, we live in a Statist world, and protection of its constituent people from themselves and others is the (ostensible) primary function of a State.

              Yeah, I agree with that, but I’m saying: protect people from the misuse of the tool. Don’t break the tool to the point where it’s worthless.