Archived version

The best clue might come from a 2022 paper written by the Anthropic team back when their startup was just a year old. They warned that the incentives in the AI industry — think profit and prestige — will push companies to “deploy large generative models despite high uncertainty about the full extent of what these models are capable of.” They argued that, if we want safe AI, the industry’s underlying incentive structure needs to change.

Well, at three years old, Anthropic is now the age of a toddler, and it’s experiencing many of the same growing pains that afflicted its older sibling OpenAI. In some ways, they’re the same tensions that have plagued all Silicon Valley tech startups that start out with a “don’t be evil” philosophy. Now, though, the tensions are turbocharged.

An AI company may want to build safe systems, but in such a hype-filled industry, it faces enormous pressure to be first out of the gate. The company needs to pull in investors to supply the gargantuan sums of money needed to build top AI models, and to do that, it needs to satisfy them by showing a path to huge profits. Oh, and the stakes — should the tech go wrong — are much higher than with almost any previous technology.

So a company like Anthropic has to wrestle with deep internal contradictions, and ultimately faces an existential question: Is it even possible to run an AI company that advances the state of the art while also truly prioritizing ethics and safety?

“I don’t think it’s possible,” futurist Amy Webb, the CEO of the Future Today Institute, told me a few months ago.

  • Kichae@lemmy.ca · 4 months ago

    “The government needs to stop people from doing a capitalism, but it had better not stop anyone from doing a capitalism, that would be tyranny.”

  • MagicShel@programming.dev · 4 months ago

    LLMs are non-deterministic. “What they are capable of” is stringing words together in a reasonable facsimile of knowledge. That’s it. The end.

    Some might be better at it than others but you can’t ever know the full breadth of words it might put together. It’s like worrying about what a million monkeys with a million typewriters might be capable of, or worrying about how to prevent them from typing certain things - you just can’t. There is no understanding about ethics or morality and there can’t possibly be.

    What are people expecting here?

    • The Bard in Green@lemmy.starlightkel.xyz · 4 months ago

      If those words are connected to some automated system that can accept them as commands…

      For instance, some idiot entrepreneur was talking to me recently about whether it was feasible to put an LLM on an unmanned spacecraft in cis-lunar space (I consult with the space industry) in order to give it operational control of on-board systems based on real-time telemetry. I told him about hallucination and asked him what he thinks he’s going to do when the model registers some false positive in response to a system fault… Or even what happens to a model when you bombard its long-term storage with the kind of cosmic particles that cause random bit flips (this is a real problem for software in space) and how that might change its output?

      Now, I don’t think anyone’s actually going to build something like that anytime soon (then again the space industry is full of stupid money), but what about putting models in charge of semi-autonomous systems here on earth? Or giving them access to APIs that let them spend money or trade stocks or hire people on mechanical Turk? Probably a bunch of stupid expensive bad decisions…

      Speaking of stupid expensive bad decisions, has anyone embedded an LLM in the Ethereum blockchain and given it access to smart contracts yet? I bet investors would throw stupid money at that…

      • MagicShel@programming.dev · 4 months ago

        That’s hilarious. I love LLMs, but an LLM is a tool, not a product, and everyone trying to make one a standalone thing is going to be sorely disappointed.

    • sweng@programming.dev · 4 months ago

      While an LLM itself has no concept of morality, it’s certainly possible to at least partially inject/enforce some morality when working with them, just like any other tool. Why wouldn’t people expect that?

      Consider guns: while they have no concept of morality, we still apply certain restrictions to them to make using them in an immoral way harder. Does it work perfectly? No. Should we abandon all rules and regulations because of that? Also no.

      • MagicShel@programming.dev · 4 months ago

        Yes. Let’s consider guns. Is there any objective way to measure the moral range of actions one can undertake with a gun? No. I can murder someone in cold blood or I can defend myself. I can use it to defend my nation or I can use it to attack another - both of which might be moral or immoral depending on the circumstances.

        You might remove the trigger, but then it can’t be used to feed yourself, while it could still be used to rob someone.

        So what possible morality can you build into the gun to prevent immoral use? None. It’s a tool. It’s the nature of a gun. LLMs are the same. You can write laws about what people can and can’t do with them, but you can’t bake them into the tool and expect the tool now to be safe or useful for any particular purpose.

        • tardigrada@beehaw.org (OP) · 4 months ago

          You can write laws about what people can and can’t do with them, but you can’t bake them into the tool and expect the tool now to be safe or useful for any particular purpose.

          Yes, and that’s why the decision making and responsibility (and accountability) must always rest with the human being imo, especially when we deal with guns. And in health care. And in social policy. And all the other crucial issues.

        • sweng@programming.dev · 4 months ago

          So what possible morality can you build into the gun to prevent immoral use?

          You can’t build morality into it, as I said. You can build functionality into it that makes immoral use harder.

          I can e.g.

          • limit the rounds per minute that can be fired
          • limit the type of ammunition that can be used
          • make it easier to determine which weapon was used to fire a shot
          • make it easier to detect the weapon before it is used
          • etc. etc.

          Society considers e.g. hunting a moral use of weapons, while killing people usually isn’t one.

          So banning ceramic, unmarked, silenced, full-automatic weapons firing armor-piercing bullets can certainly be an effective way of reducing the immoral use of a weapon.

          • MagicShel@programming.dev · 4 months ago

            None of those changes impact the morality of a weapon’s use in any way. I’m happy to dwell on this gun analogy all you like because it’s fairly apt; however, there is one key difference central to my point: there is no way to do the equivalent of banning armor piercing rounds with an LLM or making sure a gun is detectable by metal detectors - because as I said it is non-deterministic. You can’t inject programmatic controls.

            Any tools we have for doing it are outside the LLM itself (the essential truth undercutting everything else) and furthermore even then none of them can possibly understand or reason about morality or ethics any more than the LLM can.

            Let me give an example. I can write the dirtiest, most disgusting smut imaginable on ChatGPT, but I can’t write about a romance which in any way addresses the fact that a character might have a parent or sibling, because the simple juxtaposition of sex and family in the same body of work is considered dangerous. I can write a gangrape on Tuesday, but not a romance with my wife on Father’s Day. It is neither safe from being used in unintended ways, nor capable of being used for a mundane purpose.

            Or go outside of sex. Create an AI that can’t use the N-word. But that word is part of the black experience and vernacular every day, so now the AI becomes less helpful to black users than white ones. Sure, it doesn’t insult them, but it can’t address issues that are important to them. Take away that safety, though, and now white supremacists can use the tool to generate hate speech.

            These examples are all necessarily crude for the sake of readability, but I’m hopeful that my point still comes across.

            I’ve spent years thinking about this stuff and experimenting, trying to break out of any safety controls in both malicious and mundane ways. There’s probably a limit to how well we can see eye to eye on this, but it’s so aggravating to see people focusing on trying to do things that can’t effectively be done instead of figuring out how to adapt to this tool.

            Apologies for any typos. This is long and my phone fucking hates me - no way some haven’t slipped through.

            • sweng@programming.dev · 4 months ago

              there is no way to do the equivalent of banning armor piercing rounds with an LLM or making sure a gun is detectable by metal detectors - because as I said it is non-deterministic. You can’t inject programmatic controls.

              Of course you can. Why would you not, just because it is non-deterministic? Non-determinism does not mean complete randomness and lack of control, that is a common misconception.

              Again, obviously you can’t teach an LLM about morals, but you can reduce the likelihood of producing immoral content in many ways. Of course it won’t be perfect, and of course it may limit the usefulness in some cases, but that is already the case today in many situations that don’t involve AI, e.g. some people complain they “can not talk about certain things without getting cancelled by overly eager SJWs”. Society already acts as a morality filter. Sometimes it works, sometimes it doesn’t. Free-speech maximalists exist, but they are a minority.

              • MagicShel@programming.dev · 4 months ago

                That’s a fair argument about free speech maximalism. And yes, you can influence output, but since the model is non-deterministic and we can’t know precisely what causes certain outputs, we equally can’t fully predict the effect on potentially unrelated output. Great, now it’s harder to talk about sex with kids, but it’s also harder for kids to talk about certain difficult experiences, for example if they’re trying to keep a secret but also need a non-judgmental confidant to help them process a difficult experience.

                Now, is it critical that the AI be capable of that particular conversation when we might prefer it happen with a therapist or law enforcement? That’s getting into moral and ethical questions so deep that I, as a human, struggle with them. It’s fair to believe the benefit of preventing immoral output outweighs the benefit of allowing the other. But I’m not sure that is empirically so.

                I think it’s more useful to us as a society to have an AI that can assume both a homophobic perspective and an ally perspective than one that can’t adopt either or worse, one that is mandated to be homophobic for morality reasons.

                I think it’s more useful to have an AI that can offer religious guidance and also present atheism in a positive light. I think it’s useful to have an AI that can be racist in order to understand how that mind disease thinks and find ways to combat it.

                Everything you try to censor out of an AI has an unknown cost in beneficial uses. Maybe I am overly absolutist in how I see AI. I’ll grant that. It’s just that by the time we think of every malign use to which an AI can be put and censor everything it can possibly say, I think you don’t have a very helpful tool at all any more.

                I use ChatGPT a fair bit. It’s helpful with many things and even certain types of philosophical thought experiments. But it’s so frustrating to run into these safety rails and have to constrain my own ADHD-addled thoughts over such mundane things. That was what got me going down the road of exploring the most awful outputs I could get and the most mundane sorts of things it can’t do.

                That’s why I say you can’t effectively censor the bad stuff, because you lose a huge benefit of being able to bounce thoughts off of a non-judgmental response. I’ve tried to deeply explore subjects like racism and abuse recovery, to work through thought experiments like alternate moral systems, and to have a foreign culture explained to me without judgment when I accidentally repeat some ignorant stereotype.

                Yeah, I know, we’re just supposed to write code or silly song lyrics or summarize news articles. It’s not a real person with real thoughts and it hallucinates. I understand all that, but I’ve brainstormed and rubber ducked all kinds of things. Not all of them have been unproblematic because that’s just how my brain is. I can ask things like, is unconditional acceptance of a child always for the best or do they need minor things to rebel against? And yeah I have those conversations knowing the answers and conclusions are wildly unreliable, but it still helps me to have the conversation in the first place to frame my own thoughts, perhaps to have a more coherent conversation with others about it later.

                It’s complicated and I’d hate to stamp out all of these possibilities out of an overabundance of caution before we really explore how these tools can help us with critical thinking or being exposed to immoral or unethical ideas in a safe space. Maybe arguing with an AI bigot helps someone understand what to say in a real situation. Maybe dealing with hallucination teaches us critical thinking skills and independence rather than just nodding along to groupthink.

                I’ve ventured way further into should we than could we and that wasn’t my intent when I started, but it seems the questions are intrinsically linked. When our only tool for censoring an AI is to impair the AI, is it possible to have a moral, ethical AI that still provides anything of value? I emphatically believe the answer is no.

                But your point about free speech absolutism is well made. I see AI as more of a thought tool than something that provides an actual thing of value. And so I think working with an AI is more akin to thoughts, while what you produce and share with its assistance is the actual action that can and should be policed.

                I think this is my final word here. We aren’t going to hash out morality in this conversation, and mine isn’t the only opinion with merit. Have a great day.

            • t3rmit3@beehaw.org · 4 months ago

              I will take a different tack than sweng.

              You can’t inject programmatic controls.

              I think that this is irrelevant. Whether a safety mechanism is intrinsic to the core functioning of something, or bolted on purely for safety purposes, it is still a limiter on that thing’s function, to attempt to compel moral/safe usage.

              None of those changes impact the morality of a weapon’s use in any way.

              Any action has 2 different moral aspects:

              • the morality of the actor’s intent
              • the morality of the outcome of the action

              Of course it is impossible to change the moral intent of an actor. But the LLM is not the actor, it is the tool used by an actor.

              And you can absolutely change the morality of the outcome of an action (i.e. said weapon use) by limiting the possible damage from it.

              Given that a tool is the means by which the actor attempts to take an action, it is also an appropriate place for safety controls that attempt to enforce a more moral outcome to reside.

              • MagicShel@programming.dev · 4 months ago

                I think I’ve said a lot in comments already and I’ll leave all that without relitigating it just for argument’s sake.

                However, I wonder if I haven’t made clear that I’m drawing a distinction between the model that generates the raw output, and perhaps the application that puts the model to use. I have an application that generates output via OAI API and then scans both the prompt and output to make sure they are appropriate for our particular use case.

                Yes, my product is 100% censored and I think that’s fine. I don’t want the customer service bot (which I hate but that’s an argument for another day) at the airline to be my hot AI girlfriend. We have tools for doing this and they should be used.

                But I think the models themselves shouldn’t be heavily steered because it interferes with the raw output and possibly prevents very useful cases.

                So I’m just talking about fucking up the model itself in the name of safety. ChatGPT walks a fine line because it’s a product, not a model, but without access to the raw model it needs to be relatively unfiltered to be of use; otherwise other models will make better tools.
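                As a minimal sketch of that split between the raw model and the application around it (everything here is hypothetical: `generate` stands in for the model call and `is_appropriate` for whatever policy check a given product needs, not any real API):

                ```python
                REFUSAL = "Sorry, I can't help with that here."

                def is_appropriate(text: str) -> bool:
                    # Hypothetical policy check: a keyword list, a classifier, or a call to a
                    # moderation endpoint. It lives in the application, outside the model.
                    return True  # placeholder until a real check is wired in

                def generate(prompt: str) -> str:
                    # Hypothetical call to an unmodified, unsteered model.
                    return f"(model reply to: {prompt})"  # placeholder

                def answer(prompt: str) -> str:
                    if not is_appropriate(prompt):   # screen the user's prompt
                        return REFUSAL
                    reply = generate(prompt)         # raw model output, no baked-in steering
                    if not is_appropriate(reply):    # screen the output as well
                        return REFUSAL
                    return reply
                ```

                The censorship lives entirely in `answer()`; the model underneath stays general-purpose.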

          • snooggums · 4 months ago

            Those changes reduce lethality or improve identification. They have nothing to do with morality and do NOT reduce the chance of immoral use.

            • sweng@programming.dev · 4 months ago

              Well, I, and most lawmakers in the world, disagree with you then. Those restrictions certainly make e.g. killing humans harder (generally considered an immoral activity) while not affecting e.g. hunting (generally considered a moral activity).

              • snooggums · 4 months ago

                They can make killing multiple people in specific locations more difficult, but they do nothing to keep someone from being able to fire a single bullet for an immoral reason, hence the difference between lethality and identification on the one hand and morality on the other.

                The Vegas shooting would not have been less immoral if a single person or nobody died. There is a benefit to reduced lethality, especially against crowds. But again, reduced lethality doesn’t reduce the chance of being used immorally.

        • t3rmit3@beehaw.org · 4 months ago

          what possible morality can you build into the gun to prevent immoral use?

          I mean, there actually are a bunch of things you could do. There are biometric-restricted guns that attempt to ensure only authorized users can fire them. That is a means to prevent immoral use related to a stolen weapon.

          The argument for limiting magazine capacity is that it prevents using the gun to kill as many people as you otherwise could with a larger magazine, which is certainly worse, in moral terms.

          More relevant to AI, with our current tech you could have a camera on the barrel of a hunting rifle that is running an object recognition algorithm that would only allow the gun to fire if a deer or other legally authorized animal was visible, and not allow it to fire if any non-authorized animals like people were present as well. Obviously hypothetical, but perfectly possible.
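          A toy sketch of that hypothetical gate in Python, just to show the shape of the control (`detect_objects` stands in for whatever vision model the rifle would carry; nothing here is a real product):

          ```python
          AUTHORIZED_TARGETS = {"deer", "elk"}

          def detect_objects(frame) -> set[str]:
              # Hypothetical: run an object-recognition model on the camera frame
              # and return the set of class labels it sees.
              return set()  # placeholder

          def trigger_enabled(frame) -> bool:
              seen = detect_objects(frame)
              # Fire only if at least one authorized animal is visible and nothing
              # unauthorized (e.g. a person) is in the frame.
              return bool(seen) and seen <= AUTHORIZED_TARGETS
          ```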

          There are lots of tools that include technical controls to attempt to prevent misuse, whether intentional or not.

          An object doesn’t have to have cognition that it is trying to do something moral, in order to be performing a moral function.

          • MagicShel@programming.dev · 4 months ago

            There are biometric-restricted guns that attempt to ensure only authorized users can fire them.

            This doesn’t prevent an authorized user from committing murder. It would prevent someone from looting it off of your corpse and returning fire to an attacker.

            This is not a great analogy for AI, but it’s still effectively amoral anyway.

            The argument for limiting magazine capacity is that it prevents using the gun to kill as many people as you otherwise could with a larger magazine, which is certainly worse, in moral terms.

            This is closer. Still not a great analogy for AI, but we can agree that outside of military and police action, mass murder is more likely than an alternative. That being said, ask a Ukrainian how moral it would be to go up against Russian soldiers with a 5-round mag.

            I feel like you’re focused too narrowly on the gun itself and not the gun as an analogy for AI.

            you could have a camera on the barrel of a hunting rifle that is running an object recognition algorithm that would only allow the gun to fire if a deer or other legally authorized animal was visible

            This isn’t bad. We can currently use AI to examine the output of an AI to infer things about the nature of what is being asked and of the output. It’s definitely effective in my experience. The trick is knowing what questions to ask about in the first place. But for example OAI has a tool for identifying violence, hate, sexuality, child sexuality, and I think a couple of others. This is promising; however, it is an external tool. I don’t have to run that filter if I don’t want to. The API is currently free to use, and a project I’m working on does use it because it allows the use case we want to allow (describing and adjudicating violent actions in a chat-based RPG) while still letting us filter out more intimate roleplaying actions.
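            As a rough sketch of that kind of external filter, using the OpenAI Python SDK (the category names are illustrative and worth checking against the current moderation docs; `passes_filter` and `adjudicate` are hypothetical helpers, not anyone’s real product code):

            ```python
            from openai import OpenAI

            client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

            def passes_filter(text: str) -> bool:
                """Allow violent RPG descriptions but reject sexual content."""
                result = client.moderations.create(input=text).results[0]
                categories = result.categories
                # Deliberately ignore the violence flags for the RPG use case and
                # reject only on the sexual-content categories.
                if categories.sexual or categories.sexual_minors:
                    return False
                return True

            def adjudicate(prompt: str, reply: str) -> bool:
                # Run the check on both the player's prompt and the model's reply.
                return passes_filter(prompt) and passes_filter(reply)
            ```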

            An object doesn’t have to have cognition that it is trying to do something moral, in order to be performing a moral function.

            The object needs that cognition to differentiate between allowing moral use and denying immoral use. Otherwise you need an external tool for that. Or perhaps a law. But none of that interferes with the use of the tool itself.

            • t3rmit3@beehaw.org · 4 months ago

              But none of that interferes with the use of the tool itself.

              But it literally does. If my goal is to use someone else’s gun to kill someone, and the gun has a biometric lock, that absolutely interferes with the use (for unlawful shooting) of the gun.

              Wrt AI, if someone’s goal is to use a model that e.g. OpenAI operates to build a bomb, an external control that prevents it is just as good as the AI model itself having some kind of baked-in control.

              • MagicShel@programming.dev · 4 months ago

                Again a biometric lock neither prevents immoral use nor allows moral use outside of its very narrow conditions. It’s effectively an amoral tool. It presumes anything you do with your gun will be moral and other uses are either immoral or unlikely enough to not bother worrying about.

                AI has a lot of uses compared to a gun and just because someone has an idea for using it that is outside of the preconceived parameters doesn’t mean it should be presumed to be immoral and blocked.

                Further, the biometric-lock analogy falls apart when you consider that an LLM is a broad-scoped tool for use by everyone, while your personal weapon can be very narrowly scoped to you.

                Consider a gun model that can only be fired by left-handed people because most gun crimes are committed by right-handed people. Yeah, you’re ostensibly preventing 90% of immoral use of the weapon, but at the cost of it no longer being a useful tool for most people.

                • t3rmit3@beehaw.org · 4 months ago

                  Not every safety control needs to solve every safety issue. Almost all safety controls are narrowly-tailored to one threat model. You’re essentially just arguing that if a safety control doesn’t solve everything, it’s not worth it.

                  LLMs being a tool that is so widely available is precisely why they need more built-in safety. The more dangerous a tool is, the more likely it is to be restricted to only professional or otherwise licensed users or businesses. Arguing against safety controls being built into LLMs is just going to accelerate their regulation.

                  Whether you agree with that mentality or not, we live in a Statist world, and protection of its constituent people from themselves and others is the (ostensible) primary function of a State.

    • ericjmorey@programming.dev · 4 months ago

      I’m expecting that everything the statistical models reveal, or produce convincing results about, will be exploited when it benefits the owners of the models. Anything that threatens power or the model owners will be largely ignored and dismissed.

    • chicken@lemmy.dbzer0.com · 4 months ago

      They are deterministic though, in a literal sense; rather, their behavior is undefined. And yes, an LLM is not a person and it’s not quite accurate to talk about them knowing or understanding things. So what, though? Why would that be any sort of evidence that research efforts into AI safety are futile? This is at least as much of an engineering problem as a philosophy problem.

      • MagicShel@programming.dev · 4 months ago

        The output for a given input cannot be independently calculated as far as I know, particularly when random seeds are part of the input. How is that deterministic?

        The “so what” is that trying to prevent certain outputs based on moral judgements isn’t possible. It wouldn’t really be possible even if you could get in there with code and change things, unless you could write code for morality, but it’s doubly impossible given you can’t.

        • chicken@lemmy.dbzer0.com · 4 months ago

          The output for a given input cannot be independently calculated as far as I know, particularly when random seeds are part of the input.

          The system gives a probability distribution for the next word based on the prompt, which will always be the same for a given input. That meets the definition of deterministic. You might choose to add non-deterministic RNG to the input or output, but that would be a choice and not something inherent to how LLMs work. Random ‘seeds’ are normally used as part of deterministically repeatable RNG. I’m not sure what you mean by “independently” calculated: you can calculate the output if you have the model weights, and you likely can’t if you don’t, but that doesn’t affect how deterministic it is.
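          To make the point concrete, here is a minimal toy sketch in Python (the “model” below is a hash-based stand-in, not a real LLM): the next-token distribution is a pure function of the prompt, and randomness only enters through the sampler’s seed, which the caller controls.

          ```python
          import hashlib
          import numpy as np

          VOCAB = ["the", "cat", "sat", "on", "mat"]

          def next_token_distribution(prompt: str) -> np.ndarray:
              # Stand-in for a trained model: the "logits" are derived purely from
              # the prompt, so the same prompt always yields the same distribution.
              digest = hashlib.sha256(prompt.encode()).digest()
              logits = np.frombuffer(digest[: 4 * len(VOCAB)], dtype=np.uint32).astype(float)
              exp = np.exp(logits / logits.max())
              return exp / exp.sum()

          def sample_next_token(prompt: str, seed: int) -> str:
              probs = next_token_distribution(prompt)  # deterministic given the prompt
              rng = np.random.default_rng(seed)        # randomness enters only here
              return VOCAB[rng.choice(len(VOCAB), p=probs)]

          # Same prompt and same seed give the same token, run after run.
          assert sample_next_token("the cat", seed=0) == sample_next_token("the cat", seed=0)
          # Greedy decoding needs no randomness at all.
          greedy = VOCAB[int(np.argmax(next_token_distribution("the cat")))]
          ```

          Real inference stacks differ in the details (GPU floating-point reductions can add run-to-run noise, for instance), but the sampling seed is the usual source of the apparent non-determinism.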

            The “so what” is that trying to prevent certain outputs based on moral judgements isn’t possible. It wouldn’t really be possible even if you could get in there with code and change things, unless you could write code for morality, but it’s doubly impossible given you can’t.

          The impossibility of defining morality in precise terms, or even coming to an agreement on what correct moral judgment even is, obviously doesn’t preclude all potentially useful efforts to apply it. For instance, since there is a general consensus that people being electrocuted is bad, electrical cables are normally made with their conductive parts encased in non-conductive material, a practice that is successful in reducing how often people get electrocuted. Why would that sort of thing be uniquely impossible for LLMs? Just because they are logic-processing systems that are more grown than engineered? Because they are sort of anthropomorphic but aren’t really people? The reasoning doesn’t follow. What people are complaining about here is that AI companies are not making these efforts a priority, and it’s a valid complaint, because it isn’t the case that these systems are going to be equally dangerous no matter how they are made or used.

  • FaceDeer@fedia.io · 4 months ago

    It’s impossible to run an AI company “ethically” because “ethics” is such a wibbly-wobbly and subjective thing, and because there are people who simply wish to use it as a weapon on one side of a debate or the other. I’ve seen goalposts shift around quite a lot in arguments over “ethical” AI.