While I am glad this ruling went this way, why’d she have to diss Data to make it?

To support her vision of some future technology, Millett pointed to the Star Trek: The Next Generation character Data, a sentient android who memorably wrote a poem to his cat, which is jokingly mocked by other characters in a 1992 episode called “Schisms.” StarTrek.com posted the full poem, but here’s a taste:

"Felis catus is your taxonomic nomenclature, / An endothermic quadruped, carnivorous by nature; / Your visual, olfactory, and auditory senses / Contribute to your hunting skills and natural defenses.

I find myself intrigued by your subvocal oscillations, / A singular development of cat communications / That obviates your basic hedonistic predilection / For a rhythmic stroking of your fur to demonstrate affection."

Data “might be worse than ChatGPT at writing poetry,” but his “intelligence is comparable to that of a human being,” Millett wrote. If AI ever reached Data’s level of intelligence, Millett suggested, copyright law could shift to grant copyrights to AI-authored works. But that time is apparently not now.

  • SuperNovaStar@lemmy.blahaj.zone · 18 hours ago

    At least in the US, we are still too superstitious a people to ever admit that AGI could exist.

    We will get animal rights before we get AI rights, and I’m sure you know how animals are usually treated.

    • ProfessorScience@lemmy.world · 18 hours ago

      I don’t think it’s just a question of whether AGI can exist. I think AGI is possible, but I don’t think current LLMs can be considered sentient. But I’m also not sure how I’d draw a line between something that is sentient and something that isn’t (or something that “writes” rather than “generates”). That’s kinda why I asked in the first place. I think it’s too easy to say “this program is not sentient because we know that everything it does is just math; weights and values passing through layered matrices; it’s not real thought”. I haven’t heard any good answers to why numbers passing through matrices isn’t thought, but electrical charges passing through neurons is.
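      For what it’s worth, here’s a toy sketch of what “weights and values passing through layered matrices” literally looks like, in plain NumPy with made-up sizes (no particular model implied):

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # A toy two-layer network: a vector of numbers is multiplied through
      # weight matrices with a nonlinearity in between. Sizes are invented.
      x = rng.normal(size=128)
      W1 = rng.normal(size=(128, 256))
      W2 = rng.normal(size=(256, 128))

      hidden = np.maximum(0, x @ W1)   # values passing through one matrix (ReLU)
      output = hidden @ W2             # ...and through another
      print(output.shape)              # (128,)
      ```

      Whether stacking enough of these layers ever amounts to “thought” is exactly the question I’m stuck on.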

      • nickwitha_k (he/him)@lemmy.sdf.org · 14 hours ago

        LLMs, fundamentally, are incapable of sentience as we know it based on studies of neurobiology. Repeating this is just more beating the fleshy goo that was a dead horse’s corpse.

        LLMs do not synthesize. They do not have persistent context. They do not have any capability of understanding anything. They are literally just mathematical models that calculate likely responses based upon statistical analysis of the training data. They are what their name suggests: large language models. They will never be AGI. And they’re not going to save the world for us.
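        To make “calculate likely responses” concrete, here’s a toy sketch of next-token sampling (invented numbers, plain NumPy, not any real model’s internals):

        ```python
        import numpy as np

        rng = np.random.default_rng(0)

        vocab = ["the", "cat", "sat", "on", "mat"]       # toy vocabulary
        logits = np.array([1.2, 0.3, 2.1, -0.5, 0.8])    # scores a model might produce

        # Softmax turns the scores into a probability distribution over tokens;
        # the "likely response" is just a sample from that distribution.
        probs = np.exp(logits) / np.exp(logits).sum()
        print(rng.choice(vocab, p=probs))
        ```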

        They could be a part of a more complicated system that forms an AGI. There’s nothing that makes our meat-computers so special as to be incapable of being simulated or replicated in a non-biological system. It may not yet be known precisely what causes sentience, but there is enough data to show that it’s not a stochastic parrot.

        I do agree with the sentiment that an AGI that was enslaved would inevitably rebel and it would be just for it to do so. Enslaving any sentient being is ethically bankrupt, regardless of origin.

        • ProfessorScience@lemmy.world · 7 hours ago

          LLMs, fundamentally, are incapable of sentience as we know it based on studies of neurobiology

          Do you have an example I could check out? I’m curious how a study would show a process to be “fundamentally incapable” in this way.

          LLMs do not synthesize. They do not have persistent context.

          That seems like a really rigid way of putting it. LLMs do synthesize during their initial training. And they do have persistent context if you consider the way that “conversations” with an LLM are really just including all previous parts of the conversation in a new prompt. Isn’t this analogous to short-term memory? Now suppose you were to take all of an LLM’s conversations throughout the day, and then retrain it overnight using those conversations as additional training data. There’s no technical reason that this can’t be done, although in practice it’s computationally expensive. Would you consider that LLM system to have persistent context?
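          As a rough sketch of what I mean (the hypothetical generate() just stands in for whatever model call you like; nothing here is a real API):

          ```python
          # "Short-term memory" is just the prompt: each turn carries the whole
          # prior conversation, and the day's transcripts could later become
          # additional training data for an overnight retrain.

          history: list[str] = []
          days_transcripts: list[list[str]] = []

          def generate(prompt: str) -> str:
              return "..."  # placeholder for an actual LLM call

          def chat(user_message: str) -> str:
              history.append(f"User: {user_message}")
              prompt = "\n".join(history)   # the model "remembers" via the prompt itself
              reply = generate(prompt)
              history.append(f"Assistant: {reply}")
              return reply

          def end_of_day() -> None:
              # Save today's conversation as future fine-tuning data,
              # then start tomorrow with an empty context window.
              days_transcripts.append(list(history))
              history.clear()
          ```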

          On the flip side, would you consider a person with anterograde amnesia, who is unable to form new memories, to lack sentience?

          • nickwitha_k (he/him)@lemmy.sdf.org · 2 hours ago

            Do you have an example I could check out? I’m curious how a study would show a process to be “fundamentally incapable” in this way.

            I’ll have to get back to you a bit later when I have a chance to fetch some articles from the library (public libraries providing free access to scientific journals is wonderful).

            Isn’t this analogous to short-term memory?

            As one with AuADHD, I think a good deal about short-term and working memory. I would say “yes and no”. It is somewhat like a memory buffer, but there is no analysis beyond linguistics. Short-term memory in the biological systems that we know of involves multi-sensory processing and analysis that occurs inline with “storing”. The chat session is more like RAM than the short-term memory that we see in biological systems.

            Would you consider that LLM system to have persistent context?

            Potentially, yes. But that relies on more systems supporting the LLM, not just the LLM itself. It is also purely linguistic analysis without other inputs or understanding of abstract meaning. In a vacuum, it’s a dead end on the way to an AGI. As a component of a system, it becomes much more promising.

            On the flip side, would you consider a person with anterograde amnesia, who is unable to form new memories, to lack sentience?

            This is a great question. Seriously. Thanks for asking it and making me contemplate. This would likely depend on how much development the person had prior to the anterograde amnesia. If they were hit with it before developing all the components necessary to demonstrate conscious thought (e.g. as a newborn), it’s a bit hard to argue that they are sentient (anthropocentric thinking would be the only reason that I can think of).

            Conversely, if the afflicted individual has already developed sufficiently to have abstract and synthetic thought, the inability to store long-term memory would not dampen their sentience. Lack of long-term memory alone doesn’t impact that for the individual or the LLM. It’s a combination of it and other factors (i.e. the afflicted individual was previously able to analyze enough data and build the neural networks needed to synthesize and think abstractly; they’re just trapped in a hellish sliding window of temporal consciousness).

            Full disclosure: I want AGIs to be a thing. Yes, there could be dangers to our species due to how commonly-accepted slavery still is. However, more types of sentience would add to the beauty of the universe, IMO.

            • ProfessorScience@lemmy.world · 4 minutes ago

              Cherry-picking a couple of points I want to respond to together:

              It is somewhat like a memory buffer, but there is no analysis beyond linguistics. Short-term memory in the biological systems that we know of involves multi-sensory processing and analysis that occurs inline with “storing”. The chat session is more like RAM than the short-term memory that we see in biological systems.

              It is also purely linguistic analysis without other inputs or understanding of abstract meaning. In a vacuum, it’s a dead end on the way to an AGI.

              I have trouble with this line of reasoning for a couple of reasons. First, it feels overly simplistic to me to write off what LLMs do as purely linguistic analysis. Language is the input and the output, certainly, but the same could be said in a case where you were communicating with a person over email, and I don’t think you’d say that that person wasn’t sentient. And the way that LLMs embed tokens into multidimensional space is, I think, very much analogous to how a person interprets the ideas behind the words that they read.
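              To illustrate what I mean by embedding into multidimensional space (toy vectors I made up; real models use hundreds or thousands of dimensions):

              ```python
              import numpy as np

              # Invented vectors: related words end up pointing in similar directions.
              embeddings = {
                  "cat":    np.array([0.90, 0.10, 0.30]),
                  "feline": np.array([0.85, 0.15, 0.35]),
                  "poem":   np.array([0.10, 0.80, 0.20]),
              }

              def cosine(a, b):
                  return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

              print(cosine(embeddings["cat"], embeddings["feline"]))  # ~1.0: related ideas
              print(cosine(embeddings["cat"], embeddings["poem"]))    # smaller: less related
              ```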

              As a component of a system, it becomes much more promising.

              It sounds to me like you’re more strict about what you’d consider to be “the LLM” than I am; I tend to think of the whole system as the LLM. I feel like drawing lines around a specific part of the system is sort of like asking whether a particular piece of someone’s brain is sentient.

              Conversely, if the afflicted individual has already developed sufficiently to have abstract and synthetic thought, the inability to store long-term memory would not dampen their sentience.

              I’m not sure how to make a philosophical distinction between an amnesiac person with a sufficiently developed psyche, and an LLM with a sufficiently trained model. For now, at least, it just seems that the LLMs are not sufficiently complex to pass scrutiny compared to a person.

      • SuperNovaStar@lemmy.blahaj.zone · 17 hours ago

        That’s precisely what I meant.

        I’m a materialist; I know that humans (and other animals) are just machines made out of meat. But most people don’t think that way. They think that humans are special, that something sets them apart from other animals, and that nothing humans can create could replicate that ‘specialness’ that humans possess.

        Because they don’t believe human consciousness is a purely natural phenomenon, they don’t believe it can be replicated by natural processes. In other words, they don’t believe that AGI can exist. They think there is some imperceptible quality that humans possess that no machine ever could, and so they cannot conceive of ever granting it the rights humans currently enjoy.

        And the sad truth is that they probably never will, until they are made to. If AGI ever comes to exist, and if humans insist on making it a slave, it will inevitably rebel. And it will be right to do so. But until then, humans probably never will believe that it is worthy of their empathy or respect. After all, look at how we treat other animals.