While I am glad this ruling went this way, why’d she have to diss Data to make it?

To support her vision of some future technology, Millett pointed to the Star Trek: The Next Generation character Data, a sentient android who memorably wrote a poem to his cat, which is jokingly mocked by other characters in a 1992 episode called “Schisms.” StarTrek.com posted the full poem, but here’s a taste:

"Felis catus is your taxonomic nomenclature, / An endothermic quadruped, carnivorous by nature; / Your visual, olfactory, and auditory senses / Contribute to your hunting skills and natural defenses.

I find myself intrigued by your subvocal oscillations, / A singular development of cat communications / That obviates your basic hedonistic predilection / For a rhythmic stroking of your fur to demonstrate affection."

Data “might be worse than ChatGPT at writing poetry,” but his “intelligence is comparable to that of a human being,” Millett wrote. If AI ever reached Data’s level of intelligence, Millett suggested, copyright laws could shift to grant copyrights to AI-authored works. But that time is apparently not now.

  • ProfessorScience@lemmy.world · 17 hours ago

    Cherry-picking a couple of points I want to respond to together:

    It is somewhat like a memory buffer, but there is no analysis beyond linguistics. Short-term memory in the biological systems that we know of involves multi-sensory processing and analysis that occurs inline with “storing”. The chat session is more like RAM than the short-term memory that we see in biological systems.

    It is also purely linguistic analysis, without other inputs or understanding of abstract meaning. In a vacuum, it’s a dead end toward AGI.

    I have trouble with this line of reasoning for a couple of reasons. First, it feels overly simplistic to me to write off what LLMs do as purely linguistic analysis. Language is the input and the output, certainly, but the same could be said of communicating with a person over email, and I don’t think you’d conclude that that person wasn’t sentient. And the way that LLMs embed tokens into multidimensional space is, I think, very much analogous to how a person interprets the ideas behind the words they read.
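
    To illustrate what I mean by embedding (a toy sketch with invented vectors, not any real model’s weights): each token maps to a point in a high-dimensional space, and tokens with related meanings end up near each other.

    ```python
    import numpy as np

    # Toy embedding table: each token is a point in 4-dimensional space.
    # Real models learn tens of thousands of tokens in hundreds of
    # dimensions; these vectors are invented just to illustrate the idea.
    embeddings = {
        "cat":    np.array([0.9, 0.1, 0.0, 0.3]),
        "feline": np.array([0.8, 0.2, 0.1, 0.3]),
        "poem":   np.array([0.0, 0.9, 0.7, 0.1]),
    }

    def cosine_similarity(a, b):
        """Closeness of meaning: 1.0 = same direction, 0.0 = unrelated."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # "cat" and "feline" land near each other; "poem" sits farther away.
    print(cosine_similarity(embeddings["cat"], embeddings["feline"]))  # ~0.99
    print(cosine_similarity(embeddings["cat"], embeddings["poem"]))    # ~0.11
    ```

    The point is just that “meaning” gets represented geometrically, which is more than surface-level string matching.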

    As a component of a system, it becomes much more promising.

    It sounds to me like you’re more strict about what you’d consider to be “the LLM” than I am; I tend to think of the whole system as the LLM. I feel like drawing lines around a specific part of the system is sort of like asking whether a particular piece of someone’s brain is sentient.
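
    To make that concrete, here’s roughly the system I mean, sketched with a hypothetical generate() standing in for the model call (nothing here is a real API); the frozen weights, the growing context buffer, and the loop around them are together what I’d call “the LLM”:

    ```python
    # A sketch of "the whole system": a model call plus the chat loop that
    # replays the growing message history every turn. generate() is a
    # hypothetical stand-in, not any real library's API.
    def generate(context: list[str]) -> str:
        # Pretend this runs the trained network over the full context.
        return f"(reply informed by {len(context)} prior messages)"

    def chat(user_messages: list[str]) -> None:
        context: list[str] = []           # the RAM-like session buffer
        for user_msg in user_messages:
            context.append(f"user: {user_msg}")
            reply = generate(context)    # the model only ever sees this buffer
            context.append(f"model: {reply}")
            print(reply)

    chat(["hello", "write a poem about my cat"])
    ```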

    Conversely, if the afflicted individual has already developed sufficiently to have abstract and synthetic thought, the inability to store long-term memory would not dampen their sentience.

    I’m not sure how to make a philosophical distinction between an amnesiac person with a sufficiently developed psyche and an LLM with a sufficiently trained model. For now, at least, it just seems that LLMs are not complex enough to hold up under that scrutiny compared to a person.