Literally just mainlining marketing material straight into whatever’s left of their rotting brains.

  • Dirt_Possum [any, undecided]@hexbear.net
    1 year ago

    Sentience is not a “low bar” and means a hell of a lot more than just responding to stimuli. Sentience is the ability to experience feelings and sensations. It necessitates qualia. Sentience is the high bar and sapience is only a little ways further up from it. So-called “AI” is nowhere near either one.

    • archomrade [he/him]
      1 year ago

I’m not here to defend the crazies predicting the rapture, but I think using the word sentient at all is meaningless in this context.

      Not only because I don’t think sentience is a relevant measure or threshold in the advancement of generative machine learning, but also I think things like ‘qualia’ are impossible to translate in a meaningful way to begin with.

      What point are we trying to make by saying AI can or cannot be sentient? What material difference does it make if the AI-controlled military drone dropping bombs on my head has qualia?

We might as well be arguing about whether a squirrel is going around a tree.

      • UlyssesT [he/him]@hexbear.net
        1 year ago

        is meaningless in this context

It’s useful for marketing hype and for making credulous consumers believe that a perfect helpmeet program that actually loves them for real is right around the corner. That’s the issue here: something difficult to define and not well understood being assigned to a marketed product, in this case sentience (or even sapience) attributed to LLMs.

        • archomrade [he/him]
          1 year ago

People who insist on the lack of sophistication of machine learning are just as detached from reality as people who are convinced its sentience is just around the corner. Both camps are blind to its material impact, and it stresses me out that people are busy arguing over woo-woo metaphysical definitions when even a non-conscious GPT model can displace the labor of millions of people while we’re still light-years away from a socialist organization of labor.

          None of the previous industrial revolutions were brought on by a sentient machine, I’m not sure why it’s relevant to this technology’s potential impact.

          • UlyssesT [he/him]@hexbear.net
            1 year ago

            are just as detached from reality

That’s a bullshit false equivalence that runs interference for people who are supposedly “only equally detached from reality,” like this:

            https://futurism.com/openai-employees-say-firms-chief-scientist-has-been-making-strange-spiritual-claims

            Both camps

            I don’t think you’re going to change any minds with your nakedly obvious “both sides” centrist posturing that has an obvious slant favoring LLM marketing hype.

            • archomrade [he/him]
              1 year ago

The entire question of sentience is irrelevant to the material impact of the technology. Granting or dismissing that quality to AI is a meaningless distraction.

              “both sides” centrist posturing that has an obvious slant favoring LLM marketing hype.

              I don’t favor the hype, I’m just not naive enough to dismiss the potential impact of machine learning based on something as immaterial and ill-defined as “sentience”. The entire proposition is ridiculous.

              • UlyssesT [he/him]@hexbear.net
                1 year ago

                The entire question of sentience is irrelevant to the material impact of the technology.

I actually agree here. That part is irrelevant on its surface, but it keeps getting brought up as part of the marketing hype, and that does have real consequences, including in this thread, where people buying into the LLM hype raise those questions themselves and assign attributes to LLMs that simply aren’t there outside of the aforementioned marketing.

                I’m just not naive enough to dismiss the potential impact of machine learning

That impact, so far, has been mostly harmful because of who owns and who commands the technology. Analysis of that is fine, but most claims about how “liberating” it will surely be strike me as idealism under the current material conditions and the present system.

                EDIT: Besides, you should look again at which position is bringing the sentience talk here:

                https://hexbear.net/comment/4292155

                And if we don’t interact with the underlying philosophical questions concerning sentience and consciousness, those same dorks will also have control of the narrative.

                • archomrade [he/him]
                  1 year ago

                  I’m not actually sure there’s much daylight between our views here, except that it seems like your concern over its impact is mostly oriented toward it being used as a cudgel against labor, irrespective of what qualities of competence AI might actually have. I don’t mean to speak for you, please correct me if I’m wrong.

While I think the question of AI sentience is ridiculous, I still think it wouldn’t take much further development before some of these models start meaningfully replicating human competence (i.e., completing some tasks at least as competently as a human). Considering that the previous generation of models couldn’t string more than 50 words together before devolving into nonsense, while the following generation could string together working code without much fundamental change in structure, it is not far-fetched that one or two more breakthroughs could bring them within striking distance of human competence. Dismissing the models as unintelligent misrepresents what I think the threat actually is.

                  I 100% agree that the ownership of these models is what we should be concerned with, and I think dismissing the models as dumb parlor tricks undercuts the dire necessity to seize these for public use. What concerns me with these conversations is that people leave them thinking the entire topic of AI is unworthy of serious consideration, and I think that’s hubris.

                  • UlyssesT [he/him]@hexbear.net
                    1 year ago

                    irrespective of what qualities of competence AI might actually have

That competence mostly registers as a net negative in its present state because of who owns and who commands it. The “competence” isn’t thrilling or inspiring to people who get denied healthcare because a computer program “accidentally” rejected their claims, or who experience increasingly sophisticated profiling and surveillance technology, or who previously paid their bills with artistic talent and are now being outbid by cheap-to-free treat-printing technology.

At ground level, among common people and outside of the science fiction scenarios in their movies, shows, and games, asking them to be particularly “curious” about such things when they’re only feeling downward pressure from them is condescending, and I don’t blame some for being knee-jerk against it, or against those scolding them for not being enthusiastic enough.

                    I 100% agree that the ownership of these models is what we should be concerned with, and I think dismissing the models as dumb parlor tricks undercuts the dire necessity to seize these for public use. What concerns me with these conversations is that people leave them thinking the entire topic of AI is unworthy of serious consideration, and I think that’s hubris.

That was not my position, though I do, on the side, mock the singularity cultists and their false claims about how close the robot god’s construction is, and I also condemn the reductionist derision of living human beings with edgy techbro terminology like “meat computers” by people trying to boost their favorite LLM products.