Literally just mainlining marketing material straight into whatever’s left of their rotting brains.

  • oktherebuddy@hexbear.net · 37 points · 7 months ago

    yeah this is knee-jerk anti-technology shite from people here, because we live in a society organized along lines where the creation of AI would lead to our oppression instead of our liberation. Of course making a computer sentient is possible; to believe otherwise is to engage in magical (chauvinistic?) thinking about what constitutes consciousness.

    When I watched Blade Runner 2049, I thought it was a bit weird that the human police captain told Officer K (a replicant) that she was different from him because she had a soul, since sci-fi settings are pretty secular. Turns out this was prophetic: people are more than willing to get all spiritual if it helps them invent reasons to differentiate themselves from the Other.

    • VILenin [he/him]@hexbear.net (OP, mod) · 17 points · 7 months ago

      Nobody ever mentioned a “soul” in this conversation until you brought it up to use as an accusation.

      “Computers aren’t sentient” is not a religious belief no matter how hard you try to smear it as such.

      • oktherebuddy@hexbear.net · 12 points · 7 months ago

        The claim isn’t “Computers aren’t sentient”; nobody thinks computers are sentient except some weirdos. “Computers can’t be sentient”, which is what’s under discussion, is a much stronger claim.

        • VILenin [he/him]@hexbear.net (OP, mod) · 12 points · 7 months ago

          The claim is that “computers can be sentient”. That is a strong claim, and it requires equally strong evidence. I’ve found the arguments in support of it lackluster and reductionist, for reasons I’ve outlined in other comments. In fact, I find that the idea that if we compute hard enough we get sentience borders on a religious belief in extra-physical properties being bestowed upon physical objects once they pass a certain threshold.

          There are people who argue that everything is conscious, even rocks, because everything is ultimately a mechanical process. The base argument is the same, but I have a feeling that most people here would suddenly disagree with them for some reason. Is it “creationism” to find such a hypothesis absurd, or is it vulgar materialism to think it’s correct? You seem to take offense at being called “reductionist” despite engaging in a textbook case of reductionism.

          This doesn’t mean you’re wrong, or that the rock-consciousness people are wrong, it’s just an observation. Any meaningful debate about sentience right now is going to be philosophical. If you want to be scientific the answer is “I don’t know”. I don’t pretend to equate philosophy with science.

          • oktherebuddy@hexbear.net · 9 points · 7 months ago

            Consciousness isn’t an extra-physical property. That’s the belief.

            I don’t take offense at being called reductionist; I take offense at “reductionism” being used as a pejorative, the way creationists use it. It’s obvious to me that going deeper, understanding the mechanisms behind things, makes them richer.

            The thing that makes your argument tricky is that we do have evidence now. Computers are unambiguously exhibiting behaviors that resemble those of conscious beings. I don’t think that makes them conscious at this time, any more than it makes animals that exhibit interesting behavior conscious, but it shows that this mechanism has legs. If you think LLMs are as good as AI is ever going to get, that’s just really blinkered.

            • VILenin [he/him]@hexbear.net (OP, mod) · 8 points · 7 months ago

              I think that AI will get better, but its “base” will remain the same. Going deeper to understand the mechanisms is different from just saying “it’s a mechanism”, which I see a lot of people doing. I think computers can very easily replicate human behaviors and emulate emotions.

              Obviously creating something sentient is possible, since brains evolved. And if we don’t kill ourselves, I think it’s very possible that we’ll get there. But I think it will be very different from what we think of as a “computer”, and the only similarity they might share could be being electrically powered.

              At the end of the road we’ll just get to arguing about philosophical zombies and the discussion usually wraps up there.

              I’d be very happy if it turned out that I’m completely wrong.

              • oktherebuddy@hexbear.net · 6 points · 7 months ago

                Okay I think we pretty much agree. I have been thinking about what the next “category” of thing is that might function as a substrate of consciousness. I do think that the software techniques people have come up with in AI research, run on “computers” though they may be, are different enough from what we ordinarily think of as computers (CPU, GPU, fast short-term memory, slow long-term memory, etc.) to be a distinct ontological category. And new hardware is being built to specifically accelerate the sort of operations used in those software techniques. I would accept these things being called something other than a computer, even though they could be simulated on a Turing machine or with boolean circuits, because as you’ve said that is of limited use - similar to saying that everything is a mechanistic physical process.

                • VILenin [he/him]@hexbear.net (OP, mod) · 5 points · 7 months ago

                  Not my autistic ass getting into fights online again… I’m slowly working on my parsing skills and social skills though!

                  But yeah, I just want to know what the AI thinks about communism

      • oktherebuddy@hexbear.net · 5 points · 7 months ago

        wow we can’t speculate about things that could exist, only things that do exist. this was written on a communist website btw

      • Saeculum [he/him, comrade/them]@hexbear.net · 3 points · 7 months ago

        By that way of reasoning, the replicants aren’t people, because they are characters written by the author the same as any other.

        They are as much fiction as sentient machines are science fiction.

        • usernamesaredifficul [he/him]@hexbear.net · 11 points · 7 months ago

          ok sure, my point was that the authors aren’t making a point about the nature of machines that is informed by the limits of machines, and they aren’t qualified to do so

          saying AI is people because of Data from Star Trek is like saying there are aliens because you saw a Vulcan on TV, in terms of relevance

          • Saeculum [he/him, comrade/them]@hexbear.net · 3 points · 7 months ago

            That’s fair, though the idea that AI is people because of Data from Star Trek isn’t inherently absurd. If a machine existed in real life that demonstrated all the same capabilities and external phenomena as Data, I would want it treated as a person.

            The authors might be delusional about the capabilities of their machine in particular, but under different physical circumstances from what’s most likely happening here, they wouldn’t be wrong.