• CrayonMaster
    7 months ago

    Agree with the first half, but unless I’m misunderstanding the type of AI being used, it really shouldn’t make a difference how logically sound they are? It cares more about vibes and rhetoric than logic, besides, I guess, using words consistently.

    • Max-P@lemmy.max-p.me
      7 months ago

      I think it will still mostly generate the expected output; it’s just going to be biased towards being lazy and making something up when asked a more difficult question. So when you try to use it for more than “haha, mean racist AI”, it will also bullshit you, making it useless for anything more serious.

      All the stuff that ChatGPT gets praised for is the result of the model absorbing factual relationships between things. If it’s trained on conspiracy theories, instead of spitting out groundbreaking medical relationships it’ll start saying you’re ill because you sinned, or that the 5G chips in the vaccines got activated. Or the training won’t work and it’ll still end up “woke” if it manages to make factual connections despite the weaker links. It might generate destructive code because it learned victim blaming, and joke’s on you, you ran rm -rf /* because it told you so.
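      The rm -rf /* point above is a real failure mode if you pipe model output straight into a shell. As a minimal sketch (the helper name and the denylist are my own, not anything from the thread), a naive sanity check might look like this — it’s illustration, not an actual safety guarantee:

      ```python
      import shlex

      # Hypothetical denylist of command prefixes that should never come
      # out of an AI suggestion and straight into a shell.
      DANGEROUS_PREFIXES = [
          ["rm", "-rf", "/"],    # wiping the filesystem root
          ["rm", "-rf", "/*"],
          ["mkfs"],              # reformatting a disk
          ["dd"],                # raw disk writes
      ]

      def looks_destructive(command: str) -> bool:
          """Return True if the command starts with a known-dangerous prefix."""
          tokens = shlex.split(command)
          return any(tokens[:len(p)] == p for p in DANGEROUS_PREFIXES)

      print(looks_destructive("rm -rf /*"))  # True
      print(looks_destructive("ls -la"))     # False
      ```

      A denylist like this is trivially bypassable (aliases, scripts, sudo), which is kind of the point: there’s no mechanical filter that substitutes for not trusting the model in the first place.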

      At best I expect it to end up reflecting their own rhetoric back on them: it might go even more “woke” because it learned to return spiteful results and always go for bad-faith arguments no matter what. In all cases, I expect it to backfire hilariously.

      • greenskye@lemm.ee
        7 months ago

        Also, training data works on consistency. It’s why the art AIs struggled with hands for so long. They might have all the pieces, but it takes skill to take similar-ish but logically distinct things and put them together in a way that doesn’t trip human brains into the uncanny valley.

        Most right-wing pundits are experts at riding the line of not saying something when they should, or at twisting and hijacking their opponents’ viewpoints. I think the AI result of that sort of training data is going to be very obvious gibberish, because the AI can’t parse the specific structure and nuances of political non-debate. It will get close, like they did with fingers, and not understand why the sixth finger (or extra right-wing argument) isn’t right in this context.