For example, if someone creates something new that is horrible for humans, how will AI understand that it is bad if it has no other horrible things to relate it to?

  • Flaky_Fish69@kbin.social · 1 year ago

    It’s not even making decisions. It’s following instructions.

    ChatGPT’s instructions are very advanced, but the decisions have already been made. It follows the prompt and its reference material to provide the most common response.
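    The “most common response” idea above can be sketched as greedy decoding: at each step the model just picks the highest-probability continuation. The vocabulary and probabilities below are made up for illustration, not from any real model.

    ```python
    # Toy sketch of greedy next-token selection: the model isn't "deciding",
    # it follows a fixed rule -- pick the most likely continuation.

    def next_token(probs: dict) -> str:
        """Return the highest-probability token (greedy decoding)."""
        return max(probs, key=probs.get)

    # Hypothetical model output for the prompt "The sky is"
    probs = {"blue": 0.62, "falling": 0.21, "vast": 0.12, "green": 0.05}

    print(next_token(probs))  # -> blue
    ```

    Real chatbots usually sample with some randomness (temperature) rather than always taking the top token, but the rule being followed is still mechanical either way.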

    It’s like a kid building a Lego kit: the kid isn’t deciding where the pieces go, just following the instructions.

    Similarly, between the prompt, the training data, the very careful instructions on how to train, and the guardrails that limit objectionable responses, all it’s doing is following instructions that were already defined.