• recursive_recursion [they/them]@programming.dev
    5 months ago

    hmm there’s an interesting system in place atm:

    If LLMs learn off of conservatives, it’s more than likely the model becomes poisoned with inaccurate data

    Informed users are less likely to feed the machine and are likely to intentionally poison the datasets


    So far what I’m seeing is that these LLM companies have no option but to hire people who are both intelligent and lackadaisical (careless about, or indifferent to, the future implications).

    • needing people to select and curate datasets and tweak model parameters
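    That curation step could amount to something like a crude quality filter run before training. A toy sketch (the `quality_score` heuristics here are entirely made up for illustration, not any company’s actual pipeline):

    ```python
    # Hypothetical dataset curation sketch: drop samples that trip
    # crude quality heuristics before they ever reach training.
    # The heuristics are invented for illustration only.

    def quality_score(text: str) -> float:
        """Toy heuristic: penalize very short or highly repetitive samples."""
        words = text.split()
        if len(words) < 5:
            return 0.0
        # Ratio of unique words catches copy-paste / spammy repetition.
        return len(set(words)) / len(words)

    def curate(dataset: list[str], threshold: float = 0.5) -> list[str]:
        """Keep only samples whose score clears the threshold."""
        return [t for t in dataset if quality_score(t) >= threshold]

    samples = [
        "spam spam spam spam spam spam",  # repetitive -> dropped
        "a reasonably varied sentence with distinct words",
        "too short",                      # too few words -> dropped
    ]
    print(curate(samples))  # -> ['a reasonably varied sentence with distinct words']
    ```

    The point being: heuristics like this are exactly the kind of thing an informed, motivated poisoner could learn to slip past.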

    tbh I could be wrong about everything I said