Meta “programmed it to simply not answer questions,” but it did anyway.

  • doodledup@lemmy.world · 2 months ago

    Well, but that kind of correctness applies to everything; by that logic, you can’t believe anything. I’m talking about an entirely different kind of correctness, like resistance against certain adversarial attacks. Of course, proving that the model is always correct is as complicated as modelling all of reality, which is infeasible. But it’s just as infeasible for every other piece of software.
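
    For concreteness, here’s a minimal sketch (my own illustration, not something anyone in this thread shipped) of what checking “resistance against a certain adversarial attack” can look like empirically. It assumes a PyTorch classifier; `model`, `loss_fn`, and the labeled batch `(x, y)` are hypothetical placeholders:

    ```python
    # Minimal sketch: an FGSM robustness check for a PyTorch classifier.
    # `model`, `loss_fn`, `x`, `y` are assumed placeholders, not a real API.
    import torch

    def fgsm_robust(model, loss_fn, x, y, eps=0.03):
        """Return True if the model's predictions on the batch x survive an
        eps-bounded FGSM perturbation. This is one empirical robustness
        check against one specific attack, not a proof of correctness."""
        x = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x), y)
        loss.backward()
        # Perturb each input in the direction that maximizes the loss.
        x_adv = (x + eps * x.grad.sign()).detach()
        return model(x_adv).argmax(dim=-1).eq(y).all().item()
    ```

    A check like this says nothing about correctness in general. It only certifies robustness against this one attack at this one eps, which is exactly the narrower kind of guarantee I mean.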