• cyd@lemmy.world · 1 year ago

    I imagine the vast majority of those 3% error cases get rerouted to a human border official for handling. This is basically a sanity check, and sounds reasonable. The use of AI in the first instance shouldn’t be making things worse, since AI is already superior to humans at facial recognition. I wouldn’t be surprised if normal border officials have a significantly higher than 3% error rate in face matching.

    • Riskable@programming.dev · 1 year ago

      > rerouted to a human border official for handling.

      No, the person who was misidentified will be routed to a human TSA agent for harassment. Every single time they fly.

    • mightyfoolish@lemmy.world · 1 year ago

      Will this be like the “random” checks you get if your complexion is olive or darker, or if your name sounds kind of funny?

    • ApeNo1@lemm.ee · 1 year ago

      100% this will already be better than humans, but, similar to autonomous driving, the goal should be “better than human”; otherwise we’ll see vendors doing just enough to hit the simpler goals of saving costs or making sales. I would hope they run this in parallel and have the system flag anything below a confidence threshold for human scrutiny and comparison. Analysing the human decisions alongside the AI decisions would help refine the models and also give some visibility into how accurate the current human-only checks are. This training and review aspect is a lot of work.
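
      If it helps to picture the workflow, here’s a minimal sketch in Python of the “flag low-confidence matches for a human, and keep the human decisions around to compare against the AI” idea. The threshold, names, and data shapes are all made up for illustration, not taken from any real border system.

      ```python
      # Illustrative only: a made-up 0.95 threshold and hypothetical data structures.
      from dataclasses import dataclass, field

      CONFIDENCE_THRESHOLD = 0.95  # below this, a human officer makes the call


      @dataclass
      class Decision:
          ai_match: bool
          ai_confidence: float
          human_match: bool | None = None  # filled in when a human reviews the case


      @dataclass
      class AuditLog:
          decisions: list[Decision] = field(default_factory=list)

          def agreement_rate(self) -> float:
              """Share of human-reviewed cases where the AI and the human agreed."""
              reviewed = [d for d in self.decisions if d.human_match is not None]
              if not reviewed:
                  return float("nan")
              agreed = sum(d.ai_match == d.human_match for d in reviewed)
              return agreed / len(reviewed)


      def route_traveller(ai_match: bool, ai_confidence: float, log: AuditLog) -> str:
          """Accept/reject on high confidence; refer low-confidence cases to a human."""
          log.decisions.append(Decision(ai_match=ai_match, ai_confidence=ai_confidence))
          if ai_confidence < CONFIDENCE_THRESHOLD:
              return "refer_to_human_officer"
          return "accept_match" if ai_match else "reject_match"


      # Example: a borderline score gets referred rather than auto-decided.
      log = AuditLog()
      print(route_traveller(ai_match=True, ai_confidence=0.80, log=log))  # refer_to_human_officer
      log.decisions[-1].human_match = True  # the officer's decision, recorded for later comparison
      print(log.agreement_rate())
      ```

      Keeping the human decision next to the AI decision is what lets you measure the two against each other later, which is the “visibility into current accuracy” part.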