When a machine moderates content, it evaluates text and images as data, using an algorithm trained on existing data sets. The process for selecting training data has come under fire because it has been shown to carry racial, gender, and other biases.
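To make that concrete, here is a minimal sketch of score-based automated moderation. The function name, term weights, and threshold are all hypothetical stand-ins, not any platform's actual system; real moderation uses large learned models, but the failure mode is the same: whatever biases sit in the training data end up baked into the scores.

```python
def moderate(text, term_weights, threshold=1.0):
    """Score text against weights learned from training data.

    If the training data over-represents certain dialects or
    communities, their vocabulary gets higher weights -- and their
    posts get flagged more often. That is where bias creeps in.
    """
    tokens = text.lower().split()
    score = sum(term_weights.get(tok, 0.0) for tok in tokens)
    return ("removed" if score >= threshold else "allowed", score)

# Hypothetical weights standing in for a trained model.
weights = {"spamword": 1.5, "scam": 0.8}

print(moderate("totally normal post", weights))
print(moderate("buy this scam spamword now", weights))
```

Note there is no appeal path anywhere in this loop: the decision is final unless a human reviews it, which is exactly the gap the comment below complains about.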

  • Terrasque@infosec.pub · 1 year ago
    Exactly. The real problem is lack of human oversight, and lack of a way to contact someone.

    And these days, even if you manage to reach someone, it'll be a call center in India or the Philippines that's only there to help with FAQ-level things and otherwise politely tell you to fuck off. They can't actually do anything.