• @pizza_is_yum@slrpnk.net

    Cool. Btw, the authors tested two adversaries of their own. The first failed to breach the defense, and the second was deemed “impractical” because of how long it took to train.

    I appreciate their positive outlook, but I’m not so sure. They say they’re well defended because their equations are non-differentiable. That’s true, but reinforcement learning (RL) can get around that, since it never needs gradients of the defense (rough sketch below). Also, I’m curious whether attention-based adversaries would fare any better. Seems like those can do magic, given enough training time.

    Great work though. I love this “explainable” and “generalizable” approach they’ve taken. It’s awesome to see research in the ML space that doesn’t just throw a black box at the problem and call it a day. We need more like this.
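
    To make the RL point above concrete, here’s a toy, gradient-free attack sketch. Nothing here is from the paper: `detect()` is a made-up stand-in for any non-differentiable detector, and I’m using plain random search instead of full RL, just to show that an attacker never needs gradients of the defense:

    ```python
    import numpy as np

    def detect(audio: np.ndarray) -> float:
        """Made-up stand-in for a non-differentiable detector: the 'fake' score is
        just the zero-crossing rate, which is piecewise constant in the waveform
        (built from sign() and comparisons), so backprop gives no useful gradient."""
        return float(np.mean(np.sign(audio[:-1]) != np.sign(audio[1:])))

    def random_search_attack(audio, threshold=0.3, eps=0.01, steps=5000, seed=0):
        """Query-only (1+1) random search: propose a small perturbation and keep it
        only if the detector's score drops. No gradients of the defense needed."""
        rng = np.random.default_rng(seed)
        adv, best = audio.copy(), detect(audio)
        for _ in range(steps):
            candidate = np.clip(adv + rng.normal(0.0, eps, size=adv.shape), -1.0, 1.0)
            score = detect(candidate)
            if score < best:
                adv, best = candidate, score
            if best <= threshold:          # detector would no longer flag the clip
                break
        return adv, best

    if __name__ == "__main__":
        clip = np.random.default_rng(1).uniform(-1.0, 1.0, 8000)  # placeholder "fake" audio
        adv, score = random_search_attack(clip)
        print(f"score before: {detect(clip):.3f}  after: {score:.3f}")
    ```

    Swap in queries to the real detector and a smarter search (evolution strategies, an RL policy) and the same query-only loop applies.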

  • Helix 🧬

    I bet this only works reliably on undistorted, uncompressed audio. Just throw some distortion over an HQ deepfake clip (rough sketch below) and try an adversarial network built on these researchers’ own work. At some point we might have to switch to fully digitally signed communication with specially trusted devices.
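
    If anyone wants to poke at that hypothesis, here’s roughly the kind of degradation pass I mean. It’s a hypothetical NumPy sketch with arbitrary parameters, not anything from the paper:

    ```python
    import numpy as np

    def degrade(audio: np.ndarray, seed: int = 0) -> np.ndarray:
        """Cheap stand-ins for real-world damage: soft clipping, background noise,
        ~8-bit quantization, and a crude halving of the bandwidth. Parameters are
        arbitrary; a serious test would also use real codecs (MP3/Opus) and reverb."""
        rng = np.random.default_rng(seed)
        x = np.tanh(3.0 * audio)                          # soft clipping distortion
        x = x + rng.normal(0.0, 0.01, size=x.shape)       # low-level background noise
        x = np.round(x * 127.0) / 127.0                   # ~8-bit quantization
        x = np.repeat(x[::2], 2)[: len(audio)]            # naive downsample/upsample
        return np.clip(x, -1.0, 1.0)

    if __name__ == "__main__":
        t = np.arange(16000) / 16000.0
        clip = np.sin(2 * np.pi * 440.0 * t)              # 1 s test tone at 440 Hz
        print(degrade(clip).shape)                        # (16000,)
    ```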