Pro@programming.dev to Technology@lemmy.world · English · edited 2 months ago

**Meta plans to replace humans with AI to automate up to 90% of its privacy and integrity risk assessments, including in sensitive areas like violent content** (text.npr.org)

Cross-posted to: fuck_bigtech@europe.pub, fuck_ai@lemmy.world, Technology@programming.dev
ouch@lemmy.world · 2 months ago:
What about false positives? Or a process to challenge them? But yes, I agree with the general idea.

Beej Jorgensen@lemmy.sdf.org · 2 months ago:
> Or a process to challenge them?

😂😂😂😔

tarknassus@lemmy.world · 2 months ago:
They will probably use the YouTube model - "you're wrong and that's it".