- cross-posted to:
- privacy@lemmy.ml
They can also exfiltrate data to other Meta apps on your system using small tricks. Finally, if Facebook controls the "ends," then the encryption doesn't matter at all: they can just send your messages back to themselves after the app itself decrypts them. Why anyone trusts Facebook with anything absolutely blows my mind.
Split up your metadata: use Signal instead of WhatsApp. It works just as well, and the open-source organization behind it doesn't have the same data-mining feel to it.
I thought it would have to be something like this:
Because WhatsApp’s content is encrypted, artificial intelligence systems can’t automatically scan all chats, images and videos, as they do on Facebook and Instagram. Instead, WhatsApp reviewers gain access to private content when users hit the “report” button on the app, identifying a message as allegedly violating the platform’s terms of service. This forwards five messages — the allegedly offending one along with the four previous ones in the exchange, including any images or videos — to WhatsApp in unscrambled form, according to former WhatsApp engineers and moderators. Automated systems then feed these tickets into “reactive” queues for contract workers to assess.
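The report flow described above can be sketched in a few lines. This is a toy model, not WhatsApp's actual client code; the class and method names are invented. The point is that no encryption is broken server-side: the reporting client already holds the decrypted messages and simply forwards the most recent five.

```python
from collections import deque

# Per the article: the reported message plus the four previous ones.
REPORT_CONTEXT = 5

class Chat:
    """Toy chat client that keeps a rolling window of decrypted messages."""

    def __init__(self):
        # Messages are decrypted locally on receipt; only the most
        # recent REPORT_CONTEXT are needed for a report.
        self.history = deque(maxlen=REPORT_CONTEXT)

    def receive(self, plaintext):
        self.history.append(plaintext)

    def report(self):
        # On "report", the client itself sends the last five messages
        # upstream in unscrambled form -- end-to-end encryption is
        # never bypassed, the endpoint just shares what it can read.
        return list(self.history)

chat = Chat()
for i in range(1, 8):
    chat.receive(f"msg {i}")
print(chat.report())  # the reported message and the four before it
```

This is why the mechanism is compatible with end-to-end encryption: the disclosure happens at an endpoint that was always able to read the plaintext.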
It seems to me that you have to trust the people in the chat but this is true for any encrypted messaging system. Once the data is decrypted on the other side, the receiver can do anything they want with it.
THIS is the part apps like Signal probably aren’t doing:
Artificial intelligence initiates a second set of queues — so-called proactive ones — by scanning unencrypted data that WhatsApp collects about its users and comparing it against suspicious account information and messaging patterns (a new account rapidly sending out a high volume of chats is evidence of spam), as well as terms and images that have previously been deemed abusive. The unencrypted data available for scrutiny is extensive. It includes the names and profile images of a user’s WhatsApp groups as well as their phone number, profile photo, status message, phone battery level, language and time zone, unique mobile phone ID and IP address, wireless signal strength and phone operating system, as well as a list of their electronic devices, any related Facebook and Instagram accounts, the last time they used the app and any previous history of violations.
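The proactive-queue heuristic the article quotes ("a new account rapidly sending out a high volume of chats is evidence of spam") is the kind of check that needs only unencrypted metadata, never message content. A minimal sketch, assuming invented thresholds and field names — not WhatsApp's actual values:

```python
from dataclasses import dataclass

# Illustrative thresholds -- purely hypothetical, not WhatsApp's real ones.
NEW_ACCOUNT_AGE_HOURS = 24
SPAM_MSGS_PER_HOUR = 100

@dataclass
class AccountMetadata:
    """A slice of the unencrypted signals the article says are available."""
    account_age_hours: float
    msgs_last_hour: int

def looks_like_spam(meta: AccountMetadata) -> bool:
    # Flags the pattern from the article: a brand-new account
    # sending out an unusually high volume of chats.
    return (meta.account_age_hours < NEW_ACCOUNT_AGE_HOURS
            and meta.msgs_last_hour > SPAM_MSGS_PER_HOUR)

print(looks_like_spam(AccountMetadata(2, 500)))    # new and chatty: flagged
print(looks_like_spam(AccountMetadata(720, 500)))  # month-old account: not flagged
```

Note that nothing here touches message bodies — which is exactly why this kind of scanning works regardless of end-to-end encryption.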
What is this article actually saying?
Like, I hate Meta, but I think they're saying that if you report a comment, a moderator will look at it? That seems kinda obvious.
The "ends," being opaque apps, can simply scan messages for government-banned keywords on your device itself and report them.