- cross-posted to:
  - fediverse@kbin.social
  - fediverse@lemmy.world
Highlighting the recent report of users and admins being unable to delete images, and how Trust & Safety tooling is currently lacking.
Yeah, I agree. I think the important thing is “was the local content scrubbed?” Because at least if that was done, the place of origin no longer has it.
Federated deletes will always be imperfect, but I’d rather have them than not have them.
What might actually be interesting is if someone could figure out this kind of content negotiation: deletes get federated, but some servers miss them. Maybe there's a way to get servers to periodically check their cache and, if the corresponding object is no longer available at the origin, dump the local copy?
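Something along those lines could be a periodic revalidation job: re-check the origin URL of each cached remote object and purge anything the origin no longer serves. A minimal sketch, assuming a hypothetical cache of `(local_id, origin_url)` pairs and a `purge` callback; none of these names are existing Lemmy/kbin tooling:

```python
# Hypothetical sketch: re-check cached remote objects against their origin
# and purge anything the origin no longer serves. `cached_objects` and
# `purge` are illustrative names, not a real fediverse server API.
import requests

def revalidate_cache(cached_objects, purge):
    """cached_objects: iterable of (local_id, origin_url) pairs.
    purge: callback that removes the local copy (media file + DB row)."""
    for local_id, origin_url in cached_objects:
        try:
            resp = requests.head(origin_url, timeout=10, allow_redirects=True)
        except requests.RequestException:
            # Origin unreachable: can't distinguish "deleted" from "down", so keep it.
            continue
        if resp.status_code in (404, 410):
            # Origin says the object is missing (404) or gone (410):
            # treat that as a delete we may have missed.
            purge(local_id)

# Example usage with an in-memory "cache":
if __name__ == "__main__":
    cache = [("img-1", "https://origin.example/pictrs/image/abc.png")]
    revalidate_cache(cache, purge=lambda local_id: print("purging", local_id))
```

Run on a schedule, that would at least mop up deletes that never arrived, at the cost of extra traffic to origin servers.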
Well, I'm sure there are a number of nice ways of arranging federated deletes, including your suggestion. But it seems to me that the real issue is guaranteeing a delete across all federated servers, where the diversity of software and the open, opt-out nature of federation basically ensure that something somewhere will fail to respect the request, whether out of malice, ignorance or error.
Ultimately, it seems weird to build and judge fediverse platforms in the image of platforms designed with complete central control over all data and servers. It's like we're still struggling to break out of the mould.
Even if one platform implements something like delete perfectly, its servers will still push to and federate with servers running something else, and since those are someone else's servers, it's ultimately impossible to tell what they're running. There will always be broken promises.
I'm interested to hear your response on this, actually, because it increasingly seems to me that we haven't come to terms yet with what decentralisation actually means, and how libertarian some of its implications are once you care about these sorts of issues.
I suspect it gets to the point where, for social activity, some people may start realising that they actually want a centralised body they can hold to account.
And that feeling secure on a decentralised social media platform requires significant structural adjustments, like E2EE, allow-list federation and private spaces, with public spaces left for more blog-like and anonymous interactions.
Also, sorry, end rant.
No, you’re good, and we’re mostly on the same page! My general expectation is that your server tries its best, maybe there are still copies out there, but you shrug and say “eh, I wiped my stuff locally, good enough”.
But yeah, I agree that once something leaves your server in terms of the passage of data, there are no guarantees. And I do agree that significant structural changes are necessary and important for the network’s continued evolution!
Maybe encryption can help? Instances only copy the encrypted image, and only the original instance provides the encryption key to client apps. That way, the original instance can effectively delete the remote copies by simply refusing to issue the key.
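A minimal sketch of the idea with off-the-shelf symmetric encryption (Fernet from Python's `cryptography` package); the split between the "origin" and "client" steps is my assumption here, not anything the current ActivityPub media flow supports:

```python
# Sketch: the origin keeps the key; other instances only ever mirror ciphertext.
# All names are illustrative.
from cryptography.fernet import Fernet

# --- on the origin instance, at upload time ---
key = Fernet.generate_key()  # stored only on the origin, alongside the post
ciphertext = Fernet(key).encrypt(b"<raw image bytes>")
# `ciphertext` is what gets federated and cached by other instances.

# --- in a client app, at view time ---
# The client fetches `ciphertext` from whichever instance cached it, then asks
# the origin for `key`. If the origin has deleted the post, it refuses, and
# every mirrored copy becomes unreadable.
image_bytes = Fernet(key).decrypt(ciphertext)
assert image_bytes == b"<raw image bytes>"
```

The delete then reduces to a key-management problem on a single server, rather than a promise every federated peer has to keep.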
It also means the content is "lost" from all instances if the original server goes down, which, together with the need to cache the key to keep providing the service, basically means you've just implemented DRM on top of the Fedi.
And that's fine. If one instance has to be able to delete revenge porn or child porn from all the other instances, then it has to be what you call DRM. Also, don't cache the key, because that's exactly the opposite of what this is meant for.