This is certainly a growing trend, one that started amongst end-users themselves; slowly we've seen a few news media outlets following suit. So far, only a very few government agencies have actually followed (the Netherlands comes to mind). What is attractive for many organisations and agencies on Mastodon is that...
The key difference is whether it does what I told it to do (e.g. gather popular posts with tags I follow) or whether it uses my usage data to figure out what would keep me engaged. The former is perfectly fine imo; the latter can become manipulative very quickly.
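To make that distinction concrete, here's a toy sketch in Python. None of this is from the actual Mastodon codebase; the `Post` shape and the `predict_dwell_time` model are made up purely for illustration:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    tags: set[str]
    boosts: int
    created_at: datetime

# "Does what I told it to do": a fixed rule anyone can read.
# Popular posts carrying tags I follow, most boosted (then newest) first.
def followed_tag_feed(posts: list[Post], followed_tags: set[str]) -> list[Post]:
    matching = [p for p in posts if p.tags & followed_tags]
    return sorted(matching, key=lambda p: (p.boosts, p.created_at), reverse=True)

# "Figures out what keeps me engaged": the ordering depends on a model
# trained on my usage data (dwell time, clicks, ...), so I can no longer
# predict or audit why a given post ended up on top.
def engagement_feed(posts: list[Post], model) -> list[Post]:
    return sorted(posts, key=lambda p: model.predict_dwell_time(p), reverse=True)
```

The first function's behaviour is fully determined by inputs I chose; the second's depends on whatever the model has learned about me, which is exactly where the manipulation risk creeps in.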
Well then, ask yourself: who's doing the manipulation? The instance owner? The open source devs who write the engagement algo?
The open source devs are going to keep it transparent and healthy while still keeping it entertaining, so there are already checks and balances right there to prevent it becoming an issue. There are no venture capitalists to corrupt it either, so there's no incentive to make it malicious, and the community gets to tune it until it's balanced. Open source also means anyone can check how it works, and the devs can add options for users to tweak it themselves.
If you don't like it, the current chronological feed of new posts/boosts will always remain available, so this would be a completely optional separate feed that doesn't affect you if you don't want it. There's no need to police others and decide they don't deserve an optional sort, especially when it isn't replacing your current feed.
If an instance somehow maliciously manipulates the algo, that's the beauty of decentralization right there: you're free to swap. The problem with other social media algos is that they're corrupted by venture capitalists and they're centralized, so you have no say in how they work. Neither issue applies to Mastodon.
Here's an interesting idea: there might be some merit in allowing experienced users to build their own engagement algorithms for their personal profiles. They could also share their code with others who want to use it. In that situation nobody's creating manipulative algorithms; they're just doing it for themselves or for each other, and each person can tweak it to their own preferences. Of course, since it would require real experience, it's more a nice optional thing to have than a necessity. A rough sketch of what this could look like is below.
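Sketching the idea (this is a hypothetical interface, nothing that exists in Mastodon today; the post attributes `replies`, `boosts`, `author_followers`, and `created_at` are assumptions): each user writes, or copies from someone else, a single `score()` function, and the client just sorts the timeline with it.

```python
from datetime import datetime, timezone

# Hypothetical user-written ranking function for a personal feed.
def score(post) -> float:
    """Favour discussion-heavy posts from small accounts, decaying with age."""
    age_hours = (datetime.now(timezone.utc) - post.created_at).total_seconds() / 3600
    discussion = 2 * post.replies + post.boosts
    small_account_bonus = 1.5 if post.author_followers < 1000 else 1.0
    return discussion * small_account_bonus / (1 + age_hours)

def personal_feed(posts):
    # Highest-scoring posts first; the ranking logic is entirely the
    # user's own dozen lines, so there's nowhere for a hidden
    # engagement objective to live.
    return sorted(posts, key=score, reverse=True)
```

Because the whole ranking fits in a function the user wrote or can read, sharing and tweaking it is trivial.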
I'm not strictly against personalized recommendations (which is why I said it “can” become manipulative), and you're making some good points. But I do think it's a very dangerous game to be playing.
It almost certainly requires collecting and storing very personal usage data, and it can influence people’s mood and behaviour depending on what the algorithm is optimizing for (e.g. showing you stuff that makes you angry or ashamed). For that reason I think it’s not just a matter of letting it loose on people. It needs to be very well communicated and explained (e.g. things like “we are showing you this because …”), so people stay in control of their own actions.
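One simple way to keep that kind of transparency, sketched below as pure illustration (not an existing feature; the scoring rules are made up), is to make the ranker return its reasons alongside the score, so the UI can render "we are showing you this because ..." next to every recommended post:

```python
def explained_score(post, followed_tags: set[str]) -> tuple[float, list[str]]:
    """Score a post and return human-readable reasons alongside it."""
    score, reasons = 0.0, []
    common = post.tags & followed_tags
    if common:
        score += 2.0 * len(common)
        reasons.append("it uses tags you follow: " + ", ".join(sorted(common)))
    if post.boosts > 50:
        score += 1.0
        reasons.append("it is being boosted a lot right now")
    return score, reasons

# The UI shows the reasons verbatim next to each recommended post, so
# the "why am I seeing this?" question is always answerable, and a user
# could switch off any individual rule they dislike.
```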
Imo it’s a bit like slot machines. Just fine for most people most of the time, but it can drag you down a dark path if you’re vulnerable for whatever reason.