I largely agree with you that TikTok is a platform that’s amazing in many ways: creating highly specific content that could reach its most engaged audience possible (as in, mushroom posts for those who like mushrooms, Arch Linux for those who like Arch Linux, etc.), and making editing, referencing, and remixing incredibly easy.
And I also agree with you that the fact that it’s centralized is not optimal.
But what I fear the most is the usage of algorithms that maximize engagement. It’s no secret that those algorithms are excellent at surfacing polemic and divisive content, which is part of why YouTube and Facebook are constantly put under public scrutiny for not only enabling, but bolstering mass disinformation. I’m talking anti-vax, flat earth, QAnon.
This doesn’t affect every user. In a given user’s personal experience, they may never see these posts. That’s part of the design: feeds are personalized. So while these radicalization and misinformation trends exist, some users may never encounter them, given their particular likes. But by now it’s clear that these platforms algorithmically amplify disinformation.
With that in mind, and in particular with content chosen democratically rather than algorithmically, we can commend and perhaps adopt the positive characteristics of TikTok that you rightly pointed out.
I’m coming to the conclusion that a key strength of the fedi is that AI/ML don’t fit, since there’s no usage-, addiction-, or advertising-based business model. Commercial enterprises can ramp up and develop super quickly, while FOSS slowly follows to build sustainable alternatives. Your post makes me want to explore how to speed up FOSS in building and converging on those alternatives. A fedi TikTok, for one, sounds like it’d be a blast to build, as does the OP’s idea.
I also like that we can use the very technologies that made products with features we dislike, to create features we like. If such a feature is an algorithmic feed, it doesn’t necessarily have to be created with the goal of maximizing engagement. It could, for example, be trained to maximize happiness or a broader sense of well-being, or connection, or purpose, or any/all of the good stuff that positive psychology has been showing again and again in study after study that makes humans flourish. Omg… what a dream…
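To make the point concrete, here is a minimal sketch (all post fields, scores, and names are invented for illustration) of how the same ranking machinery can serve a different objective: the only change between an engagement-maximizing feed and a well-being-maximizing one is which predicted score the ranker sorts by.

```python
# Toy feed ranker. Instead of ordering posts by predicted engagement
# (clicks, watch time), order them by a hypothetical predicted
# well-being score. Field names and values are made up.

def rank_feed(posts, score_key="predicted_wellbeing"):
    """Return posts sorted by the chosen objective, highest first."""
    return sorted(posts, key=lambda p: p[score_key], reverse=True)

posts = [
    {"id": 1, "predicted_engagement": 0.9, "predicted_wellbeing": 0.2},  # divisive but clicky
    {"id": 2, "predicted_engagement": 0.4, "predicted_wellbeing": 0.8},  # calm, connective
    {"id": 3, "predicted_engagement": 0.6, "predicted_wellbeing": 0.5},
]

wellbeing_feed = rank_feed(posts)                           # ids in order: 2, 3, 1
engagement_feed = rank_feed(posts, "predicted_engagement")  # ids in order: 1, 3, 2
```

The hard part, of course, is not the sort but producing a trustworthy well-being score in the first place; the sketch only shows that the optimization target is a design choice, not a law of nature.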
Obligatory response to the impending “but happiness is subjective, and how do you measure it anyway”: indeed, happiness is subjective, but so, to an extent, is the content that ‘engages’ people. And yet the algorithms that maximize engagement are out there, being deployed again and again in social network after social network. Not only that, but we humans have developed sturdy enough models to reliably predict and affect people’s experience.
To name a few of the relevant people in the literature, there are Mihaly Csikszentmihalyi, Sonja Lyubomirsky, and Martin Seligman. They show that psychology can be used to understand how humans find purpose and happiness, and that it can also be used to guide people in those directions. They use all kinds of methodologies, including my favorite: real-time sampling through the experience sampling method. This basically means asking you, at random moments throughout your day, what you’re doing and how you feel (along with many other questions). Ask that of hundreds if not tens of thousands of people, and the resulting absurd amount of data can be used to build robust models.
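The shape of that data is simple enough to sketch. Below is a toy simulation (the activities, the 1–7 mood scale here, and the effect sizes are all invented) of experience-sampling pings being aggregated into per-activity averages, which is the kind of raw material such models are built from:

```python
import random
from collections import defaultdict

random.seed(0)
ACTIVITIES = ["scrolling", "talking with friends", "hobby project"]

def ping():
    """One simulated experience-sampling prompt: activity + mood (1-7)."""
    activity = random.choice(ACTIVITIES)
    # Invented assumption: social/creative activities tend to rate higher.
    base = {"scrolling": 3, "talking with friends": 5, "hobby project": 6}[activity]
    mood = min(7, max(1, base + random.randint(-1, 1)))  # add noise, clamp to scale
    return {"activity": activity, "mood": mood}

# Thousands of pings across (simulated) participants...
samples = [ping() for _ in range(10_000)]

# ...aggregated into average reported mood per activity.
by_activity = defaultdict(list)
for s in samples:
    by_activity[s["activity"]].append(s["mood"])
avg_mood = {a: sum(v) / len(v) for a, v in by_activity.items()}
```

Real ESM studies of course do far more than averaging (within-person modeling, time-of-day effects, and so on), but even this crude aggregate shows how self-reports at random moments become a dataset you can fit models to.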
Anyway, that’s not the point of my comment, but I figured it was better to be safe than sorry. My point was that it’d be amazing to have algorithmic feeds trained to make us flourish.