Here’s a brief summary, though you’ll miss something if you don’t read the study (trigger warning: stats):

  • The researchers propose a novel incentive structure that significantly reduced the spread of misinformation, and they provide insights into the cognitive mechanisms that make it work. The structure could be adopted by social media platforms at no cost.

  • The key was to offer reaction buttons that participants were likely to use in a way that distinguished between true and false information. Users who found themselves in such an environment shared more true than false posts.

  • In particular, the study used ‘trust’ and ‘distrust’ reaction buttons, which, in contrast to ‘likes’ and ‘dislikes’, are by definition associated with veracity. For example, the study authors note, a person may dislike a post about Joe Biden winning the US presidential election, but this does not necessarily mean that they think it is untrue.

  • Study participants used the ‘distrust’ and ‘trust’ reaction buttons in a more discerning manner than the ‘dislike’ and ‘like’ buttons. This created an environment in which the number of social rewards and punishments, in the form of clicks, was strongly associated with the veracity of the information shared (a toy simulation of this coupling follows the summary).

  • The findings also held across a wide range of different topics (e.g., politics, health, science, etc.) and a diverse sample of participants, suggesting that the intervention is not limited to a set group of topics or users, but instead relies more broadly on the underlying mechanism of associating veracity and social rewards.

  • The researchers conclude that the new structure reduces the spread of misinformation and may help correct false beliefs. It does so without drastically diverging from the existing incentive structure of social media networks, since it still relies on user engagement. Thus, this intervention may be a powerful addition to existing interventions such as educating users on how to detect misinformation.
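
A minimal, purely illustrative Python sketch of that coupling (not the authors’ model; every number below is made up): if positive feedback tracks veracity even imperfectly, and users reshare whatever gets rewarded, the reshared stream skews toward true posts.

```python
import random

random.seed(0)

# Purely made-up probabilities that a post earns positive feedback.
P_REWARD_TRUE = {"like": 0.50, "trust": 0.80}    # true posts
P_REWARD_FALSE = {"like": 0.45, "trust": 0.20}   # false posts

def share_of_true_reshares(button: str, n_posts: int = 10_000) -> float:
    """Fraction of reshared posts that are true, assuming users reshare
    whatever earned positive feedback."""
    true_count = false_count = 0
    for _ in range(n_posts):
        is_true = random.random() < 0.5                  # half the posts are true
        p = P_REWARD_TRUE[button] if is_true else P_REWARD_FALSE[button]
        if random.random() < p:                          # rewarded -> reshared
            if is_true:
                true_count += 1
            else:
                false_count += 1
    return true_count / (true_count + false_count)

for button in ("like", "trust"):
    print(f"{button:>5}: {share_of_true_reshares(button):.0%} of reshares are true")
```

Under the made-up ‘like’ numbers the reshared stream stays roughly 50/50; under the ‘trust’ numbers it skews heavily toward true posts. The study’s claim is that real participants clicked ‘trust’/‘distrust’ in this more discerning way, which is what produces the coupling.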

  • BurningnnTree@lemmy.one · 27 points · 1 year ago

    I’m really skeptical about this. I feel like it could make misinformation even worse. By letting users democratically label things as “true” or “false”, you’re encouraging users to rely on groupthink to decide what’s true, rather than encouraging users to think critically about everything they see. For example, if a user comes across a post that’s been voted as 90% true, they’ll probably be like “I don’t need to think critically about this because the community says it’s true, which means it must be true.”

    • alyaza [they/she]@beehaw.org (mod) · 6 points · 1 year ago

      For example, if a user comes across a post that’s been voted as 90% true, they’ll probably be like “I don’t need to think critically about this because the community says it’s true, which means it must be true.”

      yeah, it’s an interesting–and i’m not necessarily sure solvable–question of how you can design something to usefully combat misinformation which itself won’t eventually enshrine or be gamed to promote misinformation in a website context. twitter’s context feature is only selectively useful and a drop in the bucket. youtube has those banners on certain subjects but i’d describe them as basically useless to anyone who already believes misinfo.

      • BurningnnTree@lemmy.one · 7 points · 1 year ago

        I read a really good book called The Chaos Machine by Max Fisher, which talked about how political division in America (and the rest of the world) has been shaped by social media companies. He argued that it mostly comes down to content recommendation algorithms. Social media companies like to promote divisive and controversial content because it leads to increased engagement and ad revenue. Labeling news as fake isn’t going to help, when the algorithm itself is designed to promote attention-grabbing (fake) news.

        If Twitter wants to solve the issue of misinformation, the solution is simple: turn off all content recommendation, and just show people posts from the people they follow sorted from newest to oldest. But unfortunately that will never happen because that would cause a massive decline in user engagement.
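
        A rough sketch of the feed that comment describes, assuming a hypothetical Post record (not any real platform’s API):

        ```python
        from dataclasses import dataclass
        from datetime import datetime

        @dataclass
        class Post:                 # hypothetical minimal post record
            author: str
            created_at: datetime
            text: str

        def chronological_feed(posts: list[Post], following: set[str]) -> list[Post]:
            """No recommendation model: only followed authors, newest first."""
            return sorted(
                (p for p in posts if p.author in following),
                key=lambda p: p.created_at,
                reverse=True,
            )
        ```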

  • 1rre@lemmy.org.uk · 20 points · 1 year ago

    In theory, and in a study where people are actually paying attention to the changes, sure. But I think in actual fact people are going to inherently distrust things they dislike and want to trust things they like, so for all intents and purposes the two sets of buttons would be identical.

  • d3Xt3r@beehaw.org · 16 points · 1 year ago

    Slashdot had this covered years ago, literally decades.

    1. Upvotes limited to +5.
    2. Votes categorized: funny, informative, insightful, etc.
    3. Number of votes limited per time frame and by user karma.
    4. Meta-moderation: your votes (both up and down) were themselves subject to voting (correct/incorrect). A good score == more upvotes to spend.

    It’s a pity that Reddit and other sites didn’t follow this model. (A rough sketch of that kind of scoring is below.)
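
    A loose Python sketch of capped, categorized moderation in that spirit (not Slashdot’s actual implementation; names and limits are illustrative, and per-user vote budgets plus meta-moderation would sit on top of this):

    ```python
    from collections import Counter
    from dataclasses import dataclass, field

    CATEGORIES = {"funny", "informative", "insightful", "offtopic", "troll"}
    SCORE_CAP, SCORE_FLOOR = 5, -1        # Slashdot-style score ceiling and floor

    @dataclass
    class Comment:
        score: int = 1                                    # typical starting score
        votes: Counter = field(default_factory=Counter)   # category -> count

        def moderate(self, category: str, delta: int) -> None:
            """Apply one categorized moderation; delta is +1 or -1, score stays capped."""
            if category not in CATEGORIES:
                raise ValueError(f"unknown category: {category}")
            self.votes[category] += 1
            self.score = max(SCORE_FLOOR, min(SCORE_CAP, self.score + delta))

    c = Comment()
    c.moderate("insightful", +1)
    c.moderate("funny", +1)
    c.moderate("offtopic", -1)
    print(c.score, dict(c.votes))   # 2 {'insightful': 1, 'funny': 1, 'offtopic': 1}
    ```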

  • tojikomori@kbin.social · 14 points · 1 year ago

    It’s a fascinating idea, but it’s reading a lot into a small effect in a very small study group.

    The authors also don’t seem to dwell much on the effect that novelty itself has on study outcomes. This is a common problem with A/B testing: introducing any divergence from what’s familiar can draw participants’ attention and affect outcomes. “Like” and “Dislike” buttons are familiar for anyone who’s spent time on the internet, but “Trust” and “Distrust” buttons are novel and will be treated more carefully. If they were used widely then we’d eventually develop a kind of banner blindness to the language, and their effect on discernment would be further weakened.

    This approach could also overindex comments that express risk and uncertainty. Well-worded concerns and calls for “further study” are a time-honored way of disrupting progress (never forget the Simple Sabotage Field Manual) but often sound trustworthy.

    Which makes this comment rather ironic.

  • Pigeon@beehaw.org · 12 points · 1 year ago

    I… just straight up don’t buy that this would work out in reality at all. I think people will simply ignore what the buttons are called when the buttons do the same thing. And we’re already in a situation where people dispute the veracity of basic facts anyway.

    It makes me wonder how the study was designed and whether there’s an element of confusion in the reporting between “statistical significance”, aka an effect this large would arise by pure chance less than 5% of the time, and “actual, common parlance significance”, aka the effect observed is large and important. But I’m not curious enough to actually spend the time to look into it.
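
    As a toy illustration of that gap (made-up numbers and a simple two-proportion z-test, not the study’s actual analysis), a one-point difference can come out “statistically significant” with enough participants while still being practically tiny:

    ```python
    from math import sqrt
    from statistics import NormalDist

    # Made-up numbers: 20,000 users per condition, sharing 51% vs 50% true posts.
    n1 = n2 = 20_000
    p1, p2 = 0.51, 0.50

    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))

    print(f"z = {z:.2f}, p = {p_value:.3f}")   # z ≈ 2.00, p ≈ 0.046: "significant", yet only 1 point
    ```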

  • lvxferre@kbin.social · 10 points · 1 year ago

    I feel like the downvote button in particular should / could be multidimensional. People downvote content for multiple reasons: “this is incorrect”, “this is really dumb”, “this is off-topic”, “the poster is a jerk”, and so on.

    IMO this would combo really well with the experimental study in the OP.

      • lvxferre@kbin.social · 2 points · 1 year ago

        My idea is partially inspired by the Slashdot system, but I suggest doing it for downvotes instead of upvotes, for two reasons:

        1. Bad content usually has a single blatant flaw, but good content often has multiple qualities.
        2. People take negative feedback more seriously than positive feedback.

        As consequences of both points:

        • It’s easier for the user to choose the type of downvote than the type of upvote.
        • If you’re including multiple categories for an action, you’ll likely do it through multiple clicks. If downvotes require two clicks while upvotes require only one, you’re mildly encouraging people to upvote often and downvote less often (as it takes more effort); see the sketch below.
        • Negative feedback needs to be a bit more informative.
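
        A rough sketch of what such a categorized downvote could look like (hypothetical reason names, not tied to any existing platform):

        ```python
        from collections import Counter
        from enum import Enum

        class DownvoteReason(Enum):       # hypothetical reason set
            INCORRECT = "this is incorrect"
            LOW_EFFORT = "this is really dumb"
            OFF_TOPIC = "this is off-topic"
            HOSTILE = "the poster is a jerk"

        class Post:
            def __init__(self) -> None:
                self.upvotes = 0
                self.downvotes: Counter = Counter()   # reason -> count

            def upvote(self) -> None:                 # one click
                self.upvotes += 1

            def downvote(self, reason: DownvoteReason) -> None:   # two clicks: button, then reason
                self.downvotes[reason] += 1

            @property
            def score(self) -> int:
                return self.upvotes - sum(self.downvotes.values())

        p = Post()
        p.upvote()
        p.downvote(DownvoteReason.OFF_TOPIC)
        p.downvote(DownvoteReason.INCORRECT)
        print(p.score, {r.name: n for r, n in p.downvotes.items()})   # -1 {'OFF_TOPIC': 1, 'INCORRECT': 1}
        ```
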
  • emptyother@beehaw.org · 8 points · 1 year ago

    Same with upvotes and downvotes, I wonder? Some use them as “good post”/“bury post” buttons, but most use them as agree/disagree buttons, I believe. Maybe one could have different but clearly labeled buttons per sub or per content type.

  • Rentlar@beehaw.org · 6 points · 1 year ago

    Potentially it could be helpful, but I think most people will in effect use them as like/dislike buttons anyway, no matter what you call them.

  • neamhsplach@beehaw.org · 5 points · 1 year ago

    Thank you for the trigger warning haha!

    This would be a great feature to roll out, mostly because it encourages people to think critically in a way that’s as engaging (addictive?) as clicking “like” on a post. Great idea!

  • WhoRoger@lemmy.world · 5 points · 1 year ago

    I dunno. There are lots of topics where most people simply can’t tell what’s true. Having an opinion is fine, but this feels a tad too far.

    Personally I think it’s fine even if misinformation exists, as long as there’s a warning/banner like Twitter uses, noting that the general consensus is that it’s incorrect. You never know when something controversial may turn out to be true, but facts aren’t up to democracy.

    • alyaza [they/she]@beehaw.org (mod) · 4 points · 1 year ago

      You never know when something controversial may turn out to be true, but facts aren’t up to democracy.

      from a metaphysical standpoint sure, there might be inarguable facts–but i think the past few years especially have demonstrated that for social purposes almost all “facts” are kind of up to democracy, and many people have no interest in believing what is metaphysically true. i mean, we had people literally dying of COVID because they believed it was a hoax or because they believed in crank treatments of the virus.

      • WhoRoger@lemmy.world · 3 points · 1 year ago

        And I think it’s fine for anyone to believe anything and express it (unless they literally call for physical harm and such).

        But this is at the opposite end of the spectrum, with censorship at the other end. Neither extreme is ideal. As I said, I think having banners and warnings that a topic is controversial or disputed, or that the consensus is the opposite, seems like the best compromise.

        I mean, it would be fine if only people who actually understand the topic voted, but everyone would vote, and thus the trustworthiness rating loses its meaning imho.