• Flying Squid@lemmy.world

    I am hearing a lot of people who are not young discuss this topic, and I would really like to hear from young social media users whether it would have any effect on them.

    The article brings up this:

    Critics of the proposed warning label seized on this point. They ask, “Where is the series of definitive scientific studies to prove that social media and phone use destroy teens and adolescents?” They want the same kind of proof that eventually jolted our public health response to tobacco: blackened lungs, cancer research and thousands dead.

    I don’t agree with the critics personally, but if I were their age and I saw a Surgeon General’s warning about social media when I logged in, I might wonder why they’re doing this when there’s no conclusive evidence of harm.

    I should also point out that Surgeon General’s warnings on cigarette packages began in 1966. You can see in the data below that there is definitely a steady downward trend, but it started long before the warnings began, and the warnings don’t seem to have been a big factor, since there was no sharp drop once they appeared. Smoking did fall among women after having risen, but it had already been falling among men, and I am not convinced there is a strong correlation between the package warnings and the decline among women.

    https://www.cdc.gov/mmwr/preview/mmwrhtml/mm4843a2.htm

    • disguy_ovahea@lemmy.world

      In all honesty, I knew cigarettes were bad for me when I was a teenage smoker. I didn’t care, because the repercussions seemed so far away that there was no way to know what would happen. Similarly, I’m sure many users think they’re immune to the deception of social media.

      Cigarette companies began adding trace toxins like formaldehyde, cyanide, benzene, and cadmium to compound the addictive properties of nicotine. These additives enhance the feeling of withdrawal after even just one cigarette.

      The addictive design of social media algorithms, fueled by psychographic profiling, is very similar. The software monitors every bit of input available, from the obvious signals (likes, comments, subscriptions, searches, and shares) to the subtler ones (pausing videos, scrolling hesitations, zooming, screenshots, and downloads). On less secure devices, the microphone and camera can be activated, mouse or finger position can be tracked, and contacts and messages can be scraped. A rough sketch of what collecting those signals could look like is below.
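
      Here’s a minimal sketch of what that kind of signal collection could look like in a browser, purely as an illustration. The event names and the /track endpoint are made up; this isn’t any real platform’s code.

      ```typescript
      // Hypothetical sketch of client-side engagement tracking.
      // Nothing here is taken from a real platform; names are invented.
      type EngagementEvent = {
        kind: string;     // e.g. "video-pause", "scroll-hesitation"
        target?: string;  // id of the element involved, if any
        at: number;       // timestamp in ms
      };

      const events: EngagementEvent[] = [];

      function record(kind: string, target?: string): void {
        events.push({ kind, target, at: Date.now() });
      }

      // Video pauses: "pause" doesn't bubble, so listen in the capture phase.
      document.addEventListener(
        "pause",
        (e) => record("video-pause", (e.target as HTMLElement).id),
        true
      );

      // Scroll hesitation: if no scrolling happens for 1.5s, treat whatever
      // is on screen as something the user lingered on.
      let scrollTimer: number | undefined;
      window.addEventListener("scroll", () => {
        window.clearTimeout(scrollTimer);
        scrollTimer = window.setTimeout(() => record("scroll-hesitation"), 1500);
      });

      // Periodically ship the batch to an imaginary analytics endpoint.
      setInterval(() => {
        if (events.length > 0) {
          navigator.sendBeacon("/track", JSON.stringify(events.splice(0)));
        }
      }, 10_000);
      ```

      Even this crude version captures behavior the user never consciously “submitted,” which is the whole point.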

      I think your comparison is more accurate than most people realize. The US tobacco industry is worth $108B as of 2024; the US ad industry is worth $262B. The latter is far more powerful and far less regulated.

      • Flying Squid@lemmy.world

        I was the same way when I started smoking in high school in the 90s. “I’ll be fine if I quit before I’m 30.” We were under no illusions. We called them death sticks, like plenty of other people who smoked. My wife used to say “glad I’m not pregnant” when she saw the pregnancy warning. Thankfully, we both quit many years ago and apparently haven’t suffered any long-term repercussions, but who knows in 20 or 30 years?

        And that was something we knew killed lots of people.

        • disguy_ovahea@lemmy.world

          Cancer sticks here. I also quit over ten years ago. I’m grateful I haven’t experienced any long-term repercussions.

          If you haven’t seen them, I highly recommend these documentaries on social media psychographics and their influence on the 2016 US election and the Brexit vote. They’re both very accessible, and the information comes directly from the experts who created this software and have since left the field.

          The Social Dilemma

          https://www.netflix.com/us/title/81254224

          The Great Hack

          https://www.netflix.com/us/title/80117542

    • Zorsith@lemmy.blahaj.zone

      It’s weird to me (and I feel old for saying it), but the warnings were already there for the internet. The biggest things taught about it, which I learned in like 2nd grade, were:

      1. Not everything on the internet is true.

      And

      2. Anything you put on the internet is there forever.

      It feels like there’s been a distinct lack of education on how to interact with the internet since, I wanna say, 2010-2015-ish. The warning labels were removed, and the internet has only gotten more insidious since.

      • Flying Squid@lemmy.world

        Honestly, it seems like people my age (late 40s) and older are the ones who have trouble understanding that, since we weren’t taught it in school. So maybe we are the ones who need to see these warnings.

  • nondescripthandle@lemmy.dbzer0.com

    I wonder if it’s going to be all social media or just arbitrary companies that the government decides aren’t healthy. Both sound silly, because either we end up with warnings on almost every site with social aspects (yes, including the entire fediverse), or we’re relying on the government to tell us what’s harmful without conclusive studies.

  • kibiz0r

    Well yeah. I mean, the big companies hire psychologists to conduct user studies aimed at maximizing time on device, and they model their user experience on the variable reward schedules of slot machines. It seems obvious that they’re nefarious.
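
    For anyone who hasn’t seen the slot-machine pattern spelled out, here’s a toy version of a variable-ratio reward schedule. The 30% hit rate is a number I made up for illustration, not anything measured from a real feed.

    ```typescript
    // Toy variable-ratio schedule: each refresh "pays out" with a fixed
    // probability, so rewards arrive unpredictably. The hit rate is invented.
    function refreshFeed(hitRate = 0.3): boolean {
      return Math.random() < hitRate; // did this pull surface something novel?
    }

    let pulls = 0;
    let rewards = 0;
    while (rewards < 5) {
      pulls++;
      if (refreshFeed()) rewards++;
    }
    console.log(`${pulls} refreshes to get ${rewards} rewards`);
    ```

    Unpredictable payouts are what make this schedule so hard to quit: you never know whether the next pull is the one, so you keep pulling.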

    I just have no idea how you can effectively regulate big tech.

    At every corner, the fundamental dynamic of big tech seems to be: do the same exploitative, antisocial things that we decided long ago should be illegal, but do them through indirect means that make them difficult or impossible to regulate.

    If you change the definition of employment so that gig-work apps like Uber become employers, they’ll just change their model to avoid the new definition.

    If you change the definition of copyright infringement so that existing AI systems are open to prosecution, they’ll just add another level of obfuscation to the training data or something.

    I’m glad they’re willing to do something, but there has to be a more robust approach than this whack-a-mole game we’re playing.

    Edit: And to be clear, I am also concerned about the collateral damage that any regulation might cause to grassroots, independent stuff like Lemmy, but I think that’s pretty unlikely. The political environment in the US is such that it’s way, way more likely that we do nothing, or make a tiny token effort, and just let Meta/Google/whoever fully colonize our neurons in the end anyway.