• kibiz0r · 2 days ago (edited)

    Well yeah. I mean, the big companies hire psychologists to conduct user studies to maximize time on device, and they model their user experience after variable reward schedules from slot machines. Seems obvious that they’re nefarious.

    I just have no idea how you can effectively regulate big tech.

    At every corner, the fundamental dynamic of big tech seems to be: Do the same exploitative, antisocial things that we decided long ago should be illegal… but do it through indirect means that make it difficult or impossible to regulate.

    If you change the definition of employment so that gig-work apps like Uber count as employers, they’ll just change their model to avoid the new definition.

    If you change the definition of copyright infringement so that existing AI systems are open to prosecution, they’ll just add another level of obfuscation to the training data or something.

    I’m glad they’re willing to do something, but there has to be a more robust approach than this whack-a-mole game we’re playing.

    Edit: And to be clear, I’m also concerned about the collateral damage that any regulation might cause to grassroots, independent stuff like Lemmy… but I think that’s pretty unlikely. The political environment in the US is such that it’s way, way more likely we do nothing, or make a tiny token effort, and just let Meta/Google/whoever fully colonize our neurons in the end anyway.