(Please don’t downvote just because I need some help.)

I was once a privacy nut. But it’s getting so hard nowadays, and there are so many more important problems – global warming, AI, the inevitable collapse of the current world order… how does privacy improve the world? Please help remind me.

I do approve of privacy, of course. All this protect-the-children flak is bullshit. I just can’t remember why I thought it was something worth fighting for and preaching about.

  • jsomae@lemmy.mlOP · 5 months ago

    AI could kill everyone, though it most likely won’t IMO. 10% chance I think. That’s still very bad though. Despite the fact that Ilya Sutskever, Geoff Hinton, MIRI, heck even Elon Musk have expressed varying degrees of concern about this, it seems the risk here is largely dismissed because it sounds too much like science fiction. If only science fiction writers had avoided the topic!

    • azuth@sh.itjust.works · 4 months ago

      This is bullshit. AI will be hunting down survivors? Thus more lethal than nuclear war? GPT-4 will be better at it?

      Most of these concerns seem to be about AGI, which we are nowhere close to having and have no clear path to. Our "AI"s not only don’t understand causality, they don’t even have the ability to perform arithmetic. Nor do they run anything that could kill humans. Except if you consider Tesla’s FSD an AI system, but Musk assured us back in 2017 it would be safe…

      • jsomae@lemmy.mlOP · edited · 4 months ago

        Where did you get the idea that GPT-4 is capable of this? This is a concern for 10+ years from now, assuming AI makes the same strides it has in the past 10 years, which is not guaranteed at all.

        I think there are probably 3-5 big leaps still required, on the order of the invention of transformer models, deep learning, etc., before we have superintelligence.

        Btw, humans are also bad at arithmetic. That’s why we have calculators. If you don’t understand that LLMs use RAG, langchain (or similar), and so on, you clearly don’t understand the scope of the problem. A superintelligence doesn’t need access to anything in particular except, say, email or chat to destroy the world.
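        To make the calculator point concrete: in the usual tool-use pattern, the model never does the arithmetic itself. It emits a tool call, and a plain deterministic function computes the result. A minimal sketch of that loop (no real model here; `fake_llm` is a stand-in for an actual API call, and the tool names are made up):

        ```python
        # Sketch of the LLM tool-use pattern: the "model" delegates
        # arithmetic to a deterministic calculator tool instead of guessing.
        import ast
        import operator

        # Operators the calculator tool supports.
        OPS = {
            ast.Add: operator.add,
            ast.Sub: operator.sub,
            ast.Mult: operator.mul,
            ast.Div: operator.truediv,
        }

        def calculator(expr: str):
            """Safely evaluate a basic arithmetic expression via the AST."""
            def ev(node):
                if isinstance(node, ast.Constant):
                    return node.value
                if isinstance(node, ast.BinOp):
                    return OPS[type(node.op)](ev(node.left), ev(node.right))
                raise ValueError("unsupported expression")
            return ev(ast.parse(expr, mode="eval").body)

        def fake_llm(prompt: str) -> dict:
            """Stand-in for a real model call: rather than computing the
            answer itself, it 'decides' to invoke the calculator tool."""
            return {"tool": "calculator", "args": prompt}

        TOOLS = {"calculator": calculator}

        def answer(question: str):
            call = fake_llm(question)                  # model picks a tool
            return TOOLS[call["tool"]](call["args"])   # runtime executes it

        print(answer("137 * 4 + 12"))  # 560
        ```

        Frameworks like langchain mostly formalize this dispatch loop; the model’s weakness at raw arithmetic stops mattering once it can hand the expression off.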