• Juice · 7 months ago

    AI doesn’t get better. It’s completely dependent on computing power. They’re dumping all the power into it they can, and it still sucks. The larger the dataset, the more power it takes to search it all. Your imagination is infinite; computing power is not. You can’t keep throwing electricity at a problem. It was pushed out because there was a bunch of excess computing power after crypto crashed, or semi-stabilized. It’s an excuse to lay off a bunch of workers after COVID who were going to get laid off anyway. Managers were like, sweet, I’ll trim some excess employees and replace them with AI! Wrong. It’s a grift.

    It might hang on for a while, but policy experts are already looking at the amount of resources being thrown at it and getting wary. The technological ignorance you’re responding to? That’s you. You don’t know how the economy works and you don’t know how AI works, so you’re just believing all this Roko’s basilisk nonsense out of an overactive imagination. That’s not an insult; lots of people are falling for it. AI companies are straight-up lying, and the media is stretching the truth to the point of breaking. But I’m telling you: don’t be a sucker. Until there’s a breakthrough that fixes the resource consumption issue by orders of magnitude, I wouldn’t worry too much about Ellison’s AM becoming a reality.
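
    To put rough numbers on the electricity point, here’s a back-of-envelope sketch in Python. It uses the widely cited C ≈ 6·N·D training-compute heuristic; the parameter counts, token counts, and hardware efficiency below are illustrative assumptions, not measured figures for any real model.

    ```python
    # Back-of-envelope: training compute and energy for a large language model.
    # Heuristic: total training FLOPs C ~= 6 * N * D, where N = parameters
    # and D = training tokens. Every concrete number here is an assumption
    # for illustration, not a published figure for any real system.

    def training_energy_kwh(params: float, tokens: float,
                            flops_per_joule: float = 5e10) -> float:
        """Estimate training energy in kWh.

        flops_per_joule is an assumed effective efficiency (~50 GFLOP/J,
        folding in utilization, interconnect, and cooling overheads).
        """
        flops = 6 * params * tokens          # total training FLOPs
        joules = flops / flops_per_joule     # convert compute to energy
        return joules / 3.6e6                # 1 kWh = 3.6e6 J

    # Hypothetical model sizes, each 100x the last, with tokens scaled too.
    for n, d in [(1e9, 2e10), (1e11, 2e12), (1e12, 2e13)]:
        print(f"{n:.0e} params, {d:.0e} tokens -> "
              f"{training_energy_kwh(n, d):,.0f} kWh")
    ```

    Because the parameter count and the dataset grow together, scaling the model 10x and the data 10x grows the energy bill 100x, which is the “you can’t keep throwing electricity at it” point in numbers.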

    • verdare [he/him]@beehaw.org · 7 months ago

      I find it rather disingenuous to summarize the previous poster’s comment as a “Roko’s basilisk” scenario. That’s intentionally picking a ridiculous argument to characterize the other side of the debate. I think they were pretty clear about the actual threats (some more plausible than others, IMO).

      I also find it interesting that you so confidently state that “AI doesn’t get better,” under the assumption that our current deep learning architectures are the only way to build AI systems.

      I’m going to make a pretty bold statement: AGI is inevitable, assuming human technological advancement isn’t halted altogether. Why can I so confidently state this? Because we already have GI without the A. To say that it is impossible is, to me, equivalent to arguing that there is something magical about the human brain that technology could never replicate. But brains aren’t magic; they’re incredibly sophisticated electrochemical machines. It is only a matter of time before we find a way to replicate “general intelligence,” whether it’s through new algorithms, new computing architectures, or even synthetic biology.

      • Juice · 7 months ago (edited)

        I wasn’t debating you. I have debates all day with people who actually know what they’re talking about; I don’t come to the internet for that. I was just looking out for you, and for anyone else who might fall for this. There is a hard physical limit. I’m not saying the things you’re describing are technically impossible; I’m saying they’re technically impossible with this version of the tech. Slapping a predictive text generator on a giant database is too expensive, and it doesn’t work. It’s not a debate, it’s science. And not the fake kind run by corporate interests, but the real thing, based on math.
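
        If “predictive text generator” sounds flippant, here’s the idea at its absolute simplest: a toy bigram model that only ever guesses the next word from counts of what followed it before. This is a deliberately minimal sketch of the prediction task, not how any production model is built.

        ```python
        import random
        from collections import Counter, defaultdict

        # Toy "predictive text generator": a bigram model that predicts the
        # next word purely from counts seen in its training text. Real LLMs
        # are vastly larger neural networks, but the core task is the same:
        # predict the next token.

        corpus = "the cat sat on the mat and the cat ran".split()

        follows: dict[str, Counter] = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev][nxt] += 1

        def generate(start: str, length: int = 8) -> str:
            word, out = start, [start]
            for _ in range(length):
                options = follows.get(word)
                if not options:
                    break  # dead end: this word never had a successor
                # sample the next word in proportion to observed frequency
                words, counts = zip(*options.items())
                word = random.choices(words, weights=counts)[0]
                out.append(word)
            return " ".join(out)

        print(generate("the"))
        ```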

        There’s gonna be a heatwave this week in the Western US, and there are almost constant deadly heatwaves in many parts of the world from burning fossil fuels. But we can’t stop producing electricity to run these scam machines because someone might lose money.

    • localhost@beehaw.org · 7 months ago

      Your opening sentence is demonstrably false. GPT-2 was a shitpost generator, while GPT-4’s output is hard to distinguish from a genuine human’s. DALL-E 3 is better than its predecessors at pretty much everything. Yes, generative AI right now is getting better mostly by feeding it more training data and making the models bigger. But it keeps getting better, and there’s no cutoff in sight.
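
      The “bigger keeps getting better” pattern isn’t just vibes, either. Published scaling-law work (Kaplan et al., 2020) fits language-model loss to a power law in parameter count, roughly L(N) = (Nc/N)^α. A quick sketch; the constants are close to the paper’s fitted values but are used here purely to show the shape:

      ```python
      # Scaling-law illustration: loss falls as a power law in parameters,
      # L(N) = (Nc / N) ** alpha (Kaplan et al., 2020). Constants below are
      # roughly the paper's fits, used here only to show the trend.

      Nc, alpha = 8.8e13, 0.076

      def loss(n_params: float) -> float:
          return (Nc / n_params) ** alpha

      # GPT-2 (1.5B) and GPT-3 (175B) sizes are published; the GPT-4-scale
      # figure is an unconfirmed rumor, included only as an assumption.
      for n in [1.5e9, 1.75e11, 1.8e12]:
          print(f"{n:.2e} params -> loss {loss(n):.3f}")
      ```

      The curve flattens but never hits a hard wall, which is exactly the “no cutoff in sight” point.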

      That you can straight-up comment “AI doesn’t get better” in a tech-literate sub and not be called out is honestly staggering.

      • Ilandar@aussie.zone · 7 months ago

        That you can straight-up comment “AI doesn’t get better” in a tech-literate sub and not be called out is honestly staggering.

        I actually don’t think it is, because, as I alluded to in another comment in this thread, so many people are still completely in the dark on generative AI, even in general technology-themed areas of the internet. Their only understanding of it comes from reading the comments of morons (because none of these people ever actually read the linked article) who regurgitate the same old “big tech is only about hype, techbros are all charlatans from the capitalist elite” lines for karma/retweets/likes, without ever taking the time to hear what people working within the field (i.e. experts) are saying. People underestimate the capabilities of AI because it fits their political worldview, and in doing so they become sitting ducks for the very real threats it poses.

      • Juice · 7 months ago

        The difference between GPT-3 and GPT-4 is the number of parameters, i.e. processing power. I don’t know what the difference between 2 and 4 is; maybe there were some algorithmic improvements. At this point, I don’t know what algorithmic improvements are going to net efficiencies of the “orders of magnitude” that would be necessary to yield noticeable improvement in the technology. The difference between 3 and 4 is hundreds of billions of parameters vs. (reportedly) trillions. Is a GPT-5 going to have tens of trillions of parameters? No.
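
        For scale, here’s the rough per-token inference cost under the common heuristic of ~2 FLOPs per parameter per generated token. The GPT-2 and GPT-3 counts are published; the GPT-4 figure is an unconfirmed rumor, so treat it as an assumption.

        ```python
        # Rough inference cost vs. parameter count, using the common
        # ~2 FLOPs per parameter per generated token heuristic.

        models = {
            "GPT-2": 1.5e9,             # published
            "GPT-3": 1.75e11,           # published
            "GPT-4 (rumored)": 1.8e12,  # unconfirmed rumor, assumption only
        }

        TOKENS = 1_000  # one moderately long response

        for name, n_params in models.items():
            flops = 2 * n_params * TOKENS
            print(f"{name:>16}: {flops:.1e} FLOPs for {TOKENS} tokens")
        ```

        Each generation has cost roughly an order of magnitude more compute per token, which is why “just scale it again” keeps getting more expensive.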

        Tech-literate people are apparently just as susceptible to this grift, maybe more susceptible, from what little I understand of behavioral economics. You can poke holes in my argument all you want; this isn’t a research paper.