• R00bot@lemmy.blahaj.zone

    I feel like the amount of training data these AIs require is a pretty compelling argument that AI is nowhere near human intelligence. It shouldn’t take thousands of human lifetimes of data to train an AI if it’s truly near human-level intelligence. In fact, I think it’s an argument for them not being intelligent whatsoever: with that much training data, everything that could be asked of them should already be in the training data, and yet they still fail at any task that isn’t.

    Put simply: a human needs less than one lifetime of training data to be more intelligent than AI. If throwing more training data and compute at the problem hasn’t solved this already, I don’t think it ever will.
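
    A rough back-of-the-envelope sketch of that scale gap (every number below is an assumption chosen only to show the order of magnitude, not a measured figure):

    ```python
    # Order-of-magnitude comparison: an LLM training corpus vs. one human
    # lifetime of reading. All constants are assumptions for illustration.

    corpus_tokens = 15e12        # assumed corpus size (~15 trillion tokens)
    words_per_minute = 200       # assumed adult reading speed
    reading_hours_per_day = 8    # assumed hours of reading per day
    lifespan_years = 80          # assumed lifespan
    tokens_per_word = 1.33       # rough tokens-per-word conversion

    words_per_lifetime = words_per_minute * 60 * reading_hours_per_day * 365 * lifespan_years
    tokens_per_lifetime = words_per_lifetime * tokens_per_word

    print(f"tokens read in one lifetime: {tokens_per_lifetime:.2e}")                          # ~3.7e+09
    print(f"lifetimes of reading in the corpus: {corpus_tokens / tokens_per_lifetime:,.0f}")  # ~4,000
    ```

    Under those assumptions the corpus works out to roughly four thousand lifetimes of non-stop reading, which is where the “thousands of human lifetimes” framing comes from.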

    • rdri@lemmy.world

      There is no “intelligence”; “AI” is a PR word. It’s just a language model that feeds on a lot of data.

      • R00bot@lemmy.blahaj.zone

        Oh yeah, we’re 100% agreed on that. I’m thinking of the AI evangelists who will argue tooth and nail that LLMs have “emergent properties” of intelligence, and that it’s simply a matter of more training data and compute before we get some digital god-being. Unfortunately these people exist, and they’re depressingly common, though their numbers have definitely dwindled since the AI hype died down.

        • noobdoomguy8658@feddit.org

          We’re very proficient at walking, but somehow we haven’t produced a walking machine or anything like that.

          It’s not very linear.

        • wizardbeard@lemmy.dbzer0.com

          Definitely not the same thing. Just because you can make use of the end result of major efforts does not somehow magically give you access to all the knowledge from those major efforts.

          You can use a smart phone easily, but that doesn’t mean you magically know how to make one.

    • stupidcasey@lemmy.world

      You’ve had the entire history of evolution to develop the instincts you have today.

      Nature vs. nurture is a huge ongoing debate.

      Just because it takes longer to train doesn’t mean it’s not intelligent; human kids develop more slowly than chimps.

      Also, “intelligent” doesn’t really mean anything. I personally think intelligence is the ability to distill unusable amounts of raw data and intuit a result beneficial to oneself, but very few people agree with me.

      • Peanut@sopuli.xyz

        I see intelligence as filling areas of concept space within an eco-niche in a way that proves functional for actions within that space. I think we are increasingly discovering that “nature” has little commitment and is just optimizing preparedness for the expected levels of entropy within the functional eco-niche.

        Most people haven’t even started paying attention to distributed systems building shared enactive models, but they are already capable of things that should be considered groundbreaking given the time and money spent on their development.

        That being said, localized narrow generative models are just building large individual models of predictive processing that don’t, by default, actively update their information.

        People who attack AI for just being prediction machines really need to look into predictive processing, or learn how much we organics just guess and confabulate on top of vestigial social priors.

        But no, corpos are using it, so computer bad, human good, even though the main issue here is the humans who have unlimited power and are encouraged into bad actions by flawed social posturing systems and the conflating of wealth with competency.

    • Todd Bonzalez@lemm.ee

      A human lifetime’s worth of video is nowhere close to equalling a human lifetime of actual corporeal existence, even in the perfect scenario where the AI is as capable as a human brain.

      • R00bot@lemmy.blahaj.zone

        Strange to equate the other senses with performance on intellectual tasks, but sure. Do you think feeding data from smells, touch, taste, etc. into an AI along with the video will suddenly make it intelligent? No, it will just make it better at guessing what something smells like. I think it’s very clear that our current approach to AI is missing something much more fundamental to thought than that; it’s not just a dataset problem.

  • Rhaedas@fedia.io

    Humans don’t live that long. That’s only about 1.5 million 30-minute videos, which isn’t a huge amount for a whole day’s worth of scraping.
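
    The arithmetic roughly checks out; a quick sketch (the lifespan figure is an assumption):

    ```python
    # How many 30-minute videos fit in one human lifetime? Plain arithmetic;
    # the lifespan is an assumed figure.

    lifespan_years = 85
    hours_lived = lifespan_years * 365.25 * 24   # total hours in a lifetime
    videos_30min = hours_lived * 2               # two 30-minute videos per hour

    print(f"{videos_30min:,.0f} thirty-minute videos")  # ≈ 1,490,000
    ```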

  • Kekzkrieger@feddit.org

    Instead of focusing on their products and improving them for everyone, some shitty CEO is pushing their shitty AI agenda down everyone’s throat.

    • Drewelite@lemmynsfw.com

      Well, it sounds like they are doing something to make their products better; you just disagree that it’s going to be successful.

    • Zetta@mander.xyz

      Nvidia’s biggest product is absolutely AI, by a massive landslide. I’m pretty sure I read that the point of downloading these videos and doing the training is to build a pipeline their AI customers can use to do the same with their own shit. (Can’t be bothered to double-check cuz I really don’t care)

      So they aren’t downloading all this video to make some crazy AI model; they’re downloading it to build a tool for their AI customers. You may not agree, but improving their product is exactly what they’re doing.

      • Agrivar@lemmy.world

        Can’t be bothered to double-check cuz I really don’t care

        For FUCK’S SAKE, why do you even bother posting your garbage opinions then? And with such authority, too!

  • SomeGuy69@lemmy.world

    So they use VMs to simulate user accounts; in the future this will be blocked, and whatever new AI startup comes along won’t have the option to do the same. Competition blocked. Forever.

  • Grimy@lemmy.world

    There are only a handful of video datasets, and all of them are owned by Google (through YouTube) or big Hollywood companies like Disney and Netflix.

    These companies are foaming at the mouth with rage thinking about what generative AI will do to their industry and how much it will help the currently nonexistent indie one. They will do whatever it takes to fence in the sandbox and make sure they get to be the toll man.

    This was never about whether AI gets to exist, but about who gets to own it. 404 Media is essentially a mouthpiece for these corporations, willingly or not, and the strengthening of copyright laws will not help consumers or small-time creators. The only exceptions are laws that force copyleft licenses onto models, which is not what is being pushed right now, and AOC’s deepfake act, which is well thought out imo.

    Anyone should be permitted to train on YouTube and Netflix data, and Nvidia might even open-source the result in any case.

    • Sconrad122@lemmy.world

      Nvidia does not have a strong history of open-sourcing things, to say the least. That last bit sounds like pure hopium.

      • Grimy@lemmy.world

        Their Nemotron 340B model was released under what is essentially an open-source licence (available for commercial use, except for shady things like spamming and collecting biometric data).

        Having a robust open-source ecosystem directly benefits Nvidia, since it means they sell more high-end consumer GPUs.

        Obviously, there’s a real chance this one isn’t open-sourced, since it’s a video model and there’s huge money involved. That doesn’t really change the fact that having YouTube and Netflix dictate who gets to make video models, and at what cost, isn’t a good idea.

      • trollbearpig@lemmy.world

        The guy you are replying to shows up in every AI post defending AI. He is probably heavily invested in this BS or being paid for it; don’t waste your time on him.

        • Grimy@lemmy.world

          Tbh, someone has to. Have you ever asked yourself whether the intense hate AI gets, and the fact that 99% of articles are against it, is organic?

          There’s a handful of companies poised to win big if they can put up a fence around AI while making sure the public can’t run strong models. There is an intense media campaign to make sure the public thinks either that AI is dangerous (so they can be the only ones legally allowed to distribute it) or that AI is theft (so they can be the only ones who can afford to build it).

          Do not let yourself be manipulated; almost all strengthening of copyright related to AI is completely against our interests.

          And no, I’m not getting paid lol. I have a vested interest because I use generative technology for work and for fun in my free time. I’m also interested in not handing our whole economy to Google and Microsoft on a silver platter; if I can help with a couple of comments a week, I will. Why don’t you explain why I’m wrong instead of throwing out baseless accusations?

          • trollbearpig@lemmy.world

            Nah my man, you are either brainwashed or being paid hahaha. Is copyright a mess? Of fucking course; I haven’t met a single person (except crazy-ass libertarians, funnily enough hahaha) who likes copyright. Are big corporations using copyright to exploit artists, create monopolies, and generally be dicks? Again, of fucking course.

            But anyone, like you, saying that we should just let AI effectively destroy copyright is a fucking prick, that simple. And your arguments are disingenuous at best, or outright lies. For example, just as the big copyright-holding companies are pushing to strengthen copyright law, the big tech companies are pushing to effectively destroy copyright through AI models. I have seen you pushing in multiple threads for open-source models like that’s a solution. But if you seriously researched the open-source software community, you would see that pretty much no one there agrees with your position, because it would effectively destroy the copyleft open-source licenses. After all, if an “AI” model, open source or not, is allowed to just “train” on my AGPL code and spit it back (with minor modifications at best) to an engineer at AWS, that’s it for my project. Amazon will do the Amazon thing and steal the project. So say goodbye to any software freedom we have.

            And let’s be 100% clear here: this is not being pushed by the evil copyright holders like you seem to imply (and they are totally evil, just to be clear hahahah). This is being pushed by the big tech companies and people like you spreading their propaganda. The fact that the copyright holders happen to be in the right this time is just a broken clock being right and all that, but it’s still good that they are pushing back against big tech. I do agree we have to keep an eye on them; the objective here can’t be to make copyright bigger, just to close the “loophole” that big tech companies are exploiting to steal everything.

            People like you who want to destroy copyright without offering any alternative that lets creatives make a living in a market are either misinformed or just assholes. Again, of fucking course it’s not an ideal system, but going full kamikaze and destroying any possibility for artists and creatives to make a living from their work is the most evil thing going on right now, so bad that the big copyright holders happen to fall on the less-bad side this time hahaha. And all for what? So people can be lied to by dumb chatbots? So people can create mediocre derivative “art” without putting in any effort? So we can get mediocre code autocomplete that is subtly wrong all the time? It’s fucking ridiculous.

            • 31337@sh.itjust.works

              After all, if an “AI” model, open source or not, is allowed to just “train” on my AGPL code and spit it back (with minor modifications at best) to an engineer at AWS, that’s it for my project. Amazon will do the Amazon thing and steal the project. So say goodbye to any software freedom we have.

              An engineer at AWS can already just copy your code, make minor modifications, and use it. I would think the same legal recourse would apply whether it was output by an LLM or just copy-pasted? This seems tangential to whether the LLM was trained on your code or not (though not training on your code obviously reduces the probability of the LLM spitting it back out near-verbatim). Personally, I don’t see anything wrong with anyone using public code to build statistical models. And I think the pay-to-scrape models that Reddit, Xitter, and others are employing will help big tech build the “moat” they’re looking for. Big tech is asking for AI regulation for similar reasons.

              • trollbearpig@lemmy.world

                An engineer at AWS can already just copy your code, make minor modifications, and use it.

                You are 100% wrong here, my man. If an engineer does this, they are creating a derivative work and have to fulfill the conditions of the code’s license. No wonder you don’t see anything wrong here; you AI people live in a fantasy world when it comes to how copyright works hahahaha. Please stop talking about shit you know nothing about.

                • 31337@sh.itjust.works

                  I stated that they can do this, and asked if they could be sued if they used near-verbatim code generated from an LLM, just like they could be sued if they copy-pasted AGPL code.

                  Edit: Tools like Copilot tell you if your code is similar to publicly available code, so you can avoid these issues.

                  Edit: Just looked up EFF’s position and I tend to agree with it:

                  Artificial Intelligence and Copyright Law

                  Artists are understandably concerned about the possibility that automatic image generators will undercut the market for their work. However, much of what is criticized is already considered fair use under copyright law, even if done at scale. Efforts to change copyright law to transform certain fair uses into infringement carry serious implications, are likely to interfere with the innovative potential of AI tools, and ultimately do not benefit artists. In fact, the use of these tools could expand the capacity of artists to create expressive works. Policymakers should emphasize the importance of human labor and investment in what receives copyright protection to maintain wages and dignity. Artists should be protected from efforts by large corporations to both substitute their labor with AI tools and create a new, unnecessary copyright regime around AI-generated art.

                  Machine Learning is a Fair Use

                  The process of machine learning for generative AI art is like how humans learn—studying other works—it is just done at a massive scale. Huge swaths of data (images, videos, and other copyrighted works) are analyzed and broken into their factual elements where billions of images, for example, could be distilled into billions of bytes, sometimes as small as less than one byte of information per image. In many instances, the process cannot be reversed because too little information is kept to faithfully recreate a copy of the original work.

                  The analysis work underlying the creation and use of training sets is like the process to create search engines. Where the search engine process is fair use, it is very likely that processes for machine learning are too. While the act of analysis may potentially implicate copyright, when that act is a necessary step to enabling a non-infringing use, it regularly qualifies as fair use. If the intermediate step were not permitted, fair use would be ineffective. As such, when factual elements of copyrighted works are studied and processed to create training sets—which, once again, is how we humans learn and are inspired by themes and styles in art and other works—that is likely to be found a fair use.

                  https://www.eff.org/document/eff-two-pager-ai

            • Doomsider@lemmy.world

              That is a long-winded way to say you are a copyright defender. Your insistence on finding an alternative to a broken system so rent-seekers can continue to exist is naive, to say the least.

              I think most people with your stance (don’t throw out the baby with the bathwater) really have no idea how broken copyright and intellectual property are.

              AI companies have already proven copyright is DOA. It was never designed for the little guy; that is just propaganda you have fallen prey to.

              Simply put, copyright was not needed for all of human history, and it is still not needed. Pretending you have a unique idea, song, painting, etc. in a world of billions of humans is beyond ridiculous.

              The concept was broken from the start because everything in science and art is iterative. Giving monopoly power to rent-seekers is the natural result of a broken concept.

  • riodoro1@lemmy.world

    Can we stop with this bullshit? Nobody will buy into it. WE DON’T WANT IT.

    • sunbytes@lemmy.world

      It’s not for you as a consumer.

      It’s to reduce your usefulness as a worker.

      Which would be lovely, if our value wasn’t calculated by our usefulness to the market.

    • boyi@lemmy.sdf.org

      Sorry, I disagree with this kind of generalisation. To be rational: just because you don’t want it doesn’t mean everyone else is in the same boat. I am very sure there are certain people who will benefit from this and want it.

  • rottingleaf@lemmy.world

    I’ve just had a thought:

    There’s a little country whose leadership still hasn’t been voted out and put behind bars for life, and the way it manages that is by constantly inventing new subjects for discussion. Some are outrageous, some show them in a good light, but the point is that everyone forgets the really bad things they’ve done (they are basically a collaborationist puppet government of a neighboring fascist country).

    I wonder if today’s world as a whole is showing itself in that little country.

    I recently read an article on Lemmy suggesting that the “AI” hype works the same way: https://theluddite.org/#!post/ai-hype - found it. The conclusion is very important.

    They are wasting enormous amounts of energy to build these “AIs”, collect training data and so on, to make oligopolized platforms and industries shittier and shittier.

    But we are wasting our energy, which is much more limited, to track myriads of false targets. We are like an air-defense system being saturated.

    No one has ever won a war by sitting on the defensive. We must search for critical joints to attack.

    Also no, voting for one of two candidates presented to you in some election is not that, and neither is arguing for one of two sides in a discourse presented to you. There are better and worse choices there, but that’s not what attack means.

  • noobdoomguy8658@feddit.org

    Obligatory fuck AI and the illiterate bros pushing it.

    What kind of videos, though? A lot of such material is very far from the kind of proper educational material we use to actually teach people, let alone to educate them well enough to be anywhere near trustworthy. Video is highly processed material, with years of preparation behind it once you consider the prior education of the individuals involved in the creative process: the past experiences silently influencing them, their initial knowledge of the subject picked up from school or elsewhere, their misconceptions, the iterations nobody ever sees, and many other things we don’t usually associate with the act of making a video, but which ultimately dictate a lot of the decisions and opinions that go into it.

    It’s one thing that the AI has no intelligence in it whatsoever, but the fact that it’s being pumped with information and “knowledge” in basically the reverse order doesn’t help it become any better.

    On the other hand, the entire thing is not about making something that works well, but about making something that sells well. And then there are people putting way too much faith in the thing and trusting it with far more than they should (which is also the case with a lot of other tech, admittedly).

    Some things of today are so damn unexciting.