DeepSeek launched a free, open-source large language model in late December, claiming it was developed in just two months at a cost of under $6 million.
Shovel vendors scrambling for solid ground as prospectors start to understand geology.
…that is, this isn’t yet the end of the AI bubble. It’s just the end of overvaluing hardware because efficiency increased on the software side; there’s still a whole software-side bubble to contend with.
They’re ultimately linked together in some ways (not all). OpenAI has already been losing money on every GPT subscription that they charge a premium for because they had the best product; now that there are equivalent AI products on the market that are much cheaper, that premium has to evaporate. This will shake things up on the software side too. They probably need more hype to stay afloat.
Quick, wedge crypto in there somehow! That should buy us at least two more rounds of investment.
Hey, Trump already did! Twice…
The software side bubble should take a hit here because:
Trained model made available for download and offline execution, versus locking it behind subscription-only cloud access. Not the first to do this, but the most famous so far.
It came from an unexpected organization, which throws a wrench in the assumption that one of the few known entities would “win it”.
The “bubble” in AI is predicated on proprietary software that’s been oversold and underdelivered.
If I can outrun OpenAI’s super secret algorithm with 1/100th the physical resources, the $13B Microsoft handed Sam Altman’s company starts looking like burned capital.
And the way this blows up the reputation of AI hype-artists makes it harder to induce investors to send US firms money. Why not contract with Hangzhou DeepSeek Artificial Intelligence directly, rather than ask OpenAI to adopt a model that’s better than anything they’ve produced to date?
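A back-of-envelope sketch of the capital gap this comment is pointing at, using only the two figures mentioned in the thread (Microsoft’s $13B investment and DeepSeek’s claimed sub-$6M training cost; the claim is taken at face value here, not verified):

```python
# Rough ratio between Microsoft's reported $13B OpenAI investment
# and DeepSeek's claimed <$6M training budget (claim taken at face value).
microsoft_investment = 13e9   # USD, figure cited in the thread
deepseek_training_cost = 6e6  # USD, DeepSeek's claimed upper bound

ratio = microsoft_investment / deepseek_training_cost
print(f"One such investment could fund ~{ratio:,.0f} training runs at the claimed cost")
# → prints ~2,167 training runs
```

Even if the claimed cost is off by an order of magnitude, the gap is large enough to make the “burned capital” framing above concrete.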
Great analogy
I really think GenAI is comparable to the internet in terms of what it will allow mankind in a couple of decades.
Lots of people thought the internet was a fad and saw no future for it …
Lots of techies loved the internet, built it, and were all early adopters. Lots of normies didn’t see the point.
With AI it’s pretty much the other way around: CEOs saying “we don’t need programmers anymore”, while people who understand the tech roll their eyes.
Back then the CEOs were babbling about information superhighways while the techies rolled their eyes.
I believe programming languages will become obsolete. You’ll still need professionals that will be experts in leading the machines but not nearly as hands on as presently. The same for a lot of professions that exist currently.
I like to compare GenAI to the assembly line when it was created, but instead of repetitive menial tasks, it’s repetitive mental tasks that it improves/performs.
Oh great, you’re one of them. Look, I can’t magically infuse tech literacy into you; you’ll have to learn to program and, crucially, understand how much programming is not about giving computers instructions.
Let’s talk in five years. There’s no point in discussing this right now. You’re set on what you believe you know and I’m set on what I believe I know.
And, piece of advice: don’t assume others lack tech literacy because they don’t agree with you. It just makes you look like a brat that can’t discuss things maturely, and invites the other party to be a prick as well.
Especially because programming is quite fucking literally giving computers instructions, despite what you believe keyboard monkeys do. You wanker!
What? You think “developers” are some kind of mythical beings that possess the mystical ability of speaking to the machines in cryptic tongues?
They’re a dime a dozen, the large majority of “developers” are just cannon fodder that are not worth what they think they are.
Ironically, the real good ones probably brought about their demise.
First off, you’re contradicting yourself: Is programming about “giving instructions in cryptic languages”, or not?
Then, no: developers are mythical beings who possess the magical ability of turning vague gesturing full of internal contradictions, wishful thinking, and outright psychotic nonsense dreamt up by some random coke-head in a suit into hard specifications, suitable to then go into algorithm selection and finally into code. Typing shit in a cryptic language is the easy part; also, it’s not cryptic, it’s precise.
You must be a programmer. Can’t understand shit of what you’re told to do and then blame the client for “not knowing how it works”. Typical. Stereotypical even!
Read it again moron, or should I use an LLM to make it simpler for your keyboard monkey brain?
Obvious troll is obvious.
That’s not the way it works. And I’m not even against that.
It still won’t work this way a few years later.
I’m not talking about this being a snap transition. It will take several years but I do think this tech will evolve in that direction.
I’ve been working with LLMs since month 1 and in these short 24 months things have progressed in a way that is mind boggling.
I’ve produced more and better than ever, and we’re developing a product that makes some repetitive “sweatshop” documentation tasks a thing of the past for people. It really is cool.
In part we agree. However there are two things to consider.
For one, the LLMs are pretty much plateauing now. So they are dependent on more quality input, which, basically, is exactly what they replace. So going forward, imo, the training will not be able to keep this up. (In other fields like nature etc. there’s comparatively endless input for training, so it will keep on working there.)
The other thing is, as we likely both agree, this is not intelligence. It has its uses. But you said it will replace programming, which in my opinion will never work: we’re missing the critical intelligence element. It might be there at some point. Maybe LLMs will help there, maybe not; we might see. But for now we don’t have that piece of the puzzle, and it will not be able to replace human work with (new) thought put into it.
Sure, but the .com bubble happened and the internet was still useful. Same with AI: being in a big bubble right now doesn’t mean it won’t be useful.
Oh yes, there definitely is a bubble, but I don’t believe that means the tech is worthless, not even close to worthless.
I don’t know. In a lot of use cases AI is kinda crap, but there are certain use cases where it’s really good. Honestly I don’t think people are giving enough thought to its utility in the early-middle stages of creative works, where an img2img model can take the basic composition from the artist and render it, then the artist can go in and modify and perfect it for the final product.

Also, video games that use generative AI are going to be insane in about 10-15 years. Imagine an open world game where it generates building interiors and NPCs as you interact with them, even tying the stuff the NPCs say into the buildings they’re in: an old sailor living in a house with lots of pictures of boats and boat models, or a warrior having tons of books about battle and decorative weapons everywhere, all in throwaway structures that would previously have been closed set dressing. Maybe they’ll even find sane ways to create quests on the fly that don’t feel overly cookie-cutter? Life changing? Of course not, but definitely a cool technology with a lot of potential.
Also, realistically I don’t think there’s going to be long-term use for AI models that need a quarter of a datacenter just to run, and they’ll all get tuned down to what can run directly on a phone efficiently. Maybe we’ll see some new accelerators become commonplace, maybe we won’t.
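To put a rough number on “run directly on a phone”: a sketch of how much memory just the weights take at different precisions, assuming a hypothetical 7B-parameter model (an illustrative size, not any specific product) and ignoring activation and KV-cache memory:

```python
# Back-of-envelope memory footprint for a language model's weights at
# different numeric precisions. 7B parameters is a hypothetical "small"
# model size used only for illustration.
def weights_gb(num_params: float, bits_per_param: float) -> float:
    """GB needed just to hold the weights (ignores activations / KV cache)."""
    return num_params * bits_per_param / 8 / 1e9

params = 7e9
for bits in (16, 8, 4):
    print(f"{bits:>2}-bit: {weights_gb(params, bits):.1f} GB")
# 16-bit: 14.0 GB (GPU territory), 4-bit: 3.5 GB (plausibly phone RAM)
```

This is the basic reason quantization is central to on-device inference: halving the bits per weight halves the memory, at some cost in quality.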
There is no bubble. You’re confusing GPT with AI.