What a silly article. $700,000 per day is ~$256 million a year. That's peanuts compared to the $10 billion they got from MS. With no new funding they could run for decades, and this is one of the most promising new technologies in years. MS would never let the company fail due to lack of funding; it's basically MS's LLM play at this point.
When you get articles like this, the first thing you should ask is “Who the fuck is Firstpost?”
Yeah where the hell do these posters find these articles anyway? It’s always from blogs that repost stuff from somewhere else
The difference is in who gets the ad money.
OpenAI's biggest spending is infrastructure, which is rented from… Microsoft. Even if the company folds, they will have given back to Microsoft most of the money invested.
MS is basically getting a ton of equity in exchange for cloud credits. That’s a ridiculously good deal for MS.
While the title is clickbait, they do say right at the beginning:
*Right now, it is pulling through only because of Microsoft's $10 billion funding*
Pretty hard to miss, and then they go on to explain their point, which might be wrong, but still stands. The $700k/day is only one model; there are others, plus making new ones and running the company. It's easily over $1B a year without making a profit. Still not significant, since people will pour money into it even after those $10B.
Almost every company uses either Google or Microsoft Office products, and we already know they're working on an AI offering/solution for O365 integration. They can see the writing on the wall here and are going to profit massively as they include it in their E5 license structure, or invent a new one that includes AI. Then they'll recoup that investment in months.
I mean, you’re correct in the sense Microsoft basically owns their ass at this point, and that Microsoft doesn’t care if they make a loss because it’s sitting on a mountain of cash. So one way or another Microsoft is getting something cool out of it. But at the same time it’s still true that OpenAI’s business plan was unsustainable hyped hogwash.
Their business plan got Microsoft to drop 10 billion dollars on them.
None of my shitty plans have pulled that off.
If they got any of that into their own pockets kudos to them.
Mainly they used it to pay for the tech and research and it’s all reverting back to Microsoft eventually. Going bankrupt is not quite the same as being acquired.
Also, their biggest expenses are cloud expenses, and they use the MS cloud, so that basically means that Microsoft is getting a ton of equity in a hot startup in exchange for cloud credits which is a ridiculously good deal for MS. Zero chance MS would let them fail.
There’s no way Microsoft is going to let it go bankrupt.
That's $260 million. There are 360 million paid seats of MS360. So they'd have to raise their prices by about $0.72 per year to cover the cost.
A no brainer.
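Back-of-the-envelope, a sketch using the figures from this thread (the reported $700k/day running cost and the 360 million paid seats mentioned above):

```python
daily_cost = 700_000            # reported ChatGPT running cost, USD/day
annual_cost = daily_cost * 365  # ~$255.5M per year
seats = 360_000_000             # paid Microsoft 365 seats (figure from above)

per_seat = annual_cost / seats
print(f"annual: ${annual_cost / 1e6:.0f}M, per seat: ${per_seat:.2f}/yr")
# prints: annual: $256M, per seat: $0.71/yr
```

So yeah, roughly 70 cents per seat per year.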
So they’ll raise the cost by $100/yr.
You mean 365 right? Or is there another MS product I’m unaware of?
Not trying to be a dick, I really don’t know given how expansive they are.
If there’s no path to make it profitable, they will buy all the useful assets and let the rest go bankrupt.
Microsoft reported profitability in their AI products last quarter, with a substantial gain in revenue from it.
It won’t take long for them to recoup their investment in OpenAI.
If OpenAI had been more responsible in how they released ChatGPT, they wouldn't be facing this problem. Just completely opening Pandora's box because they were racing to beat everyone else was extremely irresponsible, and if they go bankrupt because of it, then whatever.
There’s plenty of money to be made in AI without everyone just fighting over how to do it in the most dangerous way possible.
I’m also not sure Nvidia is making the right decision tying their company to AI hardware. Sure, they’re making mad money right now, but just like the crypto space, that can dry up instantly.
I don’t think you’re right about nvidia. Their hardware is used for SO much more than AI. They’re fine.
Plus their own AI products are popping off rn. DLSS and their frame generation one (I forget the name) are really popular in the gaming space.
I think they also have a new DL-based process for creating stencils for silicon photolithography which, in my limited knowledge, seems like a huge deal.
Couldn’t they charge a subscription? Or sell credits?
Genuine question.
The development of ChatGPT makes it even more convenient for users. From there, more and more free ChatGPT clones are born for new users to try.
I mean, apart from the fact it’s not sourced or whatever, it’s standard practice for these tech companies to run a massive loss for years while basically giving their product away for free (which is why you can use OpenAI with minimal if any costs, even at scale).
Once everyone’s using your product over competitors who couldn’t afford to outlast your own venture capitalists, you can turn the price up and rake in cash since you’re the biggest player in the market.
It’s just Uber’s business model.
Speaking of Uber, I believe it turned a profit for the first time this year. That is, it had never made any profit since its creation, whenever that was.
All it’s ever done is rob from its employees so it can give money to stockholders. Just like every corporation.
The difference is that the VC bubble has mostly ended. There isn’t “free money” to keep throwing at a problem post-pandemic. That’s why there’s an increased focus on Uber (and others) making a profit.
I don’t know anything about anything, but part of me suspects that lots of good funding is still out there, it’s just being used more quietly and more scrupulously, & not being thrown at the first microdosing tech wanker with a great elevator pitch on how they’re going to make “the Tesla of dental floss”.
In this case, Microsoft owns 49% of OpenAI, so they’re the ones subsidizing it. They can also offer at-cost hosting and in-roads into enterprise sales. Probably a better deal at this point than VC cash.
This is what caused spez at Reddit and Musk at Twitter to go into desperation mode and start flipping tables over. Their investors are starting to want results now, not sometime in the distant future.
huh, so with the 10bn from Microsoft they should be good for… almost 40 years!
ChatGPT has the potential to make Bing relevant and unseat Google. No way Microsoft pulls funding. Sure, they might screw it up, but they’ll absolutely keep throwing cash at it.
They seem to be killing Cortana… so I expect a new assistant at least partially based on this, tbh.
This article has been flagged on HN for being clickbait garbage.
It is clearly nonsense. But it satisfies the irrational need of the masses to hate on AI.
Tbf I have no idea why. Why do people hate an extremely clever family of mathematical methods, one which highlights the brilliance of human minds? But here we are, casually shitting on one of the highest peaks humanity has ever reached.
People are scared because it will make consolidation of power much easier, and make many of the comfier jobs irrelevant. You can’t strike for better wages when your employer is already trying to get rid of you.
The idealist solution is UBI but that will never work in a country where corporations have a stranglehold on the means of production.
Hunger shouldn’t be a problem in a world where we produce more food with less labor than anytime in history, but it still is, because everything must have a monetary value, and not everyone can pay enough to be worth feeding.
I agree with this. People should fight to democratize AI: public models, public data, public fair research. And they should fight misuse of it by the business-school types.
Because it’s just the same as autocomplete on your phone lol so whatevs.
/s
It seems to be a common thing. I gave up on /r/futurology and /r/technology over on Reddit long ago because it was filled with an endless stream of links to cool new things with comment sections filled with nothing but negativity about those cool new things. Even /r/singularity is drifting that way. And so it is here on the Fediverse too, the various “technology” communities are attracting a similar userbase.
Sure, not everything pans out. But that’s no excuse for making all of these communities into reflections of /r/nothingeverhappens. Technology does change, sometimes in revolutionary ways. It’d be nice if there was a community that was more upbeat about that.
I probably sound like I hate it, but I’m just giving my annual “this new tech isn’t the miracle it’s being sold as” warning, before I go back to charging folks good money to clean up the mess they made going “all in” on the last one.
HN is biased towards AI though so 🤷
Flagged where?
I was also curious about that; ChatGPT says Hacker News, and that makes sense to me.
Hacker News
Clickbait crap.
It’s Firstpost, their Kremlin-bootlicking YouTube videos are even worse. Just below Forbes Breaking News trash.
Yup. Uber was burning 10x that.
$7 million a day!?
Edit: it seems that is approximately correct. Wow
Glad I’m not the only one to think that.
It’s fine, I got my own LLaMA at home, it does almost the same as GPT.
Are you referring to https://gpt4all.io or something else? I was going to try this one but will welcome any recommendations.
In terms of flexibility, oobabooga’s text-generation-webui is the best, as it supports nearly every model format: GGML, GPTQ and Transformers.
As for the model itself, Nous-Hermes-Llama2 is one of the best for easy use.
I’d like to know too!
The biggest issue I find is it’s just not as easy to use as ChatGPT. I’m surprised no one has sold an easy-to-use consumer version.
Which one? What are your system specs?
I’ve been thinking about doing this too.
Uncensored Vicuna 7B, if I recall.
Nvidia RTX 3060, 20-core Intel CPU, 32GB RAM
The thing about all GPT models is that they’re based on word frequency to determine usage. Which means the only way to get good results is if they’re running on cutting-edge equipment designed specifically for that job, while being almost a TB in size. Meanwhile, diffusion models are only GBs, run on a consumer GPU, and still produce masterpieces because they already know what each word is associated with.
That would explain why ChatGPT started regurgitating cookie-cutter garbage responses more often than usual a few months after launch. It really started feeling more like a chatbot lately; it almost felt like talking to a human 6 months ago.
I don’t think it does. I doubt it is purely a cost issue. Microsoft is going to throw billions at OpenAI, no problem.
What has happened, based on the info we get from the company, is that they keep tweaking their algorithms in response to how people use them. ChatGPT was amazing at first. But it would also easily tell you how to murder someone and get away with it, create a plausible sounding weapon of mass destruction, coerce you into weird relationships, and basically anything else it wasn’t supposed to do.
I’ve noticed it has become worse at rubber-ducking non-trivial coding prompts. I’ve noticed that my juniors have a hell of a time functioning without access to it, and they’d rather ask questions of seniors than try to find information or solutions themselves, essentially replacing chatbots with Sr. devs.
A good tool for getting people on-ramped if they’ve never coded before, and maybe for rubber ducking, in my experience. But far too volatile for consistent work. Especially with a black box of a company constantly hampering its outputs.
As a Sr. Dev, I’m always floored by stories of people trying to integrate chatGPT into their development workflow.
It’s not a truth machine. It has no conception of correctness. It’s designed to make responses that look correct.
Would you hire a dev with no comprehension of the task, who cannot reliably communicate what their code does, cannot be tasked with finding and fixing their own bugs, is incapable of having accountability, cannot be reliably coached, is often wrong and refuses to accept or admit it, cannot comprehend PR feedback, and who requires significantly greater scrutiny of their work because it is by explicit design created to look correct?
ChatGPT is by pretty much every metric the exact opposite of what I want from a dev in an enterprise development setting.
Would you hire a dev with no comprehension of the task, who cannot reliably communicate what their code does, cannot be tasked with finding and fixing their own bugs, is incapable of having accountability, cannot be reliably coached, is often wrong and refuses to accept or admit it, cannot comprehend PR feedback, and who requires significantly greater scrutiny of their work because it is by explicit design created to look correct?
Not me, but my boss would… wait a minute…
Honestly once ChatGPT started giving answers that consistently don’t work I just started googling stuff again because it was quicker and easier than getting the AI to regurgitate stack overflow answers.
Search engines aren’t truth machines either. StackOverflow reputation is not a truth machine either. These are all tools to use. Blind trust in any of them is incorrect. I get your point, I really do, but it’s just as foolish as believing everyone using StackOverflow just copies and pastes the top rated answer into their code and commits it without testing then calls it a day. Part of mentoring junior devs is enabling them to be good problem solvers, not just solving their problems. Showing them how to properly use these tools and how to validate things is what you should be doing, not just giving them a solution.
I agree with everything you just said, but I think that without greater context it’s maybe still unclear to some why I still place ChatGPT in a league of its own.
I guess I’m maybe some kind of relic from a bygone era, because tbh I just can’t relate to the “I copied and pasted this from stack overflow and it just worked” memes. Maybe I underestimate how many people in the industry are that fundamentally different from how we work.
Google is not for obtaining code snippets. It’s for finding docs, for troubleshooting error messages, etc.
If you have like… Design or patterning questions, bring that to the team. We’ll run through it together with the benefits of having the contextual knowledge of our problem domain, internal code references, and our deployment architecture. We’ll all come out of the conversation smarter, and we’re less likely to end up needing to make avoidable pivots later on.
The additional time required to validate a chatGPT generated piece of code could have instead been spent invested in the dev to just do it right and to properly fit within our context the first time, and the dev will be smarter for it and that investment in the dev will pay out every moment forward.
I guess I see your point. I haven’t asked ChatGPT to generate code and tried to use it except for once ages ago but even then I didn’t really check it and it was a niche piece of software without many examples online.
Don’t underestimate C levels who read a Bloomberg article about AI to try and run their entire company off of it…then wonder why everything is on fire.
Copilot is pretty amazing for day to day coding, although I wonder if a junior dev might get led astray with some of its bad ideas, or too dependent on it in general.
Edit: shit, maybe I’m too dependent on it.
I’m also having a good time with copilot
Considering asking my company to pay for the subscription as I can justify that it’s worth it.
Yes, many times it is wrong, but even if it’s only 80% correct, at least I get a suggestion on how to solve an issue. Many times it suggests a function and the code snippet has something missing, but I can easily fix or improve it. Without it I would probably not know about that function at all.
I also want to start using it for documentation and unit tests. I think that’s where it will really be useful.
Btw if you aren’t in the chat beta I really recommend it
Just started using it for documentation, really impressed so far. Produced better docstrings for my functions than I ever do in a fraction of the time. So far all valid, thorough and on point. I’m looking forward to asking it to help write unit tests.
It honestly seems better suited for those tasks because it really doesn’t need to know anything that you’d have to tell it otherwise.
The code is already there, so it can get literally all the info that it needs, and it is quite good at grasping what the function does, even if sometimes it lacks the context of the why. But that’s not relevant for unit tests, and for documentation that’s where the user comes in. It’s also why it’s called copilot, you still make the decisions.
But what did they expect would happen, that more people would subscribe to Pro? In the beginning I thought they just wanted to survey-farm usage to figure out the most popular use cases, and then sell that information or repackage use cases as individual added-value services.
I am unsure about the free version, but I really am very surprised by how good the paid version with the code interpreter has gotten in the last 4-6 weeks. Feels like I have a C# syntax guru on 24/7 access. It used to make lots of mistakes a couple months ago, but rarely does now, and if it does, it almost always fixes it in the next code edit. It has saved me untold hours.
Link?
https://openai.com. You have to pay to get the code interpreter as it is part of the plus access. Worth it for me.
Thanks. I’ll have a look
It’s definitely become a part of a lot of people’s workflows. I don’t think OpenAI can die. But the need of the hour is to find a way to improve efficiency multifold. This will make it cheaper, more powerful and more accessible
I think they’re just trying to get people hooked, and then they’ll start charging for it. It even says at the bottom of the page when you’re in a chat:
*Free Research Preview. ChatGPT may produce inaccurate information about people, places, or facts. ChatGPT August 3 Version*
I don’t think it’s at all clear that that’s a viable business strategy in a market where that kind of sleight of hand is already well understood.
Except for the fact that they’ve said for the entire existence of chatgpt that it’s a free research preview.
At a $250mm/yr burn rate and a revenue of… a lot less than that, they can die pretty quickly
Agreed. But it will be a significant loss for a big chunk of people since other LLMs aren’t nearly as good as GPT-4.
That’s fine, I don’t care that there is a good LLM
Would help if they offered more payment options than just credit card, which isn’t really popular in many countries.
A company that just raised $10b from Microsoft is struggling with $260m a year? That’s almost 40 years of runway.
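Quick sanity check on that runway claim, a sketch using the figures from this thread (the $10B Microsoft investment and the ~$260M/yr burn rate):

```python
investment = 10_000_000_000  # Microsoft's reported $10B investment
burn_rate = 260_000_000      # ~$260M/yr (roughly $700k/day * 365)

runway_years = investment / burn_rate
print(f"{runway_years:.1f} years of runway")  # prints: 38.5 years of runway
```

Which ignores training costs, salaries, and everything else, but as a lower bound on "struggling" it makes the headline look pretty silly.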