I mean, it's pretty obviously hype when they advertise the technology itself instead of the capabilities it could provide.
Still waiting for that first good use case for LLMs.
It is legitimately useful for getting started with using a new programming library or tool. Documentation is not always easy to understand or easy to search, so having an LLM generate a baseline (even if it’s got mistakes) or answer a few questions can save a lot of time.
I used to think that too, so I gave it a try, as I'm a software dev. I personally didn't find it that useful, as in I wouldn't pay for it.
Usually when I want to get started, I look up a basic guide and copy their entire example. You could do that with ChatGPT too, but what if it gave you wrong answers?
I also asked it more specific questions about how to do X in tool Y, something I couldn't quickly google. It didn't give me a correct answer, mostly because the question was rather niche.
So my conclusion was that it may help people who don't know how to google, or who are learning a very well-known tool/language with lots of good docs, but for those who already know how to use the industry tools, it's basically an expensive hint machine.
In all fairness, I'll probably use it here and there, but I wouldn't pay for it. Also, note my example was ChatGPT-specific. I've heard some companies might use it to make their docs more searchable, which imo might be the first good use case (once it happens lol).
I just recently got Copilot in VS Code through work. I typed a comment that said, "create a new model in sqlalchemy named assets with the columns a, b, c, d". It couldn't know the proper data types to use, but it output everything perfectly, including my custom-defined annotations; the only catch was that it used the same annotation for every column, which I then had to update. As a test, that was great, but Copilot also picked up a SQL query I had written in a comment for reference while making my models, and it generated that entire model for me as well.
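For a sense of what that looks like, here's a minimal sketch (my illustration, not the commenter's actual code) of the kind of model such a comment-prompt produces; the intpk/str50 annotated types are hypothetical stand-ins for the "custom defined annotations" mentioned above:

```python
# Sketch of a Copilot-style completion from a comment prompt.
from typing import Annotated

from sqlalchemy import String
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

# Hypothetical custom annotations of the sort Copilot picked up and reused
intpk = Annotated[int, mapped_column(primary_key=True)]
str50 = Annotated[str, mapped_column(String(50))]

class Base(DeclarativeBase):
    pass

# create a new model in sqlalchemy named assets with the columns a, b, c, d
class Asset(Base):
    __tablename__ = "assets"

    id: Mapped[intpk]
    a: Mapped[str50]
    b: Mapped[str50]  # Copilot guessed the same annotation for every column,
    c: Mapped[str50]  # so the real data types had to be fixed by hand
    d: Mapped[str50]
```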
It didn't do anything that I didn't know how to do, but it saved some typing effort. I use it mostly for its autocomplete functionality and letting it suggest comments for me.
That's awesome, and I would probably find those tools useful.
Code generators have existed for a long time, but they are usually free. These tools actually cost a lot of money; it costs way more to generate code this way than the traditional way.
So idk if it will be worth it once the venture capital money dries up.
That’s fair. I don’t know if I will ever pay my own money for it, but if my company will, I’ll use it where it fits.
What are these code generators that have existed for a long time?
Look up Emmet: you type an abbreviation like ul>li*3 and it expands into the full HTML list markup.
I’ve also found IntelliJ’s generators useful for Java.
Neither of those seems similar to GitHub Copilot, other than that they can reduce keystrokes for some common tasks, and their actual applicability seems narrow. Frequently I use GitHub Copilot for "implement this function based on this doc comment I wrote" or "write docs for this class/function". It's the natural language component that makes the LLM approach useful.
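As a concrete (made-up) example of that doc-comment workflow: you write the signature and docstring, and Copilot typically proposes a body along these lines.

```python
# Illustration only: docstring-driven completion of the kind described above.
def chunk(items: list, size: int) -> list[list]:
    """Split `items` into consecutive sublists of at most `size` elements.

    chunk([1, 2, 3, 4, 5], 2) -> [[1, 2], [3, 4], [5]]
    """
    if size <= 0:
        raise ValueError("size must be positive")
    return [items[i:i + size] for i in range(0, len(items), size)]
```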
There are also auto doc generators.
I think what you’re specifically referring to is accessibility or ease of use. For someone unfamiliar with those tools, I can see the appeal.
Personally, as a software dev, I think it's just a very inefficient way to accomplish this goal. LLMs consume vastly more resources than a simple script, so I wouldn't use one, especially if I'm paying real money for it.
I'm actually working on a vector DB RAG system for my own documentation. Even in its rudimentary stages, it's been very helpful for finding functions in my own code when I don't remember exactly which project I implemented them in but have a vague idea of what they did.
E.g.

"Have I ever written a bash function that orders non-semver GitHub branches?"

"Yes! In your 'webwork automation' project, starting on line 234, you wrote a function that sorts Git branches based on WebWork's versioning conventions."
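A minimal sketch of the retrieval step in a setup like that, assuming the sentence-transformers library; the comment doesn't specify the actual vector DB, chunking, or indexing details, so the snippet index here is hypothetical:

```python
# Semantic search over indexed code snippets (retrieval half of RAG).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical index: a short description of each function plus its location.
snippets = [
    ("sorts git branches by WebWork versioning conventions", "webwork automation:234"),
    ("parses cron expressions into human-readable text", "scheduler:88"),
]

corpus_embeddings = model.encode([text for text, _ in snippets])

query = "bash function that orders non-semver GitHub branches"
scores = util.cos_sim(model.encode(query), corpus_embeddings)[0]

best = int(scores.argmax())
print(snippets[best][1], float(scores[best]))  # -> webwork automation:234 ...
```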
Huge time saver. I've had GPT doing a lot of work for me, and it makes stuff like managing my Arch install smooth and easy. I don't use OpenAI stuff much though; Gemini has gotten way better, and Claude 3.5 Sonnet is beastly at code stuff. I guess if you're writing extremely complex production stuff it's not going to be able to do that, but try asking most people even what an unsigned integer is. Most people will be like "what?"
Why is that relevant? Are you saying that AI makes coding more accessible? I mean that’s great, but it’s like a calculator. Sure it helps people who need simple calculations in the short term, but it might actually discourage software literacy.
I wish AI could just be a niche tool, instead it’s like a simple calculator being sold as a smartphone.
Writing bad code that will hold together long enough for you to make your next career hop.
I've built a couple of useful products which leverage LLMs at one stage or another, but I don't shout about it 'cos I don't see LLMs as something particularly exciting or relevant to consumers. To me they're just another tool in my toolbox, whose efficacy I weigh when trying to solve a particular problem. I do think they're a new tool that is genuinely valuable when dealing with natural language problems.

For example, my most recent product includes the capability to automatically create karaoke music videos. The problem that long prevented me from bringing it to market was transcription quality: the ability to consistently get correct and complete lyrics for any song. Now, by using state-of-the-art transcription (which returns ~90% accurate results) plus an open-weight LLM with a fine-tuned prompt to correct the mistakes in that transcription, I've finally been able to create a product that produces high-quality results pretty consistently. Before LLMs that would've been much harder!
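A rough sketch of that two-stage pipeline; faster-whisper and an OpenAI-compatible local endpoint are my assumptions, since the product's actual transcription service, model, and fine-tuned prompt aren't public:

```python
# Stage 1: transcribe. Stage 2: have an open-weight LLM correct the lyrics.
from faster_whisper import WhisperModel
from openai import OpenAI

def transcribe(path: str) -> str:
    model = WhisperModel("large-v3")
    segments, _info = model.transcribe(path)
    return "\n".join(seg.text.strip() for seg in segments)

def correct_lyrics(raw: str, artist: str, title: str) -> str:
    # Points at a local open-weight model served behind an OpenAI-compatible API.
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")
    resp = client.chat.completions.create(
        model="local-model",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You fix transcription errors in song lyrics. "
                        "Return only the corrected lyrics, line by line."},
            {"role": "user",
             "content": f"Song: {artist} - {title}\n\nTranscript:\n{raw}"},
        ],
    )
    return resp.choices[0].message.content

lyrics = correct_lyrics(transcribe("song.mp3"), "Artist", "Title")
print(lyrics)
```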
I think an LLM could be decent at the task of being a fairly dumb personal assistant. An LLM interface to a robot that could go get the mail or get you a cup of coffee would be nice in an "unnecessary luxury" sort of way. Of course, that would eliminate the "unpaid intern to add experience to a resume" jobs. I'm not sure if that's good or bad. I'm also not sure why anyone would want it, since unpaid interns are cheaper and probably more satisfying to abuse.
I can imagine an LLM being useful to simulate social interaction for people who would otherwise be completely alone. For example: elderly, childless people who have already had all their friends die or assholes that no human can stand being around.
Is that really an LLM? 'Cause using ML as part of a future AGI is not new; it was actually very promising and the cutting edge before ChatGPT.
So like using ML for vision recognition to know that a video of a dog contains a dog, or just speech-to-text. I don't think that's what people mean these days when they say LLM. LLMs are more for storing data and giving you data back in the form of accurate guesses when prompted.
ML has a huge future, regardless of LLMs.
LLMs are ML… or did I miss something here?
Yes, but not all machine learning (ML) is LLMs. Machine learning refers to the general use of neural networks, while large language models (LLMs) refer more to the ability of an application, or a bot, to understand natural language, deduce context from it, and act accordingly.
ML in general has many more uses than just powering LLMs.
Just look at AlphaProof. Lol, we're all about to be outclassed. I'm sure everyone will still deride the bots. They could be actual ASI and, especially here in the US, we'd say "I don't see any intelligence." I wish our society, and all of us as individuals, would reflect on our limitations and our tiny, tiny insignificance on the grand scale. Our egos may kill us.
P.S. I give us a 10% chance to make it to 2100 with any numbers or quality of life we'd consider remotely acceptable today. Pretty grim, but I think that's the weight of the challenges we're facing. Without AI I'd probably just say it was fucking hopeless, because we've had all the time we needed and all the tech we needed and hardly ever fix anything. Always running a day late and a dollar short. This species has dreams too big for our collective britches. It's always been a foolish endeavor, full of suffering and horrors. We're here though, so I hope we at least give it a good go. It would be super lame to go out in a sputter and take most life on Earth with us.
So now the question is whether we can use all these accessible models to actually do something about our problems. Even LLMs seem quite good at pointing out how bad we are at using the tools we already have and know exactly how to use, because we're always too busy arguing while the ship sinks!
COVID tried that, and a lot of people paid the price for being low-information and not so bright. Sadly, plenty of people who did the right things still got fucked by the stupidity of others!
I feel like people who aren't heavily interacting with these models or developing them don't realize how much better they are than human assistants. Shit, for one, it doesn't cost me $20 an hour, take a shit, get sick, or talk back and not do its fucking job. I do fucking think we need to say a lot of shit, though, so we'll know it ain't an LLM, because I don't know of an LLM that I can make output like this. I just wish most people were a little less stuck in their Western opulence. It would really help us not get blindsided.
Wrote my last application with ChatGPT. Changed small stuff and got the job.
Please write a full page cover letter that no human will read.
Mostly true before, now 99.99%. The charades are so silly because obviously as a worker all I care about is how much I get paid. That’s it.
All the company will care about is that work gets done to their standards or above, at the absolute lowest price possible.
So my interests are diametrically opposed to theirs: my interest is to work as little as possible for as much money as possible, and their goal is to get as much work out of me as possible for as little money as possible. We could just be honest about it and stop the stupid games. I don't give a shit about my employer any more than they give a shit about me. If I care about the work, that just means I'm that much more pissed that they're relying on my goodwill toward the people who use their products and/or services.
That’s because businesses are using AI to weed out resumes.
Basically you beat the system by using the system. That’s my plan too next time I look for work.
I actually think the idea of interpreting intent and connecting to actual actions is where this whole LLM thing will turn a small corner, at least. Apple has something like the right idea: “What was the restaurant Paul recommended last week?” “Make an album of all the photos I shot in Belize.” Etc.
But 98% of GenAI hype is bullshit so far.
How would it do that? Would LLMs not just take input as voice or text and then guess an output as text?
Wouldn't the text output that is supposed to be commands for actions need to be correct, and not a guess?
It’s the whole guessing part that makes LLMs not useful, so imo they should only be used to improve stuff we already need to guess.
One of the ways to mitigate the core issue of an LLM, which is confabulation/inaccuracy, is to have a layer of either confirmation or simple forgiveness intrinsic to the task. Use the favor test: if you asked a friend to do you a favor and perform these actions, they'd give you results that you could either look over yourself to confirm they're correct enough, or be willing to simply live with minor errors in. If that works for you, go for it. But if you're doing something that absolutely 100% must be correct, you're entirely dependent on independently reviewing the results.
But one thing Apple is doing is training LLMs with action semantics, so you don’t have to think of its output as strictly textual. When you’re dealing with computers, the term “language” is much looser than you or I tend to understand it. You can have a “grammar” that is inclusive of the entirety of the English language but also includes commands and parameters, for example. So it will kinda speak English, but augmented with the ability to access data and perform actions within iOS as well.
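A toy sketch of that general pattern (my illustration, not Apple's actual implementation): constrain the model's output to a small action "grammar", validate it against a whitelist, and put a confirmation step between the guess and the execution. The action names and fields here are hypothetical.

```python
# Treat model output as an utterance in a small action "grammar":
# validate against a whitelist, confirm with the user, then execute.
import json

# Hypothetical action vocabulary the model is prompted/trained to emit.
ALLOWED_ACTIONS = {
    "create_album": {"name", "filter"},
    "find_message": {"sender", "topic"},
}

def parse_action(model_output: str) -> dict:
    """Reject anything that isn't a well-formed, whitelisted action."""
    action = json.loads(model_output)  # raises if the "guess" isn't JSON
    name = action.get("action")
    if name not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {name}")
    action.setdefault("args", {})
    if not set(action["args"]) <= ALLOWED_ACTIONS[name]:
        raise ValueError("unexpected arguments")
    return action

def dispatch(action: dict) -> None:
    print("executing", action)  # stand-in for wiring into real app actions

def run_with_confirmation(model_output: str) -> None:
    action = parse_action(model_output)
    # The confirmation/forgiveness layer: a human approves before it runs.
    print(f"About to run {action['action']} with {action['args']}")
    if input("Proceed? [y/N] ").strip().lower() == "y":
        dispatch(action)

run_with_confirmation('{"action": "create_album", '
                      '"args": {"name": "Belize", "filter": "location:Belize"}}')
```

The point is that the LLM's guess is only ever a proposal; malformed or out-of-grammar output fails closed instead of performing an action.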
LLMs have greatly increased my coding speed: instead of writing everything myself, I let AI write it and then only have to fix all the bugs.
I'm glad. Depends on the dev, though. I love writing code, but debugging is annoying, so I would prefer to take longer writing if it means fewer bugs.
Please note I'm also pro code generators (like Emmet).