Something’s been bugging me about how new devs are learning to code, and I need to talk about it. We’re at this weird inflection point in software development. Every junior dev I talk to has Copilot or Claude or GPT running 24/7. They’re shipping code faster than ever. But when I dig deeper into their understanding of what they’re shipping? That’s where things get concerning. Sure, the code works, but ask why it works that way instead of another way? Crickets. Ask about edge cases? Blank stares. The foundational knowledge that used to come from struggling through problems is just… missing. We’re trading deep understanding for quick fixes, and while it feels great in the moment, we’re going to pay for this later.
What are you guys working on where chatgpt can figure it out? Honestly, I haven’t been able to get a scrap of working code beyond a trivial example out of that thing or any other LLM.
I’m forced to use Copilot at work and as far as code completion goes, it gets it right 10-15% of the time… the rest of the time it just suggests random — credible-looking — noise or hallucinates variables and shit.
Forced to use copilot? Wtf?
I would quit, immediately.
Pay my bills. Thanks.
I’ve been dusting off the CV, for multiple other reasons.
how surprising! /s
but seriously, it’s almost never one (1) thing that goes wrong when some idiotic mandate gets handed down from management.
a manager that mandates use of copilot (or any tool unfit for any given job), that’s a manager that’s going to mandate a bunch of other nonsensical shit that gets in the way of work. every time.
It’s an at-scale company, orders came from way above. As did RTO after 2 years fully at home, etc, etc.
When I had to get up to speed on a new language, it was very helpful. It’s also great for writing low-to-medium-complexity scripts in Python, PowerShell, and Bash, and for making Ansible tasks. That said, I’ve been programming for ~30 years and could have done those things myself if I needed to, but it would take some time (a lot of it being looking up documentation and writing boilerplate code).
It’s also nice for writing C# unit tests.
However, the times I’ve been stuck on my main languages, it’s been utterly useless.
ChatGPT is extremely useful if you already know what you’re doing. It’s garbage if you’re relying on it to write code for you. There are nearly always bugs and edge cases and hallucinations and version mismatches.
It’s also probably useful for looking like you kinda know what you’re doing as a junior in a new project. I’ve seen some shit in code reviews that was clearly AI slop. Usually from exactly the developers you expect.
Yeah, I’m not even that down on using LLMs to search through and organize text that it was trained on. But in its current iteration? It’s fancy Stack Overflow, but Stack Overflow runs on like 6 servers. I’ll be setting up some LLM stuff self-hosted to play around with it, but I’m not ditching my brain’s ability to write software any time soon.
I love asking AI to generate a framework / structure for a project that I then barely use and then realize I shoulda just done it myself
I’ve been using (mostly) Claude to help me write an application in a language I’m not experienced with (Rust). Mostly for helping me see what I did wrong with syntax or with the borrow checker. Coming from Java, Python, and C/C++, it’s very easy to mismanage memory in exactly the ways Rust is designed to prevent.
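For example (a minimal made-up case, not code from my actual app), this is the kind of move-vs-borrow mistake I keep making coming from garbage-collected languages:

```rust
fn print_count(v: Vec<String>) {
    // Takes ownership: the caller's vector is moved in and dropped here.
    println!("{}", v.len());
}

fn print_count_borrowed(v: &[String]) {
    // Borrows instead: the caller keeps ownership.
    println!("{}", v.len());
}

fn main() {
    let names = vec![String::from("a"), String::from("b")];

    // Coming from Java this looks harmless: pass the list, keep using it.
    print_count(names); // `names` is moved into the function here
    // println!("{}", names.len()); // error[E0382]: borrow of moved value

    // What the borrow checker pushes you toward: lend, don't move.
    let names = vec![String::from("a"), String::from("b")];
    print_count_borrowed(&names);
    println!("{}", names.len()); // fine: `names` was only borrowed
}
```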
That being said, any new code it generates for me I end up having to fix 9 times out of 10. So in a weird way I’ve been learning more about Rust from having to correct code that’s been generated by an LLM.
I still think LLMs for the next while will be mostly useful as a hyper-spell-checker for code, and not for generating new code. I often find that I would have saved time if I had just tackled the problem myself and not tried to rely on an LLM. Although sometimes an LLM can give me an idea of how to solve a problem.
Same. It can generate credible-looking code, but I don’t find it very useful. Here’s what I’ve tried:

- looking up docs - mostly useful to find search terms for the real docs
- generating a project skeleton

The second was kind of useful since it provided the structure, but I still replaced 90% of it.
I’m still messing with it, but beyond solving “blank page syndrome,” it’s not that great. And for that, I mostly just copy something from elsewhere in the project anyway, which is often faster than going to the LLM.
I’m really bad at explaining what I want, because by the time I can do that, it’s faster to just build it. That said, I’m a senior dev, so I’ve been around the block a bit.
I used it a few days ago to translate a math formula into code.
Here is the formula: https://wikimedia.org/api/rest_v1/media/math/render/svg/126b6117904ad47459ad0caa791f296e69621782
It’s not the most complicated thing. I could have done it, but it would have taken me some time. I just input the formula and the desired language, and the result was well done and worked flawlessly.
It saved me some time typing, and some searching around online.
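The formula itself only lives in that image link, so as a stand-in here’s the same kind of round trip with a different formula (the haversine great-circle distance - my illustration, not the formula from the link):

```rust
// Stand-in formula (haversine distance between two lat/lon points);
// the formula I actually translated is only in the linked image, so
// this just illustrates the "formula in, code out" workflow.
fn haversine_km(lat1: f64, lon1: f64, lat2: f64, lon2: f64) -> f64 {
    const EARTH_RADIUS_KM: f64 = 6371.0;
    let (phi1, phi2) = (lat1.to_radians(), lat2.to_radians());
    let d_phi = (lat2 - lat1).to_radians();
    let d_lambda = (lon2 - lon1).to_radians();

    // a = sin²(Δφ/2) + cos(φ1)·cos(φ2)·sin²(Δλ/2)
    let a = (d_phi / 2.0).sin().powi(2)
        + phi1.cos() * phi2.cos() * (d_lambda / 2.0).sin().powi(2);

    // d = 2R · atan2(√a, √(1−a))
    2.0 * EARTH_RADIUS_KM * a.sqrt().atan2((1.0 - a).sqrt())
}

fn main() {
    // Paris to Berlin, roughly 880 km
    println!("{:.0} km", haversine_km(48.8566, 2.3522, 52.5200, 13.4050));
}
```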
ChatGPT is perfect for learning Delphi.
Lately I have been using it for React code. It seems to be fairly decent at that. As a consequence, when it does not work I get completely lost, but despite this I think I have learned more with it than I would have without.