- cross-posted to:
- programming@lemmy.ca
Some interesting thoughts on how to leverage ChatGPT
Using ChatGPT for anything more than repetitive or random generation tasks is a bad idea, and its usefulness becomes even more limited when you’re working with proprietary code that you can’t directly send to ChatGPT. Even with personal projects, I still try to avoid ChatGPT as much as possible, for the simple reason that once it leaves this free beta-testing phase, I’ll be forced to pay for it if it has become an essential part of my workflow.
Exactly this! I hate hearing politicians and rulemakers discuss how ChatGPT and LLMs are going to be relevant everywhere and how ChatGPT should already be incorporated into education. They literally call it a “research preview”; you can only assume that once they’ve gathered enough data, they’re going to shut it down, or at least drastically reduce its capacity.
With that said, I really enjoy using it, mainly for brainstorming topics or new projects and which technologies to use in them. Sometimes I also find a use for it as a therapist of sorts, for social topics where I don’t really know who else to ask and where a generic reply is all I expect anyway.
On the politicians / rulemakers side of things, that may or may not be a good thing, tbh. Technology moves fast, and traditionally those groups are glacial and can’t keep up, sometimes to the benefit of a small group, often to the detriment of the majority. Having this on their radar relatively early is potentially a useful change.
While it’s nice that politicians are enthusiastic about new technologies, I think ChatGPT is one example where they shouldn’t force mass adoption. ChatGPT is a proprietary model owned by a private corporation, and it’s made very clear that interaction data with ChatGPT will be collected and used by OpenAI for its business. It’s horrible for data security and it helps to strengthen OpenAI’s monopoly. Honestly, governments recommending privately owned software and technologies should be considered advertising.
governments recommending privately owned software and technologies should be considered advertising.
Is this not also true if the software is open-source? It’s still advertising, but it’s somehow ok because a corporation doesn’t benefit? It’s not that I don’t agree with you - regulatory capture and vendor lock-in are much less of a concern for free and/or open-source software, but that doesn’t mean it’s not still advertising.
That side of it I wholeheartedly agree with. Perhaps I’m just deluding myself into thinking technology awareness early on makes for better legal infrastructure to handle its effect on society. I really would like that to be the case.
But yeah agree, “ChatGPT” being synonymous with “groundbreaking AI” to the vast majority of the public (I suspect) is not great from a monopoly perspective.
I’ve been using ChatGPT at work quite a bit now. Some of the things I’ve used it for are:
- Writing a shell script that scrapes some information about code modules and shows them neatly
- Minor automation scripts that set up and simplify my day-to-day Docker workflow
- Writing one-off regex, SQL, and Lua pattern-matching functions
- It turned out to be surprisingly good at creating code examples for certain undocumented APIs (kong.cache, kong.worker_events, kong.cluster_events) in Kong API Gateway.
- Copy-pasting a rough Python automation script, converting it into Go, and adding it to the application itself.
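As a sketch of the “one-off regex” kind of helper in that list — the log format, pattern, and function name here are hypothetical, not taken from the actual scripts:

```python
import re

# Hypothetical example of the small pattern-matching helpers ChatGPT is
# handy for: pull "name@version" pairs out of free-form log text.
MODULE_RE = re.compile(r"(?P<name>[\w-]+)@(?P<version>\d+\.\d+\.\d+)")

def extract_modules(text):
    """Return (name, version) tuples for every module reference in text."""
    return [(m.group("name"), m.group("version"))
            for m in MODULE_RE.finditer(text)]

log = "loaded auth-core@1.2.3\nloaded cache@0.9.1"
print(extract_modules(log))  # [('auth-core', '1.2.3'), ('cache', '0.9.1')]
```

The point is less the pattern itself than the turnaround: this is exactly the size of task where a generated first draft is faster than writing and debugging the regex by hand.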
I still don’t feel comfortable using it for anything big.
I usually use it more to help me write documentation and add comments to functions. It helps explain what a function does.
For writing code, I usually just use it for simple functions or for template code to start from. I avoid using it with external libraries because, in my experience, it likes to “invent” functions and methods that don’t actually exist.
I personally don’t trust ChatGPT, so whenever I use it I go over the generated code until I understand what it does. And the truth is, that often takes me more time than if I had written the code myself.
It does wonders for repetitive tasks or data generation, though.
I believe that anywhere security is critical, developers should not use AI-generated code irresponsibly.
One thing I used ChatGPT for recently was generating test data.
Hey ChatGPT, I use SQL Server and here is my table structure, please generate an insert query with 10 rows of fake test data.
It wasn’t perfect, but honestly, neither is the test data I would have written by hand. It was a great starting point and saved me a lot of time, since this is a legacy app with some wide tables (30+ columns).
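For illustration, the same kind of throwaway test-data generation can be scripted, too. The table and column names below are made up, not from the actual legacy schema — the real tables have 30+ columns, which is exactly why generating this by hand is tedious:

```python
import random

# Hypothetical columns and value pools standing in for a much wider table.
COLUMNS = ["id", "first_name", "last_name", "city"]
FIRST = ["Ana", "Ben", "Chloe", "Dev"]
LAST = ["Ng", "Smith", "Kaur", "Lopez"]
CITIES = ["Austin", "Oslo", "Pune", "Lyon"]

def fake_insert(table, n=10, seed=42):
    """Build one multi-row SQL Server INSERT with n rows of fake data."""
    rng = random.Random(seed)  # seeded, so reruns produce the same rows
    rows = [
        f"({i}, '{rng.choice(FIRST)}', '{rng.choice(LAST)}', '{rng.choice(CITIES)}')"
        for i in range(1, n + 1)
    ]
    cols = ", ".join(COLUMNS)
    return f"INSERT INTO {table} ({cols}) VALUES\n  " + ",\n  ".join(rows) + ";"

print(fake_insert("dbo.Customers", n=3))
```

The prompt-based approach wins when the schema is wide and messy; a small script like this wins when you need the same data reproducibly in CI.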
Me too! I used it recently to generate some fairly specific test data that would have taken me probably 30 minutes of massaging instead of the 30 seconds of creating the right prompt. So helpful!
This is a paywalled URL, but at least private windows work on Medium.