Just out of curiosity. I have no moral stance on it, if a tool works for you I’m definitely not judging anyone for using it. Do whatever you can to get your work done!
My job actively encourages using AI to be more efficient and rewards curiosity/creative approaches. I’m in IT management.
Not ChatGPT - but I tried using Copilot for a month or two to speed up my work (backend engineer). I wound up unsubscribing and removing the plugin before too long, because I found it had the opposite effect.
Basically instead of speeding my coding up, it slowed it down, because instead of my thought process being
- Think about the requirements
- Work out how best to achieve those requirements within the code I’m working on
- Write the code
It would be
- Think about the requirements
- Work out how best to achieve those requirements within the code I’m working on
- Start writing the code and wait for the auto complete
- Read the auto complete and decide if it does exactly what I want
- Do one of the following, depending on step 4:
  - 5a. Use the autocomplete as-is
  - 5b. Use the autocomplete, then modify it to fix a few issues or account for a requirement it missed
  - 5c. Ignore the autocomplete and write the code yourself
idk about you, but the first set of steps just seems like a whole lot less hassle than the second set, especially since for anything that involved business logic or internal libraries, I found myself using 5c far more often than the other two. And as a bonus, I actually fully understand all the code committed under my username, on account of actually having written it.
I will say though in the interest of fairness, there were a few instances where I was blown away with copilot’s ability to figure out what I was trying to do and give a solution for it. Most of these times were when I was writing semi-complex DB queries (via Django’s ORM), so if you’re just writing a dead simple CRUD API without much complex business logic, you may find value in it, but for the most part, I found that it just increased cognitive overhead and time spent on my tickets
EDIT: I did use chatGPT for my peer reviews this year though and thought it worked really well for that sort of thing. I just put in what I liked about my coworkers and where I thought they could improve in simple english and it spat out very professional peer reviews in the format expected by the review form
As a side note, whilst I don’t really use AI to help with coding, I was kinda expecting what you describe, more so for stuff like ChatGPT writing whole modules.
You see, I’ve worked as a freelancer (contractor) for most of my career now, and in practice that mostly means coming in and fixing/upgrading somebody else’s codebase, though I’ve also done some so-called “greenfield projects” (entirely new work). In my experience, “understanding somebody else’s code” is a lot more cognitively heavy than “coming up with your own stuff” - in fact, some of my projects would probably have gone faster if we’d just rewritten the whole thing (but that wasn’t my call to make, and often the business side doesn’t want to risk it).
I’m curious if multiple different pieces of code done with AI actually have the same coding style (at multiple levels, so also software design approach) or not.
Those different sets of steps basically boil down to a student finding all the ways they can to cheat and spending hours doing it, when they could have just used less time to study for the test.
Not saying that you’re cheating, just that it’s the same idea. Usually the quickest solution is to just tackle the thing head-on rather than find the lazy workaround.
What I think ChatGPT is great for in programming is ‘I know what I want to do but can’t quite remember the syntax for how to do it’. In those scenarios it’s so much faster than wading through the endless blogspam and SEO guff that search engines deal in now, and it’s got much less of a superiority complex than some of the denizens of SO too.
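As a made-up illustration of that scenario (this example isn’t from the thread, just the kind of stdlib one-liner you know exists but can never quite recall):

```python
# The classic "I know Python can do this, I just forget the syntax" case:
# sort a dict by its values, descending.
scores = {"alice": 7, "bob": 3, "carol": 9}

# sorted() over the (key, value) pairs, keyed on the value, then
# rebuilt into a dict (dicts preserve insertion order in Python 3.7+).
ranked = dict(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))
print(ranked)  # {'carol': 9, 'alice': 7, 'bob': 3}
```

Exactly the sort of thing where a one-sentence prompt beats ten minutes of scrolling past blogspam.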
A lot of people are going to get fucked if they are…
It’s using the “startup method” where they gave away a good service for free, but they already cut back on resources when it got popular. So what you read about it being able to do six months ago, it can’t do today.
Eventually they’ll introduce a paid version that might be able to do what the free one did.
But if you’re just blindly trusting it, you might have months of low-quality work and not have noticed.
Like the lawyers recently finding out it would just make up caselaw and reference cases. We’re going to see that happen more and more as resources are cut back.
Huh? They already introduced the paid version half a year ago, and that’s the one that was responsible for the buzz all along. The free version was mediocre to begin with and hasn’t gotten better.
When people complain that ChatGPT doesn’t live up to their expectations, it’s usually a confusion between these two.
Anyone blindly trusting it is a grade A moron, and would’ve just found another way to fuck up whatever they were working on if ChatGPT didn’t exist.
ChatGPT is a tool, if someone doesn’t know what they’re doing with it then they are gonna break stuff, not ChatGPT.
This is exactly like people who defend Tesla by saying it’s your fault if you believed their claims about what a Tesla can do…
Which isn’t a surprise - there’s a huge overlap between the people gullible enough to believe either company’s claims, and some people will bend over backwards to defend those companies because of the sunk cost fallacy.
I don’t know what OpenAI even claims that ChatGPT can do, but if you trust marketing from any company then you’re gonna get burnt.
I’m not defending the company in any way, more just defending that in general LLMs can be useful tools, but people need to make educated decisions and take a bit of responsibility.
Like the lawyers recently finding out it would just make up caselaw and reference cases. We’re going to see that happen more and more as resources are cut back.
It’s been notorious for doing that from the very beginning though
That may have been their plan, but Meta fucked them from behind and released LLaMA, which now runs on local machines at up to 30B parameters, and by the end of the year will run at better-than-GPT-3.5 ability on an iPhone.
Local LLMs, like Airoboros, WizardLM, StableVicuna, or Stable Coder, are real alternatives in many domains.
I tried it once or twice and it worked well. It’s too stupid now to be worth the attempt. The amount of time spent fixing its mistakes has resulted in net zero time savings.
Coworker of mine admitted to using this for writing treatment plans. Super unethical and unrepentant about it. Why? Treatment plans are individual, and contain PII. I used it for research a few times and it returned sources that are considered bunk at best and hated within the community for their history. So I just went back to my journal aggregation.
Super unethical and unrepentant about it.
Super illegal in most jurisdictions too.
I use it to write performance reviews because in reality HR has already decided the results before the evaluations.
I’m not wasting my valuable time writing text that is then ignored. If you want a promotion, get a new job.
To be clear: I don’t support this but it’s the reality I live in.
This is exactly what I use it for. I have to write a lot of justifications for stuff like taking training, buying equipment, going on business travel, etc. - text that will never be seriously read by anyone and is just a check-the-box exercise. The quality and content of the writing is unimportant as long as it contains a few buzz-phrases.
Just chiming in as another person who does this, it’s absolutely perfect. I just copy and paste the company bs competencies, add in a few bs thoughts of my own, and tell it to churn out a full review reinforcing how they comply with the listed competencies.
It’s perfect, just the kinda bs HR is looking for, I get compliments all the time for them rofl.
Work smarter, not harder, lol.
Can you please elaborate on your experience of HR people deciding the results before the evaluation? Just curious
Sure!
It happens behind closed doors and never in writing, to keep up the farce, but usually I’m given a paltry number of slots for people I can label as high performers. This is a damn shame, because most of my team members are great employees. It’s used as a carrot to show that we do give raises and promotions after all, but the proportion is so small it’s effectively zero. I’m very clear with my team that trying to become a top performer to get a promotion is a bad investment. I do my best to communicate the futility without literally saying it in a way that could get me into trouble.
Next, they use a spreadsheet to figure out who they can probably underpay, based on a heuristic likelihood that the person would actually leave vs. current market rates. These people automatically become the low performers - ahem, “satisfactory.” You’re penalized for being here longer or for specializing in something with a small market. Everyone else falls somewhere between satisfactory and above average, which makes little difference.
The performance reviews are merely weak documentation to show that somehow HR was “justified” by selectively highlighting strengths or weaknesses depending on the a priori decision of what your performance level was to be.
It’s a huge tautology with only one meaningful conclusion: you will be underpaid, and it gets worse over time.
Thanks. This is great insight and tracks with some personal experience or experience of friends.
You could make a whole post about this topic, but I was curious what’s your advice to an employee that wants to do good work, but who doesn’t want to be taken advantage of?
The truth is you have to do good work for yourself because you care about the quality of your work. You work for you.
You separate the factors. You do good work for you because you care because a life doing things you don’t care about is less meaningful.
Separately you look at pay. You leave when it’s no longer worth staying, which for most people is about every two to three years at least for your early career.
I use it and encourage my staff and other departments to use it.
I feel that we’re at a horse-vs-tractor or human-computer-vs-digital-computer moment. In the next 10+ years, those who are AI-ignorant will be underemployed or unemployed. Get on it now and learn to use it as a force multiplier, just like tractors and digital computers were.
The arguments against AI eerily mirror the arguments against tractors and digital computers.
I’ve run emails through it to check tone since I’m hilariously bad at reading tone through text, but I’m pretty limited in how I can make use of that. There’s info I deal with that is sensitive and/or proprietary that I can’t paste into just any text box without potential severe repercussions.
I don’t have any bosses, but as a consultant, I use it a lot. Still gotta charge for the years of experience it takes to understand the output and tweak things, not the hours it takes to do the work.
Basically this. Knowing the right questions and context to get an output and then translating that into actionable code in a production environment is what I’m being paid to do. Whether copilot or GPT helps reach a conclusion or not doesn’t matter. I’m paid for results.
I use it as a search engine for the LLVM docs.
Works so much better than doxygen.
But it’s no secret.
Only used it a couple of times for work when researching some broad topics like data governance concepts.
It’s a good tool for learning because you can ask it about a subject and then ask it to explain the subject “as a metaphor to improve comprehension,” and it does a pretty good job. Just make sure you use some outside resources to ensure you’re not being fed hallucinations.
My bosses use it to write their emails (ESL).
ESL is actually a great use, although there’s a risk someone might not catch a hallucination/weird tone issue. Still it would be really helpful there.
Best used in tandem with something like languagetool.org for the final revision.
Yeah, my biggest use case is quick summaries of things. It’s great getting a few bullet points, and I miss details a lot less.
A friend of mine just used it to write a script for an Amazing Race application video. It was quite good.
How the heck did it access enough source material to be able to imitate something that specific and do it well? Are we humans that predictable?
Yes.
I knew you’d say that.
I don’t see any reason not to use it to help (keyword) with your work. I think it would be wise not to use its responses verbatim, and to fact-check anything it gives you. Additionally, turn off chat history and don’t enter any details about yourself, or your employer, into the prompts. Keep things generic whenever you can.
I used it to write some basic ass explanation of devops for a document and reworded a few things to a way I liked better.
Like, I’m not going to be saying anything different than what countless others have said, so fuck it.
As a language model, I have neither boss nor co-workers.