Copilot is often a brilliant autocomplete; that alone will save workers plenty of time if they learn to use it.
I know that as a programmer, I spend a large percentage of my time simply transcribing the correct syntax for whatever’s in my brain into the editor, and Copilot speeds that process up dramatically.
The problem is when the autocomplete just starts hallucinating things and you don’t catch it.
If you blindly accept autocompletion suggestions, then you deserve what you get. AIs aren’t gods.
Probably will happen soon.
OMG thanks for being one of like three people on earth to understand this
That’s on you, then. Copilot very explicitly notes, right in the chat, that the AI can be wrong. If you just blindly accept anything you haven’t confirmed yourself, that’s not the tool’s fault.
I use AI a lot as a SWE too. The other day I used it to remove an old feature flag from our server graphs, along with all of the now-deprecated code, in one click. The unit tests still passed afterward; it saved me like 1-2 hours of manual work.
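To illustrate the kind of one-shot cleanup described above, here is a hypothetical before-and-after sketch in Python; the flag name and functions are invented for the example, not taken from the commenter’s codebase:

```python
# Hypothetical illustration of stale-feature-flag removal; every name
# here is invented for the example.

FLAGS = {"use_new_graph_builder": True}  # fully rolled out, never False

# Before the cleanup: both code paths live behind the flag check.
def build_graph_before(data):
    if FLAGS["use_new_graph_builder"]:
        return {"nodes": sorted(data)}  # current implementation
    return {"nodes": list(data)}        # deprecated implementation

# After the cleanup: the flag check and the dead branch are deleted,
# leaving only the permanent code path.
def build_graph_after(data):
    return {"nodes": sorted(data)}

# Behavior on the live path is unchanged, which is why the unit tests
# still pass after the refactor.
assert build_graph_before([3, 1, 2]) == build_graph_after([3, 1, 2])
```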
It’s good for boilerplate and refactors more than anything.
I feel like the process of getting the code right is how I learn. If I just type vague garbage in and the AI tool fixes it up, I’m not really going to learn much.
Autocomplete doesn’t write algorithms for you; it writes syntax. (Unless the algorithm is trivial.) You could use your brain to learn just the important stuff and let the AI handle the minutiae.
Where “learn” means “memorize arbitrary syntax that differs across languages”? Anyone trying to use copilot as a substitute for learning concepts is going to have a bad time.
AI can help you learn by chiming in about things you didn’t know you didn’t know. I wanted to compare images against ones in a dataset that may have been resized, and the solution the AI gave me involved blurring the images slightly before comparing them. I pointed out that this seemed wrong: wouldn’t slight differences in the files produce different hashes? But the response was that the algorithm being used was perceptual hashing, which only needs images to be approximately the same to produce the same hash, and the blurring was there to make this work better. Since I know AI often makes shit up, I of course did more research and tested that the code worked as described, and it did; it was all true.
If I hadn’t been using AI, I would have wasted a bunch of time trying to get the images pixel-perfect identical to work with a naive hashing algorithm, because I wasn’t aware of a better way to do it. Because I used AI, I learned more quickly about what solutions are available. I find that this happens pretty often; there’s actually a lot it knows that I wasn’t aware of or had a false impression of. I can see how someone might use AI as a programming crutch and fail to pay attention to or learn what the code does, but it can also be used in a way that helps you learn.
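To make the perceptual-hashing approach described above concrete, here is a minimal sketch using the Pillow and imagehash Python libraries; the blur radius and distance threshold are illustrative assumptions, not values from the comment:

```python
# Minimal sketch of blur-then-perceptual-hash image comparison.
# Requires: pip install pillow imagehash
from PIL import Image, ImageFilter
import imagehash

def prepared_hash(path: str) -> imagehash.ImageHash:
    img = Image.open(path)
    # A slight blur (radius is an illustrative guess) smooths out
    # resizing/compression noise so that near-identical images hash
    # to (almost) the same value.
    img = img.filter(ImageFilter.GaussianBlur(radius=1))
    return imagehash.phash(img)  # 64-bit perceptual hash

def perceptually_same(a: str, b: str, threshold: int = 5) -> bool:
    # Subtracting two ImageHash objects gives their Hamming distance;
    # a small distance means "approximately the same image", even if
    # one copy was resized or re-encoded. The threshold is a guess.
    return prepared_hash(a) - prepared_hash(b) <= threshold
```

Unlike a cryptographic hash, where changing a single pixel produces a completely different digest, a perceptual hash flips only a few bits for small visual differences, which is why the naive exact-hash approach would have required pixel-perfect copies.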
AI bad tho!!!