- cross-posted to:
- tech@kbin.social
- technews@radiation.party
Over the past one and a half years, Stack Overflow has lost around 50% of its traffic. This decline is similarly reflected in site usage, with approximately a 50% decrease in the number of questions and answers, as well as the number of votes these posts receive.
The charts below show site usage as a 49-day moving average.
What happened?
There is a lot of Stack Overflow hate in this thread. I never had a bad experience. I was always on there yelling at noobs, telling them to Google it, and linking to irrelevant questions. It was just wholesome fun that briefly dulled my crippling insecurities.
So you never had a bad experience, just were actively causing bad experiences for others?
I think you just fell for quite an obvious case of sarcasm.
A “whoosh”, if you will.
Sorry for being autistic ig
Removed by mod
We should leave the /s back on reddit
Sadly, it really is necessary if one wants to be sure nobody actually takes the sarcasm seriously. It’s hard for people to tell in a textual medium.
Heck, my style of humor in RL is often sarcasm or deliberately ludicrous comments and people still sometimes go “wait, really?” Even though they know me well.
I’m going to go without it from now on. I can handle clarifying myself if it’s absolutely necessary for someone.
Removed by mod
Yeah but those people who take the sarcasm seriously are fools and you can’t make things foolproof.
Or, you know, have a legitimately hard time distinguishing it, for actual reasons.
chinesescholarshadasimilarstanceagainstallkindofpunctuationclaimingtheabilitytodeliveranddetectmeaningwithouttrainingwheelswasalayerofcommunicationpeopleneededandcouldnotaffordtoabandoninordertomaintainaproductiveconversationalenvironmentwithanyoneunabletoreflectuponanddiscerntheintendedmeaningbeingafoolnotworthyoftheloftymessageswrittencommunicationwasintendedfortodiscern
https://en.m.wikipedia.org/wiki/Chinese_punctuation
(This is a lesson in history, so I’ll let the discerning reader decide for themselves whether there is sarcasm contained in it.)
Sarcasm
No “/s”, no sarcasm.
I’m pretty sure they were being sarcastic.
Rather than cultivate a friendly and open community, they decided to be hostile and closed. I am not surprised by this at all, but I am surprised by how long the decline has taken. I have had a number of bad/silly experiences on Stack Overflow that have never been replicated on any other platform.
Removed by mod
Honestly, I had a question I answered myself that was up for over 10 years with hundreds of views and votes, only for it to be marked as a duplicate of a question that frankly has nothing to do with the question I asked. Specifically, I was working with canvas and SVG, and the linked question involved neither. The other question is also 5 years newer, so even if they were the same, it would be a duplicate of mine, not the other way around.
Another one: a very highly rated answer I gave was edited by a big contributor several years after I wrote it to add a participle, and then marked as belonging to them.
Can you give more context on the second one? Everyone can edit posts and it shows both the original poster as well as the most recent editor on the post. (I’m not defending SE. I dislike them too.)
Both times I filed a dispute, only for it to be completely ignored. Eventually I used a scrubber bot to delete every contribution I ever made rather than let random power mods steal content from my high-profile posts.
deleted by creator
deleted by creator
Why is everyone saying this is because Stack Overflow is toxic? Clearly the decline in traffic is because of ChatGPT. I can say from personal experience that I’ve been visiting Stack Overflow way less lately because ChatGPT is a better tool for answering my software development questions.
The timing doesn’t really add up, though. ChatGPT was released in November 2022. According to the graphs on the linked site, traffic, the number of posts, and the number of votes were all already in visible decline and at their lowest values in more than two years. And that’s before considering that ChatGPT took a while to work its way into the average developer’s daily workflow.
Anyhow though, I agree that the rise of ChatGPT most likely amplified StackOverflow’s decline.
Half the time when I ask it for advice, ChatGPT recommends nonexistent APIs and offers examples in some Frankenstein code that uses a bit of this system and a bit of that, none of which will work. But I still find its hit rate to be no worse than Stack Overflow, and it doesn’t try to humiliate you for daring to ask.
It depends on what sort of thing you’re asking about. More obscure languages and systems will result in hallucinated APIs more often. If it’s something like “how do I sort this list of whatever in some specific way in C#” or “can you write me a regex for such and such a task” then it’s far more often right. And even when ChatGPT gets something wrong, if you tell it the error you encountered from the code it’ll usually be good at correcting itself.
I find that if it gets it wrong in the first place, its corrections are often equally wrong. I guess this indicates that I’ve strayed into an area where its training data is not of good quality.
Yeah, if it’s in a state where it’s making up imaginary APIs whole cloth then in my experience you’re asking it for help with something it just doesn’t know enough about. I get the best results when I’m asking about popular stuff (such as “write me a python script to convert wav files to mp3” - it’ll know the right APIs for that sort of task, generally speaking). If I’m working on something that’s more obscure then sometimes it’s better to ask ChatGPT for generalized versions of the actual question. For example, I was tinkering with a mod for Minetest a while back that was meant to import .obj models and convert them into a voxelized representation of the object in-game. ChatGPT doesn’t know Minetest’s API very well, so I was mostly asking it for Lua code to convert the .obj into a simple array of voxel coordinates and then doing the API stuff to make it Minetest-specific for myself. The vector math was the part that ChatGPT knew best so it did an okay job on its part of the task.
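The generalized task described above (turning a .obj model’s vertices into a simple array of voxel coordinates) can be sketched in a few lines. This is a hypothetical, engine-agnostic Python version, not the actual Minetest/Lua mod code; the sample file contents and voxel size are invented for illustration:

```python
# Minimal sketch: read vertex lines from a Wavefront .obj and quantize them
# onto an integer voxel grid. Real voxelization would also fill faces, not
# just snap vertices; this only illustrates the coordinate-conversion step.

def voxelize_obj_vertices(lines, voxel_size=1.0):
    """Return the set of integer voxel coordinates touched by .obj vertices."""
    voxels = set()
    for line in lines:
        parts = line.split()
        if len(parts) >= 4 and parts[0] == "v":  # vertex line: "v x y z"
            x, y, z = (float(p) for p in parts[1:4])
            voxels.add((int(x // voxel_size),
                        int(y // voxel_size),
                        int(z // voxel_size)))
    return voxels

obj = ["v 0.2 0.4 0.9", "v 1.7 0.1 0.3", "f 1 2"]
print(sorted(voxelize_obj_vertices(obj)))  # → [(0, 0, 0), (1, 0, 0)]
```

The engine-specific part (placing nodes via the game’s API) would then iterate over the returned coordinates.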
Your follow up question should be for ChatGPT to write those APIs for you.
I was going to say ChatGPT.
I think the smugness of Stack Overflow is still part of it. Even if ChatGPT sometimes fabricates imaginary code, its tone is flowery and helpful, compared to the typical pretentiousness of Stack Overflow users.
Also, you can have it talk like a catgirl maid, so I find that’s particularly helpful as well.
deleted by creator
In my experience, ChatGPT is very good at interpreting documentation. So even if it hasn’t been asked on stack overflow, if it’s in the documentation that ChatGPT has indexed (or can crawl with an extension) you’ll get a pretty solid answer. I’ve been asking it a lot of AWS questions because it’s 100x better than deciphering the ancient texts that amazon publishes. Although sometimes the AWS docs are just wrong anyway.
Over the last five years, I’d click a link to Stack Overflow while googling, but I’ve never made an account because of the toxicity.
But yeah, chatGPT is definitely the nail in the coffin. Being able to give it my code and ask it to point out where the annoying bug is… is amazing.
I think the issue is how people got to Stack Overflow. People generally ask Google first, which hopefully would take you somewhere where somebody has already asked your question and it has answers.
Type a technical question into Google. Back in the day it would likely take you to Experts Exchange. Couple of years later it would take you to Stack Overflow. Now it takes you to some AI generated bullshit that scraped something that might have contained an answer, but was probably just more AI generated bullshit.
Either their SEO game is weak, they stopped paying Google as much for result placement, or they’ve just been overwhelmed with limitless nonsense made by bots for the sole purpose of selling advertising space that other bots will look at.
Or maybe I’m wrong and everybody is just asking ChatGPT their technical questions now, in which case god fucking help us all…
It gives decent answers and still ranks near the top of search results. However, if you need to ask something that isn’t already there, you’re either going to be intimidated or your question will be left unanswered for months.
I’m more inclined to ask questions on sites like Reddit, because it’s something I’m familiar with and there’s a far better chance of getting an answer within a couple of hours.
ChatGPT is also far superior because there’s a feedback loop almost in real time. It doesn’t matter if it gives the wrong answer; it gives you something to work with and try, and you can keep asking for more ideas. That’s much preferable to waiting months or even years for an answer.
Yeah, I’m not sure what the deal with the hate is. ChatGPT gives you an excellent starting point, and if you give it good feedback and direction you can actually churn out some pretty decent code with it.
It couldn’t happen to a more deserving group of smug, self-satisfied shitheads.
I miss when SO used to be a good place to ask questions.
I said I was a novice on the Code Review site, and the one answer I got told me to look into something like “mount genius and the valley of stupid.” Like, dude, I fucking said I was a novice; I’m not claiming to be a genius. All over me using a term wrong. And when I asked what term they’d use, they still smarted off. It wasn’t until I asked again that they told me the term I was actually looking for.
I remember going to the vmware communities looking for help almost 20 years ago and some smug person was really upset that I didn’t use the right wording when I was starting out. He spent something like 2 whole days worth of posting. It was a chore to divine what he was saying while stumbling through his weird rant/lecture about proper terminology. I eventually called him out on it and never went back.
So long story short, communities and companies that don’t nip this kind of behavior in the bud and heavily moderate the assholes almost universally turn into the next expertsexchange community. Stack Overflow leaned heavily into enshittification because of this: they eventually stopped caring about what was being put on their forums, about maintaining high content quality, and about getting rid of argumentative power-users. Ironically, Reddit was a much nicer community, and you’d usually find an answer or get help without the attitude, especially in the IT space.
SO claims a lot of this is because it is meant to be a tool where people go for correct answers, and I get that, but getting downvoted or having your question closed as a duplicate feels mean regardless of how welcoming the admins claim they’re trying to make the place.
A big part of the problem is that users seek out reasons to close questions rather than ways to fix them and avoid them being closed. And they’re rewarded for it! I think review queues are probably a net positive overall, but when you’re going through them and find a question that could be closed as-is but could also possibly be fixed, which are you going to do? Vote to close, which takes about one second of effort, or try to edit, which could take much longer and may even require input from the OP? And even if you do try to fix it, what if everyone else votes to close anyway?
I’ve had a question closed and my comments explaining why it wasn’t a duplicate deleted. The response from everyone was that because I have been using the site off and on for years they expected me to understand the process so they didn’t explain to me that I needed to edit and instead just deleted my comment and didn’t tell me anything.
The amount of anxiety I have when asking a question there is insane. And I have 6k+ rep. They weren’t wrong, I do know the site well. I have used it a lot. But if me, an experienced user, is afraid to ask a question, that’s messed up. I’ve sat there and thought, “okay, people will probably think this is a duplicate of that question; I really hate getting questions closed as duplicates, so I’m going to preemptively explain why it isn’t a dupe,” and then they still close it as a dupe. It’s insane. Or they find the one magical combination of words that I didn’t quite think of, despite my spending a good ten minutes looking for dupes before asking, and then act smug about it.
I don’t really use the sites anymore. Not even the more lighthearted and fun ones like RPG and World Building. I’ve just been so soured to it.
The amount of anxiety I have when asking a question there is insane. And I have 6k+ rep. They weren’t wrong, I do know the site well. I have used it a lot. But if me, an experienced user, is afraid to ask a question, that’s messed up.
Yup that’s practically the same problem I had. I posted maybe one question over the past 15 years. I got crapped on by one of their power users for not doing something properly and I never posted or asked a question again. I don’t even remember what account I originally used, either.
This is sort of why I like ChatGPT, I don’t get harassed for asking something incredibly stupid, and the crappy answers are about as bad as the “marked as duplicate” nonsense that gets me nowhere anyways. Why bother trying to interface with those communities ever again? IT in general already tilts heavily towards salty misanthropes, I’ll pass on that.
Understandably, it has become an increasingly hostile or apathetic environment over the years. If one checks questions from 10 years ago or so, one generally sees people eager to help one another.
Now they often expect you to have searched through possibly thousands of questions before you ask one, and immediately accuse you if you missed some – which is unfair, because a non-expert can often miss the connection between two questions phrased slightly differently.
On top of that, some of those questions and their answers are years old, so one wonders if their answers still apply. Often they don’t. But again it feels like you’re expected to know whether they still apply, as if you were an expert.
Of course it isn’t all like that, there are still kind and helpful people there. It’s just a statistical trend.
Possibly the site should implement an archival policy, where questions and answers are deleted or archived after a couple of years or so.
No, they shouldn’t be archived. I say that because technology can change. At some point they added a new sort method which favors more recent upvotes and it helps more recent answers show above old ones with more votes. This can happen on very old posts where everyone else might not use the site anymore. We shouldn’t expect the original asker to switch the accepted answer potentially years down the line.
There’s plenty of things wrong with SE and their community but I don’t think this is one that needs to change.
I can’t wait to read gems like “Answered 12/21/2005, you moron. Learn to search the website. No, I won’t link it for you, this is not a Q&A website”.
🤣
Answers from 2005 that may not be remotely relevant anymore, especially if a language has seen major updates in the TWENTY YEARS since!
More important for frameworks than languages, IMO. Frameworks change drastically in the span of 5-10 years.
The worst is when you actually read all those questions, clearly stated how they don’t apply and that you already tried them, and a mod still closes your question as a duplicate.
Human nature remembers negative experiences much better than positive ones, so it only takes about 5% assholes before it feels like everyone is toxic.
True that! And a change from 2% to 5% may feel much larger than it is.
As alluded to by comments here already, a long coming death.
It will probably go down as a marker of the darker side of tech culture, which, not coincidentally (?), manifested at a time when the field was most confused about what constitutes its actual discipline and whether it was an engineering field at all.
It’s hostile to new users, and when you do ask, you’ll likely not get an answer; you might get scolded, or your question just gets closed as a duplicate. Then there’s the fact that most questions already have answers, whether or not they’re outdated or just bad advice. Pretty much everything is on GitHub now. Usually I just raise a genuine question there and get an answer from the developers themselves, or go to the project’s API/library docs, which have gotten good lately. And finally, with the recent addition of ChatGPT, you can ask just about any stupid question and it may give you some idea for fixing the problem you’ve encountered. Pretty much the ultimate rubber duck buddy.
All questions have been asked and all answers have been given
Are they all linked in a “duplicate of” circle yet?
and copilot and chatgpt give good enough answers without being unfriendly
ChatGPT has no knowledge of the answers it gives. It is simply a text completion algorithm. It is fundamentally the same as the thing above your phone keyboard that suggests words as you type, just with much more training data.
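The keyboard-suggestion analogy above can be illustrated in miniature with a toy bigram model. This is a deliberately simplified sketch (real LLMs are vastly more sophisticated than frequency counting), and the corpus is invented:

```python
from collections import Counter, defaultdict

# A toy "next word" suggester: for each word, remember how often each
# word follows it, then suggest the most frequent follower.
def train(corpus):
    following = defaultdict(Counter)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        following[a][b] += 1
    return following

def suggest(model, word):
    nxt = model.get(word)
    return nxt.most_common(1)[0][0] if nxt else None

model = train("the cat sat on the mat and the cat ran")
print(suggest(model, "the"))  # → 'cat'  ("the cat" occurs twice, "the mat" once)
```

The suggester has no notion of truth, only of what tends to come next, which is the commenter’s point, scaled down.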
Who cares? It still gives me the answers i am looking for.
Yeah it gives you the answers you ask it to give you. It doesn’t matter if they are true or not, only if they look like the thing you’re looking for.
How is that practically different from a user perspective than answers on SO? Either way, I still have to try the suggested solutions to see if they work in my particular situation.
At least with those, you can be reasonably confident that a single person at some point believed in their answer as a coherent solution
That doesn’t exactly inspire confidence.
An incorrect answer can still be valuable. It can give some hint of where to look next.
@magic_lobster_party I can’t believe someone wrote that. Incorrect answers do more harm than good. If the person asking doesn’t know the answer, how are they supposed to know it’s incorrect and treat it as a hint?
In my experience, with both coding and natural sciences, a slightly incorrect answer that you attempt to apply, realize is wrong in some way during initial testing/analysis, then you tweak until it’s correct, is very useful, especially compared to not receiving any answer or being ridiculed by internet randos.
In the context of coding it can be valuable. I set up two tables in a database and asked it to write a query, and it did 90% of the job; it was just using an incorrect column for a join. If you’re using it for coding, you should notice very quickly what’s wrong, at least if you have experience.
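A minimal sketch of the scenario described, using an in-memory SQLite database with invented table and column names, shows why a wrong join column is the kind of bug that is easy to catch if you inspect the output:

```python
import sqlite3

# Hypothetical schema illustrating the two-table join described above.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users  VALUES (1, 'ada'), (2, 'bob');
    INSERT INTO orders VALUES (10, 1, 9.5), (11, 1, 3.0), (12, 2, 7.0);
""")

# Correct join: orders.user_id matches users.id.
rows = con.execute("""
    SELECT u.name, SUM(o.total)
    FROM users u JOIN orders o ON o.user_id = u.id
    GROUP BY u.name ORDER BY u.name
""").fetchall()
print(rows)  # → [('ada', 12.5), ('bob', 7.0)]

# The bug described above would be joining on the wrong column
# (e.g. ON o.id = u.id): the query still runs, but silently returns
# wrong results, so you only notice if you check the output.
```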
Well, if they’re referring to a coding solution, they’re right: sometimes non-working code can lead to a working solution, if you know what you’re doing, of course.
Google the provided solution for additional sources. Often when I search for solutions to problems I don’t get the right answer directly. Often the provided solution may not even work for me.
But I might find other clues of the problem which can aid me in further research. In the end I finally have all the clues I need to find the answer to my question.
I don’t know about others’ experiences, but I’ve been completely stuck on problems I only figured out how to solve with chatGPT. It’s very forgiving when I don’t know the name of something I’m trying to do or don’t know how to phrase it well, so even if the actual answer is wrong it gives me somewhere to start and clues me in to the terminology to use.
The good thing, if it gives you the answer in a programming language, is that it’s quite simple to test whether the output is what you expect. Also, a lot of humans give wrong answers…
What point are you trying to make? LLMs are incredibly useful tools
Yeah for generating prose, not for solving technical problems.
You’ve never actually used them properly then.
not for solving technical problems
One example is writing complex regex. A simple well written prompt can get you 90% the way there. It’s a huge time saver.
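The thread doesn’t give the actual regex task, so as an illustration, here is the kind of pattern one might ask an LLM to draft and then verify with quick tests, e.g. matching ISO 8601 dates (the pattern and test strings are invented for the example):

```python
import re

# A prompt-drafted pattern still needs checking: exercise it against
# strings it should and shouldn't match before trusting it.
ISO_DATE = re.compile(r"^(\d{4})-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

assert ISO_DATE.match("2023-11-05")
assert not ISO_DATE.match("2023-13-05")  # month 13 rejected
assert not ISO_DATE.match("23-11-05")    # two-digit year rejected
print("all checks passed")
```

Getting “90% of the way there” typically means the drafted pattern passes most of these checks and you hand-tune the edge cases.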
for generating prose
It’s great at writing boilerplate code, so I can spend more of my time architecting solutions instead of typing.
There was a story once that said if you put an infinite number of monkeys in front of an infinite number of typewriters, they would eventually produce the works of William Shakespeare.
So far, the Internet has not shown that to be true. Example: Twitter.
Now we have an artificial monkey remixing all of that, at our request, and we’re trying to find something resembling Hamlet’s Soliloquy in what it tells us. What it gives you is meaningless unless you interpret it in a way that works for you – how do you know the answer is correct if you don’t test it? In other words, you have to ensure the answers it gives are what you are looking for.
In that scenario, it’s just a big expensive rubber duck you are using to debug your work.
There’s a bunch of people telling you “ChatGPT helps me when I have coding problems.” And you’re responding “No it doesn’t.”
Your analogy is eloquent and easy to grasp and also wrong.
Fair point, and thank you. Let me clarify a bit.
It wasn’t my intention to say ChatGPT isn’t helpful. I’ve heard stories of people using it to great effect, but I’ve also heard stories of people who had it return the same non-solutions they had already found and dismissed. Just like any tool, actually…
I was just pointing out that it is functionally similar to scanning SO, tech docs, Slashdot, Reddit, and other sources looking for an answer to our question. ChatGPT doesn’t have a magical source of knowledge that we collectively do not have; it just has speed and a lot of processing power. We all still have to verify the answers it gives, just like we would anything from SO.
My last sentence was rushed, not 100% accurate, and shows some of my prejudices about ChatGPT. I think ChatGPT works best when it is treated like a rubber duck – give it your problem, ask it for input, but then use that as a prompt to spur your own learning and further discovery. Don’t use it to replace your own thinking and learning.
Even if ChatGPT is giving exactly the same quality of answer as you can get out of Stack Overflow, it gives it to you much more quickly and pieces together multiple answers into a script you can copy and work with immediately. And it’s polite when doing so, and will amend and refine its answers immediately for you if you engage it in some back-and-forth dialogue. That makes it better than Stack Overflow and not functionally similar.
I’ve done plenty of rubber duck programming before, and it’s nothing like working with ChatGPT. The rubber duck never writes code for me. It never gives me new information that I didn’t already know. Even though sometimes the information ChatGPT gives me is wrong, that’s still far better than just mutely staring back at me like a rubber duck does. A rubber duck teaches me nothing.
“Verifying” the answer given by ChatGPT can be as simple as just going ahead and running it. I can’t think of anything simpler than that, you’re going to have to run the code eventually anyway. Even if I was the world’s greatest expert on something, if I wrote some code to do a thing I would then run it to see if it worked rather than just pushing it to master and expecting everything to be fine.
This doesn’t “replace your own thinking and learning” any more than copying and pasting a bit of code out of Stack Overflow does. Indeed, it’s much easier to learn from ChatGPT because you can ask it “what does that line with the angle brackets do?” or “Could you add some comments to the loop explaining all the steps” or whatever and it’ll immediately comply.
I honestly believe people are way overvaluing the responses ChatGPT gives.
For a lot of boilerplating scenarios or trying to resolve some pretty standard stuff, it’s good.
I had an issue a while back with QueryDSL running against an MSSQL instance, which I tried resolving by asking ChatGPT some pretty straightforward questions about the tool. Without going into too much detail, I basically got stuck in a loop where ChatGPT kept suggesting solutions that were not viable at all in QueryDSL. I pointed this out, explaining why what it did was wrong, and it tried correcting itself by suggesting the same broken solutions.
The AI is great until whatever it was trained on doesn’t cover your situation. My solution was a bit of digging on Google away, which helped me resolve the issue. But had I been stuck with only ChatGPT, I’d still be going around in loops.
It really doesn’t work as a replacement for google/docs/forums. It’s another tool in your belt, though, once you get a good feel for its limitations and use cases; I think of it more like upgraded rubber duck debugging. Bad for getting specific information and fixes, but great for getting new perspectives and/or directions to research further.
I agree! It has been a great help in those cases.
I just don’t believe that it can fulfill the actual need for sites like Stack Overflow. It probably never will be able to either, unless we manage to make it learn new things without reliable sources like SO, while also letting it snap up these obscure answers to problems without burying them under tons of broken solutions.
ChatGPT is great for simple questions that have been asked and answered a million times previously. I don’t see any downside to these types of questions not being posted to SO…
Exactly this. SO is now just a repository of answers that ChatGPT and its ilk can train against. A high percentage of the questions that SO users need answers to are already asked and answered. New and novel problems arise so infrequently, thanks to the way modern tech companies are structured, that an AI that can train on the existing answers and update itself periodically is all most people need anymore… (I realize that was rambling; I hope it made sense.)
So soon they will start responding with “this has been asked before, let’s change the subject”
🤣 🤣
Exactly! It will all come full circle
A repository of often (or at least not seldom) outdated answers.
yes! this! is chatgpt intelligent: no! does it more often than not give good enough answers to daily but somewhat obscure and specific programming questions: yes! is a person on SO intelligent: maybe. do they give good enough answers to daily but somewhat obscure and specific programming questions: mostly
It’s not great for complex stuff, but for quick questions when you’re stuck, the answers come quicker, without snark, and usually work.
@focus Is that the reason why we get more and more AI written articles?
No, that’s because of capitalism.
Perhaps it’s easier to ask copilot or chatgpt. A quick but slightly inaccurate response might satisfy the user better.
ChatGPT has also found stupid typos or misplaced commas that humans can miss.
When it’s not busy introducing its own typos and mistakes that a human then has to catch.
Definitely replaced the site for me. I always just needed a little nudge where I was missing something obvious or new. They should be happy now that no one is taking up their “free time”, a constant excuse for being toxic to new users.
@xePBMg9 I prefer human responses.
Half of a fuck-ton is still a lot. If they scale down their operational costs they can still run a very comfortable business for a long while on these kinds of numbers.
I think the point is not their viability as a business but their relevance in the industry.
I suppose the same number of experts are still on Stack Overflow, and they’re living in good times: there isn’t much spam to hate on.
Most visits to SO come from novice programmers. Currently they live off of AI answers and help from more experienced co-workers.
I think the school of SO will last, and the community is not hostile; but some people tend to forget that the quality of a question is very important.
Other factors:
SO Jobs was shut down.
There is no new technology which enables a new SO chapter. There aren’t too many new questions about AI.
What do you think?
As long as an LLM doesn’t run into a corner, making the same mistakes over and over again, it is magical to just paste some code, ask what’s wrong with it, and receive a detailed explanation plus a fix. Even better is when you ask “now can you add this and this to it?” and it does.
ChatGPT doesn’t chastise me like a drill instructor whenever I ask it about coding problems.
It just invents the answer out of thin air, or worse, it gives you subtle errors you won’t notice until you’re 20 hours into debugging
Just like real humans.
so, like SO?
I agree with you that it sometimes gives wrong answers. But most of the time, it helps better than Stack Overflow, especially with simple problems. I mean, there wouldn’t be such an exodus from Stack Overflow if ChatGPT’s answers were so bad, right?
But, for very specific subjects or bizarre situations, it obviously cannot replace SO.
And you won’t know whether the answers it gave you are OK until it’s too late. It seems like the Russian roulette of tech support: it’s very helpful until it isn’t.
Depending on Eliza MK50 for tech support doesn’t stop feeling absurd to me
How do you know the answer that gets copied from SO will not have any downsides later? Chatgpt is just a tool. I can hit myself in the face with a wrench as well, if I use it in a dumb way. IMHO the people that get bitten in the ass by chatgpt answers are the same that just copied SO code without even trying to understand what it is really doing…
Sounds the same as believing a random stranger.
How many SO topics have you seen with only one, universally agreed upon solution?
It’s funny, because if you look at the numbers, traffic started to go down before ChatGPT was actually released to the public, indicating that maybe people thought the site was too much of a pain in the ass to deal with before that, and ChatGPT is just the nail in the coffin.
Personally, of all the attempts I’ve made at positive interactions on that site, I’ve had only one, and at this point I treat it as a read-only site because it’s not worth my time arguing with pedants just to get a question answered.
If I went to the library and all the librarians were assholes I probably wouldn’t go to that library anymore either.