Volunteer admins with nothing better to do get their dopamine by closing questions for StackOverflow points, regardless of whether the supposedly duped question from 8 years ago is actually still the best answer and covers the nuances of the question now being asked.
There probably is still a space for a SO-style site to exist, but they'd need a drastic change of approach. LLMs (+ Reddit, I suppose?) have taken over most of the engineering support role.
This rang so true to me, given that my answer from 4 years ago was closed as a duplicate of an answer made 3 months ago :D (no, the nuances were not considered, and the questions were ultimately too different; this didn't influence the moderation decision at all, and I was very confused about how I'd managed to make a duplicate, 4 years ago, of a question that was, at that time, in the future)
A niche place to find the solution for something getting in your way.
Instead, my own experience and every anecdote I've ever heard from those who tried participating mirrors this one.
Genuine questions and thought out responses closed in the harshest way possible.
If the policy on duplicates weren't so rigidly and coldly enforced, it would be a place I'd visit frequently to learn.
Instead, I avoid it and don't feel bad that it's been superseded by LLMs. Which sucks, because good human responses are far preferable.
They are psychological, manipulative, influencing tools. It's like an annoying wasp that appears out of nowhere and follows you around.
Nobody asked for this.
When this comes to Stack Exchange, use a Pi-hole and protect yourself from this barrage of irrelevant ads.
Although Stack Overflow, terrible as it is, is not one of those.
At least it was originally like that. Nowadays political propaganda is also massive. The monetary value to Russia or Israel, of the majority of the USA supporting their side of their war, is immense.
The people who aren't willing to sign up for an account and pay a monthly/annual/per-article fee asked for this.
People have bills to pay after all.
https://cloud.google.com/blog/topics/threat-intelligence/int...
Note: Ctrl+F "malicious advertisements", since the purpose of the post was not to highlight that fact. But given that Google is an advertising company and they still mention it ...
Friend, it has always been this way. Malware in ads is literally the primary reason we created ad blockers in the dark mists of time.
Flash was killed off because a majority of ads became cancerous Flash abominations executing all kinds of malicious code.
> They are psychological, manipulative, influencing tools.
The second paragraph is, in my opinion, also an accurate description of a great many people. By your argument that ads should not exist at all: shouldn't these people also not exist at all?
Well... the advertisers did.
More and more, I think we need volunteer projects running the things we depend on the most: community-driven email, forums, social networks, and Q&A sites like Stack Overflow. A community-driven Stack Overflow could still run a job board, have the C# section be "Sponsored by Microsoft", or run a JetBrains ad. If you only have to pay for hosting, then you need less ad revenue.
I see prices around me for $500-ish/month for half-rack colo. Of course you have to bring your own servers if using this option.
https://github.com/gorhill/uBlock?tab=readme-ov-file#ublock-...
I mean, is there any genuine use case you can cover with SO that you can't with your favorite LLM?
Because an LLM falls short in the same areas SO fell short: highly technical questions about a particular technology or tool, where your best chance of getting the answer you're looking for is asking in its GitHub repo or contacting the maintainers.
"We've got an egg, why do we still need the chicken, eh?"
Perhaps it was better than current models at detecting, and pushing back, when it sounded like the person asking the question was thinking of doing something silly/dubious/debatable.
As I see it, the next step is a synthesis of the two, whereby StackOverflow (or a competitor) reverses their ban on GenAI [0] and explicitly accepts AI users. I'm thinking that for moderation purposes, these would have to be explicitly marked as AIs, and would have to be "sponsored" by a proven-human StackOverflow user of good standing. Other than that, the AI users would act exactly as human users, being able to add questions, answers and comments, as well as to upvote and downvote other entries, based on the existing (or modified) reputation system.
I imagine, for example, that for any non-sensitive open source project I'm using Claude Code on, I would give it explicit permission to interact on SO: for any difficult issue it encounters, it would try to find an existing question that might be relevant and, if found, try the answers there and upvote/comment on them, or else create a new question and either get good answers from others or self-answer it, if it later found its own solution.
[0] https://meta.stackoverflow.com/questions/421831/policy-gener...
I can't believe we keep making progress. You know. Things get better and better as time goes on. Right?
Right?
...... Right?
Given how ruthlessly this site treated everyone when it was relevant, not a single tear will be shed when the front page is a letter from the founder.