Screaming into the void of the blogosphere is catharsis for getting my SO question closed.
And because I know you're all nosy, the SO question is here: https://stackoverflow.com/questions/79605462/high-cpu-usage-... . Please feel free to point out more ways in which I screwed up asking my SO question.
It was immediately closed as off-topic with 3 votes. One commenter asked "can't you figure that out using developer tools?"
It was eventually reopened, but I'm never asking a question there again. FWIW one of the governing web bodies essentially says "If you have technical questions, ask them on SO and not here" - I take this to mean that they don't care if you are able to answer your question, they just want you to go away.
Was it so important for the quality of the site to delete a reply thanking someone for their OSS contribution? Is SO a better site now? Meanwhile they have 10 year old answers still up, with no way to indicate they are outdated unless the author themselves goes through the trouble of updating their answer.
I will say, this is a level of question that is too sophisticated for SO, and it will likely only get an answer once you figure it out yourself and go back to answer your own question.
Are you confident the code is the issue--have you repro'd it consistently with different versions of .NET? What about reproing on different machines? Locally?
I had the same suspicion though, which is that if the question is too difficult it triggers some reflex...
Could it be that, for years, they've been selecting for things like:
- Low friction onboarding
- Upvoting “easy wins” and thought experiments over real-world, in-the-trenches problems
- Penalizing questions that didn’t fit the format—even when those of us around since the early days knew they’d likely get good answers within a few days
- Incentivizing moderators to strictly enforce rules that, frankly, weren’t necessarily great to begin with
Just a thought.
If S.O. believes that deleting everything users post there is somehow improving their site and going to make it relevant again, more power to them. It's their site. Let's see how that goes for them.
Or at least that is my guess, since I stopped working with WCF around 2016, probably.
Anyway, in newer versions of .NET you get a CancellationToken everywhere, which would do exactly that: tell your server that the client disconnected. That would be my first try at fixing it.
Use the token that the HTTP implementation passes to the endpoint, hand it to your stream, and when it is cancelled, end the stream. Stream ends, endpoint finishes, no CPU load.
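Roughly, a minimal sketch of the idea (assuming a plain ASP.NET Core minimal API on .NET 6+, not CoreWCF specifically; ProduceNextChunkAsync is a hypothetical stand-in for whatever produces your streamed data):

```csharp
// Minimal sketch: HttpContext.RequestAborted is cancelled by the framework when
// the client disconnects; passing it through the write loop ends the stream
// instead of leaving the endpoint spinning.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/stream", async (HttpContext ctx) =>
{
    var aborted = ctx.RequestAborted; // signalled on client disconnect

    try
    {
        while (true)
        {
            var chunk = await ProduceNextChunkAsync(aborted); // hypothetical producer
            await ctx.Response.Body.WriteAsync(chunk, aborted);
            await ctx.Response.Body.FlushAsync(aborted);
        }
    }
    catch (OperationCanceledException)
    {
        // Client went away: stop writing, let the endpoint finish, no CPU spin.
    }
});

app.Run();

// Stand-in for whatever actually produces the streamed data.
static async Task<byte[]> ProduceNextChunkAsync(CancellationToken token)
{
    await Task.Delay(100, token);
    return System.Text.Encoding.UTF8.GetBytes("tick\n");
}
```

If CoreWCF exposes a cancellation token for the operation, the same pattern should apply, but I haven't checked where it surfaces one.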
Using has similar problems with things like database connections remaining open.
(Edit: it seems people do care about CoreWCF ITT. That's nice to see.)
> Screaming into the void of the blogosphere is catharsis for getting my SO question closed.
That's fine. Almost everyone who comes to SO, in my experience, has a fundamentally wrong idea about how the site is intended to work. That includes people who don't have a question and only want to post answers. Unfortunately, it's difficult to explain because people find the model unintuitive - the UI affords using the place just like many others, even though the site was created exactly to get away from frustrations caused by older models (https://meta.stackexchange.com/questions/92107). And the real objective is a synthesis of many not-always-compatible ideas (https://meta.stackoverflow.com/questions/254770). My personal sense is that the community didn't really get a handle on "what SO is" until around the time that new question volume peaked (way back in 2014).
Even then, people can hang around for years and not really get it (e.g. https://meta.stackoverflow.com/questions/427224) - in large part because the policies have been inconsistently applied on a volunteer basis, and the people who are allowed to e.g. cast close votes are vastly outnumbered.
We generally don't care about people not liking the Stack Overflow model while discussing it off-site. There's far too much of that to worry about. But that doesn't mean we'll change to accommodate everyone else. The entire point is to provide something that isn't available everywhere you look: a polished artifact, an organized repository of commonly-needed, high-quality answers to clear, focused, practical questions.
Do we accomplish that goal? Hell no, not by a long shot. But there are some real gems in there - and a few of them have millions of views. And as the rate of new questions slows, users who put on the "curator" hat become able to keep on top of the incoming queue, filter through for what's of value (and not a duplicate), and even turn attention towards the old Q&A to improve it (incidentally, a lot of that work is rounding up old duplicates that went unnoticed).
> I had forgotten that any external links are a big no-no in SO land, so my question immediately attracted 2 close votes.
The problem isn't simply including an external link (we'll happily just edit those out if they aren't necessary). The problem occurs when a question appears to depend upon the externally linked content. We can't accept that (https://meta.stackoverflow.com/questions/254428) because of link rot and licensing issues (someone who wants to answer you often needs to be able to cite the code; posting on-site automatically licenses the content appropriately, per the terms of service) but mainly because of scope - a question that's suitable for the Stack Exchange format would fit neatly within the actual question text.
We don't want to do detailed analysis of the problem you encountered, even if we're capable of it, because questions are for everyone. They need to be able to reflect a problem that other people could a) have; b) plausibly search for; and c) recognize if they found it. Answers to a question need to make sense in general to people who would ask - not just in the specific context of one person's original problem. In short, we want a question, not a problem - and extracting a proper question starts with (https://meta.stackoverflow.com/questions/261592) your own analysis.
"How do I do X?" questions are usually much easier to ask in the format, and are very valuable and can end up very well regarded, even when they're on very basic topics. But "what went wrong with Y code?" is not fully refined. What we're really looking for is more like "why does Y' code construct do Z?" - where the specific, exact cause of failure (https://stackoverflow.com/help/minimal-reproducible-example) is extracted from your own debugging session (along with reproducing input and actual vs expected output).
> Two days later my question got it's third vote for closure, and remains unanswered and now closed forever.
This is literally not how Stack Overflow works. The OP has at least (https://stackoverflow.com/help/auto-deleted-questions) 9 days to fix the question and nominate it for reopening until it gets "deleted"; but even then it's a soft deletion (delisting) which is still reversible - you can find the question from your personal listing (https://stackoverflow.com/users/deleted-questions/current while logged in; or replace 'current' with your user ID), edit and nominate for undeletion.
The established policy is that we intentionally close questions that don't meet standards (https://meta.stackoverflow.com/questions/417476) as quickly as possible (https://meta.stackoverflow.com/questions/260263). The main point of this is to prevent the sort of people (notice that https://meta.stackoverflow.com/questions/271684 is over 10 years old; and the original complaint https://meta.stackexchange.com/questions/9731 is from before the official launch, during the private beta) who would otherwise hang out on a traditional discussion forum 12 hours a day from trying to read the OP's mind, repost the same basic explanation of the same basic idea dozens of times, etc.
(Unfortunately, the incentive system is completely broken - https://meta.stackexchange.com/questions/387356 - and the company's interests are not aligned with the community, so this is a losing battle.)
And, in fact, your question has been reopened, as of about 3/4 of an hour after your comment that I'm replying to. Stack Overflow is not at all immune to external pressure - after all, many regulars there are also on HN and other usual-suspect sites.
It also looks like your edits have actually improved the question. In particular, adding in a definite conclusion from your profiling attempt.
(We understand that a lot of people in a situation like yours wouldn't necessarily know how to use a profiler and wouldn't necessarily be able to come up with a theory about what's wrong. That isn't our problem. We aren't offering tech support. It's a bitter pill for almost everyone, but Stack Overflow by design is not there to make your code work. It's there to answer questions that arise during your attempt. And a question like yours, properly refined, can help those other people.)
True. I quit trying to do anything there once I realized that SO was fundamentally not useful to me. It advertised itself as a gamified Q&A platform, but was actually a knowledge-base pseudo-wiki thing structured in a way that didn't lend itself to answering the questions I needed answered.
So, I think a lot of the negative reactions are deserved, because SO looks like something it isn't.
People want a place to get help. SO looks like a place to get help. But SO is a place to ask for help only if your problem fits a specific set of requirements. And since most problems will never meet said requirements, most people can never actually get help on SO.
I post this in part because I'm still salty about how much time I wasted trying to get help only to get downvoted, but also because if SO actually wants to do what they say, they really need to restructure into something that actually looks like what they want to be.
My suggestion would be to have two sites, one that is actually a general Q&A site like what everyone is after, the other is the kind of knowledge repository that SO wants to be. Then you just promote the really good questions from the Q&A site into the other site.
I'd also recommend ending the whole "downvote" idea. I have yet to see it not result in cliques and in discriminating against viewpoints the people with downvote permissions don't like. Let a lack of upvotes cause poor content to drop to the bottom.
I recommend looking for alternatives, because this problem can't really be fixed and the site owners seem intent on making it worse. I personally use and recommend (and am a moderator at, full disclosure) Codidact Software: https://software.codidact.com/ . But at Codidact we're still fundamentally using the same "Q&A site" (I don't think this means the same thing you appear to think it means) model. We just have proper community involvement (the site is owned by a non-profit foundation and committed to community sovereignty; see https://codidact.org/), new site software, and newly conceived site scope.
> My suggestion would be to have two sites, one that is actually a general Q&A site like what everyone is after, the other is the kind of knowledge repository that SO wants to be.
The problem is that there are already countless sites "like what everyone is after". If you try to split a Q&A site like Stack Overflow that way without changing the actual UI, the problem just repeats itself: people try to use the knowledge repository part as if it were yet another traditional forum.
And it turns out, people often think they're after that model, then get fed up with it over time.
I think an idea like yours can be done, but it would require radically different site software. (In early 2023 - I think - I kept myself busy with contemplating a design for exactly this, but I didn't really write anything down.)
The current state of SO is what it is, but saying this is how it was intended to be is 100% revisionist history.
Also "we" "our"... do you work at SO?
How can a question that is:
1) clearly technical
2) reproducible
3) has a clear failure condition
not be a suitable candidate for S/O?
Did we step into a dimension where only "How do I print('hello world')?" is a valid question while I wasn't watching, because it has a trivial one-line answer?
Hard questions aren't bad questions; it just means many people aren't competent to answer them. The same goes for obscure questions; there might just not be many people who care, but the question itself is entirely valid.
Does that mean they're not suitable for S/O?
I... can't believe anyone seriously believes that hard niche problems are too obscure or too hard for S/O to be bothered to grace themselves with.
It's absurd.
It just baffles me that a question that might take some effort to figure out an answer to might 'not be suitable' for S/O.
Is it? What hardware and OS version should I use to reproduce the server?
The complete source code of the server is in the question.
I'm not really asking where .NET runs; I'm asking what is the environment to reproduce it in? If it only affects certain ones it's a waste of time to just make people guess.
> The complete source code of the server is in the question.
Yes, I saw.
The problem is complexity and scope.
We don't debug code for others. We expect them to find the specific part of the code that is causing a problem and showcase a minimal reproducible example. For performance issues, we expect them to profile code and isolate bottlenecks - and then they can ask a hard, obscure question about the bottleneck. Or a very easy one, as long as it's something that could make sense to ask after putting in the effort.
In short: we're looking for a question, not a problem. Stack Overflow "can't be bothered to grace itself with" hard niche problems, or with easy common problems. But it is about answering the question that results from an analysis of a problem. Whether that's understanding the exact semantics of argument passing, or just wanting to know how to concatenate lists.
And we're looking for one question at a time. If there are multiple issues in a piece of code, they need to be isolated and asked about separately. If the task clearly breaks down into a series of steps in one obvious way, then you need to figure out which of those steps is actually causing a problem first, and ask about each such step separately. (Or better yet, find the existing Q&A.)
(Questions seeking to figure out an algorithm are usually okay, but usually better asked on e.g. cs.stackexchange.com. And usually, an algorithm worth asking about isn't just "do X, then do Y, then do Z".)
Stack Overflow is full of highly competent people who are yearning for questions that demand their specific expertise - recently, not just in the 2010s.
Most questions I've asked since 2020 were deliberate hooks to deal with common beginner-level issues or close FAQs that didn't already have a clear duplicate target. (I've stopped contributing new Q&A, but still occasionally help out with curation tasks like editing.) But I asked https://stackoverflow.com/questions/75677825 because I actually wanted an answer, and it's an instructive example here.
Answering it required detailed expert-level knowledge of modern CPU architectures and reverse engineering of the Python implementation. Asking it required noticing a performance issue, then putting extensive effort into simplifying the examples as much as possible and diagnosing the exact qualities of the input that degrade performance - as well as ruling out other simple explanations and citing the existing Q&A about those.
But demonstrating it requires nothing more than a few invocations of the `timeit` standard library module.
Tangentially, one thing I think StackOverflow misses in its policy is not allowing questions that ask for help evaluating libraries and products in the developer space. There are many questions on SO where someone has asked what the best library for X is, and the question was closed due to site policy, but I think a discussion of which library is best for a use case is a very useful thing to have on SO. The answer can change over time, and multiple alternative solutions can be presented. This is the kind of thing Reddit has taken over for many product recommendations, but Reddit answers are just comments; they don't have SO's UI for commenting on questions and answers.
With pleasure! SO is definitely more of a distinct Q&A site and not a discursive, open-ended collaborate-and-problem-solve site.
The CoreWCF folks would be delighted to have a reproducible example, and they are best suited for answering the question. They have written CoreWCF after all! And if you don’t feel comfortable with raising issues? There’s also a Discussion feature on GitHub, where you can ask more free form questions.
And once you've fixed or analyzed the issue with the CoreWCF team, you could write a blog post about it and share it with anyone having a similar issue.
That’s how I do these things.
I posted a question[1]. Got some answers, but not quite complete. Then someone came along and provided a good detailed answer. A couple of upvotes later, that answer got deleted by Community Bot. I voted to undelete it, but it still needs another vote to undelete. So I ended up copying it into my own notes blog[2].
I'm not sure why the best answer was deleted. It would've been a loss if it wasn't preserved somewhere I think.
[1]: https://stackoverflow.com/q/73954228/155351
[2]: https://notes.max.engineer/creating-common-interfaces-in-gol...
It looks like the answer you're talking about is now undeleted, with yours among the three necessary votes. (It also looks like you accepted, and awarded a bounty to, a different answer back when you asked, and the answer you're now calling "best" was posted months later. So it goes.)
The explanation for the original deletion is exactly as ayhanfuat says. LLM and other GenAI content is forbidden on Stack Overflow (https://meta.stackoverflow.com/questions/421831/), since before that answer was posted, and the author (one of the site's most prolific contributors) got in trouble for a flood of such answers. (It took quite a while for the problem to be noticed and acted upon, in large part because of a moderation strike in June 2023 which ended up creating a large backlog of flags for AI content, and in part because custom flags are required to report this. The strike, in turn, was largely about the company/staff trying to interfere with moderators attempting to detect AI content, with the site owners being unwilling to accept false positives because it would be "unfriendly". So it goes.)
Thank goodness I can see it because I have enough reputation.
After a while, I stopped having to post questions about "common frameworks", either because I could make do with the official docs or because there was already a StackOverflow answer for my question.
What was becoming more common was that I would have a question similar to an existing unanswered one. Or that my question would never receive an answer (presumably because my questions were becoming more tricky/niche). So what I started doing was answering my own question (or answering those existing unanswered ones) after solving it on my own. Still, it was fine and I was contributing.
And for some reason, a few years ago my questions started being closed for no apparent reason other than "those who reviewed it have no clue and think that it is invalid". Many times they were closed even though I had posted both the question and the answer at the same time (as a way to help others)! The first few times, I fought to get my questions reopened and guess what? They all got a few tens of votes in the following year. Not so useless, eh?
Still, that toxic moderation hasn't changed. If anything, it has gotten worse. So I stopped contributing to StackOverflow entirely. If I find information there, that's great, if not, I won't go and add it once I find a solution for myself. I am usually better off opening an issue or discussion directly with the upstream project, bypassing StackOverflow's moderation.
I heard people mentioning that LLMs were hurting StackOverflow badly. I'm here to say that what pushed me away was the toxic moderation, not LLMs.
I have a rep that is based almost entirely on questions, not answers. I learned to ask questions fairly well (to the point that I seldom get answers - there's a price to pay for questions that are very specific).
In some cases, the question is a basic one, and doesn't need a code listing and sample project. It's still a perfectly valid, pertinent, thoughtful question, but not very verbose.
Those questions almost always get closed.
I have found that asking LLMs doesn't always get me the best answer, but I get an answer. In some cases, I can have an iterative refinement, where I keep adjusting the question, until I get a useful answer.
I've never gotten code from SO, that I can use without modification, but I have gotten some great answers, over the years, and have expressed gratitude and respect.
I have gotten quite annoyed with the "attitude" that is often expressed. There's no doubt that folks who ask questions are considered "lesser beings" on SO. Just look at the question-to-answer ratio of the high-score individuals. Weird attitude, for a site that is pretty much completely reliant on questions.
Basically, I have just given up on SO, and have found LLMs to give me what I used to get from it.
In my opinion, they have killed SO.
And because recourse is so hard and goes through the same gatekeepers anyway, they don't get any signal about the accuracy of the maintenance.
One of the reasons I left as well was bureaucrats wreaking havoc on perfectly reasonable answers trying to rack up these points.
The peak of the phenomenon was 2014, when people started publishing their SO scores on their resumes, but the platform never really recovered.
Could you describe this? A lot of people seem to believe that closing or duplicating questions awards reputation. It doesn't.
The complete list of reputation gain sources is at https://stackoverflow.com/help/whats-reputation
I would contend that the "close as something that you have an answer on" is less driven by "I want more votes on the answer" but rather "I know where to find this answer."
Alternatively, if the person didn't close it as an answer you would instead have the person copying and pasting the same answer into the new question - which would accomplish the same thing (more votes for your answers) and further fragment the "one place to look" ideal.
From the perspective of the site and curation of information, a given answer should appear in one and only one question. Closing a question as a duplicate serves to further that goal. Copying and pasting answers ( https://meta.stackoverflow.com/questions/320351/how-to-handl... ) to questions that would be duplicates is frowned upon. Diamond mods get such behavior raised to them as a system flag ( https://meta.stackoverflow.com/a/317988/ ) - "Duplicate answers (auto) - raised on each duplicate answer"
In general, answering a question that you're actively curating is looked down upon on meta (it raises suspicion of vote fraud; and yes, moderators do care about that quite a bit, even if they recognize how broken the reputation system is) unless you've also asked the question intentionally as a canonical duplicate target (https://meta.stackoverflow.com/questions/426205) and you're writing a new answer from scratch. And proper citations are required for anything you get from someone else - whether it's another SO answer or something elsewhere on the Internet.
New Q&A of this sort generally gets written because people recognize that a question is commonly asked about some basic material, but nobody who actually needs the question answers (and thus asks it anew) ever manages to come up with a high-quality phrasing. For example, https://stackoverflow.com/questions/45621722 was intentionally crafted in 2017 to make it easier to direct beginners who have trivial issues with Python indentation to gain a basic understanding of how it's supposed to work. (There are a few key ways to get an IndentationError that aren't caused by general cluelessness; generally those are still duplicates, but should be directed somewhere else.) In 2023, I did some site searches and identified hundreds of old questions that are clearly low-quality duplicates - more beginners asking basic questions about Python indentation; there isn't enough daylight between them to consider them different, as the underlying conceptual difficulty is the same.
This has nothing to do with ego. I don't know the original author, "Chris", and have not otherwise knowingly interacted with that person. But I (and others) did extensively edit the question - to help make sure that other beginners can see their own problem in the question, and to help everyone - people with a more complex problem, and curators trying to point people in the right direction - to find other questions if they're more appropriate.
The fact that a duplicate target was asked later is generally considered irrelevant. We want people to find the best version of the question (https://meta.stackoverflow.com/questions/258697, https://meta.stackoverflow.com/questions/404535). As a general principle, Stack Overflow curation doesn't care about when something was posted - only about how it holds up in the current moment.
> plenty with an answer lifted from the original.
Stack Overflow moderators take plagiarism very seriously. If you see a "lifted" answer anywhere on the site, please flag it.
This. It is exactly the problem with incentives.
At some point I was wondering why Tor was not offering incentives, which is something Nym was talking about. And I found an explanation on the Tor website that said something along those lines: "we thought about incentives, but we decided that we wanted contributions from people who cared, not from interested people". Makes sense to me.
If only. Sorry to say, all of this curation effort happens purely by intrinsic motivation - a desire to see a better-curated site.
It's objectively a good thing when more questions get closed (including marking duplicates) because the overwhelming majority of what gets posted is nowhere near meeting standards, and because those standards have been carefully considered with the site's goals in mind.
Those goals just don't happen to match the goals of the overwhelming majority of people who come to ask a new question on Stack Overflow. That's because they don't understand the site's purpose. There is a tremendous amount of misinformation out there (and the site owners are at least complicit in this, because it drives traffic).
In point of fact, my reputation increased the most during a period when I barely used the site at all, because I accumulated votes on answers I'd already written. And I didn't care about any of that, because it gets you absolutely nothing past IIRC about 35000. (The last privilege - https://stackoverflow.com/help/privileges - is awarded at 25000, but past that you can get an increase in the number of flags and votes you can cast daily. It would take an unimaginable level of obsession with the site to ever run out of validly raised flags, but I have run out of closure votes on several occasions.)
When I came back, I started actually paying attention to the meta site and understanding how Stack Overflow is actually intended to work, instead of just being another random person trying to contribute expertise. And my reputation has actually levelled off and declined, mainly because I award generous bounties for existing exceptional answers, or to promote the few high-quality questions I find that need a better answer (especially, questions that I'd like to use as a duplicate target, but wouldn't provide others asking the question with a good enough answer).
> bureaucrats wreaking havoc on perfectly reasonable answers trying to rack up these points.
It's not bureaucracy and it isn't "trying to rack up points". You get two reputation points for an accepted suggested edit, only if you don't already have at least 1000 points and only if you get two out of three users with unilateral edit privileges to agree that it's a good edit (and they, in turn, are incentivized to steal your edit - not for reputation, but because they can get it published unilaterally instead of waiting for someone else to approve). You can't even reach unilateral edit privileges this way, since you need 2000 points for that.
Among people making edits unilaterally - both to questions and answers - this is overwhelmingly motivated by good faith attempts to improve quality. "Perfectly reasonable" is not the standard. The standard is "as good as the available attention allows" (ideally, people focus on more popular content). When you post on Stack Overflow, you license the content to the community (and separately also to the site and company) and they are absolutely within their rights to make good faith edits. If you want to share "your" ideas with the world and not allow others to touch them, use a blog.
As I said, those were pretty specialised questions, you can't expect to have 10 upvotes in the first day for those.
I'm happy to hear it. This is how it's supposed to work. If the system were properly designed, questions would start out closed - that is to say: the community would have a chance to fully refine the question and ensure that it meets the site's standards, before people were allowed to write answers.
(The new Staging Ground implements a form of this, for a small selection of new questions.)
The point is to ensure that everyone who has the same question can have an optimal experience by finding it: they should see a question that's easy for them to read and understand; they should easily be able to verify that it's the same question (even though it came up in a radically different context for someone else); they should be able to come across it with a search engine (so the title should make sense, etc.); and it should be properly focused. Then they can scroll down - ideally, not very far - and see the answers, already written, without themselves having to ask again and wait.
> you can't expect to have 10 upvotes in the first day for those.
Ultimately, the thing that gets a question 10 upvotes in the first day is off-site exposure. That's not how it's supposed to work, but the Internet is what it is.
But everyone who writes a question thinks their questions are "valid", or they wouldn't post in the first place. You aren't the one who gets to decide whether a question meets the standards to stay open; when a question is closed, you are the one primarily responsible for fixing the problem identified with it.
And "valid" is not the standard: https://meta.stackoverflow.com/questions/417476 I don't understand why, but the adjective "valid" seems to be very popular among people who complain about having their question closed. It has nothing to do with how our standards are written, though.
Speaking of which, you also claimed that the people who closed your questions had "no apparent reason other than 'those who reviewed it have no clue and think that it is invalid'". But this directly contradicts what you were told about the closure - I know this because there is a very small set of things you can be told by the system, and none of them matches that description. You have no evidence to back up that mindset; and, as far as I can tell, instead of trying to use the meta site and/or comments to get clarification, you assumed bad faith.
Questions are closed preemptively as an injunction against answers, not as a punishment, as I repeatedly explained throughout this thread. I've also more recently posted a reference question (with my own answer, among others) on the meta site explaining this to would-be answerers: https://meta.stackoverflow.com/questions/429808 (Here I used the word "valid" in the title deliberately as an eye-catch, because I'm not just noticing the trend now.)
All of this happens because questions and answers are for everyone; it's not just about you as the poster. We're trying to maintain quality control for the benefit of countless future readers, not answering a question in the hopes that you, personally, have a better day of programming as a result (typically referred to as "operating as a help desk" or similar on meta). We want everyone to have the experience of searching for an answer and directly getting one - not getting lost in someone else's conversation or spending time trying to figure out what they're talking about, or verifying that they're in the right place.
Because the latter experience has existed since the creation of phpBB, if not Usenet. And Stack Overflow was specifically borne out of frustration with it.
Having met many SO power users in group settings over the years, I feel that there is very little tolerance for questions that require effort to understand. If it's a simple question posed by a non-English speaker that needs some thought, then it doesn't belong (but that's why comments were introduced later.) The same goes for a deeper technical question where the author gets it all out but doesn't take the time to structure or format it. The volume of behavior like this differs based on the type of question and the experts prepared to weigh in.
This gets compounded by the up and comers on the reputation scale. They get their special powers and see this BOFH close behavior and replicate it. Over time it starts to become the norm. I had the ability to vote for reopens and these same people would argue about why this was a bad idea. They weren't prepared to admit they were wrong and felt they were doing God's work by ridding the site of poor questions when some of us even had the ability to make edits to clarify them.
I just opened the site after some time away. At the top, pushing the question list below the fold, are: Reputation, Badge Progress, and Watched Tags blocks. The Interesting Posts for You question feed is below that and I have to go see how that is constructed. I only ever wanted the firehose of new questions with my tags highlighted.
EDIT: The behavior I noted above is yet another example of why I always want to know how a job candidate deals with ambiguity. In my experience, this has a massive impact on the ability to work independently, not piss off colleagues/clients/customers, and make good decisions.
No, that's not why. If we can understand the English, we edit to fix the English.
We constantly get questions by native English speakers that are nevertheless barely comprehensible. Even when the problem is clearly described, it still needs to meet several other standards (https://meta.stackoverflow.com/questions/417476/). This is by design.
We aren't closing questions because we want to close questions. We're closing questions because they need to be improved by the OP (i.e., fixing the question requires OP's perspective or knowledge) before they are compatible with the site's objectives, which do not necessarily align with yours as a person who has a question.
This is not a punishment and is not in general a permanent state. Closed questions can be, and are, re-opened if the problem with the question is fixed (without fundamentally changing it).
> I just opened the site after some time away. At the top, pushing the question list below the fold, are: Reputation, Badge Progress, and Watched Tags blocks.
And the people doing the majority of the curation work do not care in the slightest about reputation or badges. I certainly don't.
The users with the most reputation are generally the ones who spend hours a day answering easy questions that don't come anywhere close to meeting the site's standards (not because they're easy, but because they're terribly asked and probably duplicates) after doing a bit of mind-reading to figure out what the terribly-asked question is (or scanning through a couple dozen lines of code for trivial problems without really reading the question - because they usually don't need to) and getting a quick upvote and accept from the OP.
Questions like that have ruined the site and continue to make it worse - by diluting search results, by making it harder for curators to find the "canonical" targets for closing duplicates, by click-baiting away from questions other people actually want to find (e.g. by describing a completely different problem with all the same keywords, or by completely misidentifying what's wrong), and most of all by the broken-window effect (bad content examples overwhelm good ones).
But the reputation system rewards people who answer those questions. (The obsessive answer writers I complain about the most in Stack Overflow chat often have 10x or more my reputation.)
Curators have had a goal of closing bad new questions quickly (https://meta.stackoverflow.com/questions/260263), trying to beat the answer to the punch. But answer-writers get a grace period, and can fill in a stub answer and edit it later; and they can act unilaterally while curators usually have to come to a consensus.
I'm sure mods well and truly believe that what they are doing is for the greater good, and I can even believe that the bulk of questions that are closed really are dupes etc. But there are a lot of babies getting thrown out with the bathwater even so, and the net result is that SO is less useful overall.
Not all questions which seem duplicate to a non-expert are actual duplicates. An example question which took 2 lines to answer, then a page of text in that answer to restore the question after being closed as a duplicate: https://stackoverflow.com/q/65025858/
It requires either:
* Three votes (it used to be 5) from community members with the close vote privilege (awarded at 3000 reputation)
* Unilateral closure by a moderator (there are currently 24 of these: https://stackoverflow.com/users?tab=moderators - compare to 29 million user accounts: https://data.stackexchange.com/stackoverflow/query/1877958/c... )
* Unilateral closure as a duplicate by a user with the close vote privilege who also has a gold badge for one of the tags originally used on the question (https://meta.stackoverflow.com/questions/254589)
The thresholds are deliberately fairly low, mainly because closure of new bad questions must happen promptly for the site to work as intended (https://meta.stackoverflow.com/questions/260263). This is frankly a major fault in the site design; but the new Staging Ground feature (https://meta.stackoverflow.com/questions/430404) helps a lot, on the occasions when the site software actually decides to use it.
However, "closing" content "that's viewed a lot" (this basically only ever means old questions; new questions rarely ever get a lot of views, regardless of quality, unless it's from spambots - see https://meta.stackoverflow.com/questions/431084) is emphatically not wrong. We close old, popular questions all the time, because they don't currently meet site standards (usually, because they are no longer deemed on topic). This is at least partly to discourage new questions along the same lines; but the primary effect of closing a question is to prevent answers from being contributed. These old questions generally wouldn't need new answers (although edits to existing answers may be helpful - and are not blocked) even if they were still considered suitable.
If AI changes things, then one should ask why the individual was contributing when Stack Overflow Inc was the business reaping the financial rewards of community contributions.
It's common for those to get shouted down based on some policy or other bureaucratic nonsense by those who have no idea what the question is actually about. The problem could be that many of those who don't do, moderate. It attracts different sorts of people than those that are actually working with the things being discussed.
1. First and foremost, it's not a democracy if your turnout is too low. The 2024 election had voter turnout of 2%, which would be a catastrophic turnout in any real democracy. Either too few knew about the election, too few thought their vote mattered, or too few had any idea how to choose between the options. Any of these reasons requires immediate and pressing attention if a democracy is to be called that.
2. Never having re-elections makes it useless. For it to be a truly democratic process there absolutely needs to be a way to withdraw consent from a moderator who behaves differently than expected. So yeah, a no confidence vote would be an option, or better yet regular elections to hold a position.
I'm afraid that the largest problem is that democracy is really just a bad fit for this kind of site. By its nature the only people who are likely to vote in this type of election are a small dedicated core, not the enormous number of users that the site actually serves. A small core of contributors to a community resource invariably seems to develop a sense of "us against the world"—the thin blue line of police lore isn't an isolated thing, it's what happens when people view themselves as lone defenders of something they care about. And just like with police, that can result in a toxic culture that begins to actively degrade the plebian outgroup that they started out serving.
I don't have a better answer, but I don't believe democracy is a good fit for moderators on the scale of community that Stack Overflow operates. It's too big to have good turnout, and the problems caused by bad turnout have become catastrophic.
> For it to be a truly democratic process there absolutely needs to be a way to withdraw consent from a moderator who behaves differently than expected.
This is a strange requirement to me. Do parliamentary democracies have this feature? I don't think so. And they are "truly democratic" in my book. FYI: I have been part of SO.com for about 15 years. I am regular on both sides of the Q&A. Never once have I felt compelled to vote in any election on SO.com. The site admin is totally uninteresting to me.
I can't speak for all parliamentary elections, but in the UK MPs must be put to the vote at least once every 5 years, and in practice elections are called more frequently than that. I'm unsure what a system that elects for life but is still a democracy would look like: do you have ideas in mind?
Notwithstanding everything else I said above about how "moderation" is actually almost completely irrelevant here, and the overwhelming majority of what people call "moderation" is in fact curation done by community members in more or less a direct democracy:
We have elections annually (https://stackoverflow.com/election), and so does each Stack Exchange site generally. Moderators generally must voluntarily step down barring a major problem; but this was carefully considered at the start (https://meta.stackexchange.com/questions/984).
Because I absolutely will not agree that other people should get to change what Stack Overflow is, simply because they think it should work like the other sites it was explicitly intended to provide an alternative to.
I'm trying my hardest here to be courteous and to consider all sides: the fact that the software doesn't work optimally for our goals; the fact that the site owners have unaligned interests (corporate ones around ad revenue and site traffic); the fact that key parts of the site software were poorly designed at the start and not properly re-evaluated and fixed (in particular, the reputation system, which saw only a passing attempt to invite meta-discussion and then no corresponding change); the fact that the site's UI affords misuse by looking too much like a discussion forum (compare and contrast Wikipedia: there's no sense that anyone is replying to anyone else except on the Talk and other meta pages, and the edit form is hidden behind a link).
For what it's worth, alternatives exist, and I prefer them. In particular, I use Codidact (https://www.codidact.com) and I consider that its design has fixed many problems with the Stack Exchange network. But fundamentally, these kinds of Q&A sites are meant to work a certain way in the main Q&A space (although Codidact opens up the possibility of parallel related spaces, not just meta). They are fundamentally and crucially not a place to just ask something because it's on your mind (or with the specific intent of getting out of a bind), without heed to existing questions, and hope that someone addresses you personally. That's how traditional forums work, and ultimately the cause of all the things that made experts fed up with them and motivated to try something new in 2008.
I've written a lot ITT because there are a lot of misconceptions about Stack Overflow out there, and many of them are quite popular; and because the site itself is not very good at presenting the needed correct information.
I'll just refer back to the key relevant part of my initial post:
> A small core of contributors to a community resource invariably seems to develop a sense of "us against the world"—the thin blue line of police lore isn't an isolated thing, it's what happens when people view themselves as lone defenders of something they care about. And just like with police, that can result in a toxic culture that begins to actively degrade the plebian outgroup that they started out serving.
In particular: we have always had what could broadly be called a code of conduct; it's become more refined and more like official codes of conduct over the years, for better or worse. But overall, over time, we've become much better at removing actually abusive, profane etc. comments, and editing off-handed details in questions to avoid giving needless offense. (By the way: a quite large fraction of curse words and insults come from new users who are upset at the realization that questions are subject to quality standards, or who take downvotes personally when we intend it purely as content rating.)
When I say that I don't understand, it's because you describe "in-group out-group aggression and defensiveness" and I don't see it that way. I'm not trying to protect other meta regulars. I'm trying to help people integrate by explaining to them how we want them to approach the site instead.
But it's impossible to do that without first informing people that their current approach is wrong, and trying to explain patiently why it's wrong.
> it's what happens when people view themselves as lone defenders of something they care about.
Because we actually, objectively are.
And what's wrong with that?
Why shouldn't we be able to have this thing?
And why should it be considered an invalid thing when e.g. Wikipedia is not?
If 29 million people want to use the "anyone can edit" property of Wikipedia to edit https://en.wikipedia.org/wiki/Dog and ask whether Rover's condition is serious enough to require veterinary attention, does that invalidate Wikipedia's model?
> that begins to actively degrade the plebian outgroup that they started out serving.
Stack Overflow started in a closed beta and was marketed from the start as being for people with a certain level of cluefulness. We had to argue among ourselves to get everyone to accept that a) easy questions are not only fine, but often the most valuable and b) the thing that experts tend to hate about beginner questions is not the fact that they're beginner-level; it's literally every other consequence of a beginner asking them.
And acceptance of that is still not complete; sometimes long-standing members get yelled at on meta for trying to close good, easy questions because they're easy. And they, too, are acting against consensus, and against Stack Overflow's vision. (They're just, you know, nowhere near as troublesome overall as the long-standing members who don't care about policy and just try to answer as many questions as they can figure out an answer to.)
Stack Overflow was never intended to provide the kind of "service" that most newcomers (including newly arrived experts hoping to answer questions) expect. It was instead intended to show people that there's another way, that's fundamentally different from the traditional forum experience.
Review comes after the closure. This is the explicit and intentional design of the system. In other posts, I cited this (https://meta.stackoverflow.com/questions/260263); not sure how that escaped from that post.
With the usual model, I can just negotiate directly with management and they can tell me yes or no, and we can make a contract (or be a bit vaguer and make promises).
With worker democracy, there's no one I can negotiate with that can tell me anything definitive.
Well... obviously :-)
> Second, the voters may not have been qualified; they did not use the site enough to be able to or care to select one moderator over another.
This is exactly the attitude I'm talking about. I think the dedicated core actually does believe this: that nothing is broken, it's okay that the outgroup doesn't vote because they'd just ruin things if they did.
But it often isn't, they just didn't spend enough time to see nuance.
And neither do they see that even if _they_ understand that the question linked to is the same thing, there is no way the asker can understand what the similarity is from their knowledge point of view (or why the linked duplicate question is the same question).
dang doesn't go and delete all the infinite failed submissions to HN, after all.
... Okay, I want to walk back something I said in some other comments here. There is definitely a class of SO questions that get closed as duplicates inappropriately. I tend to forget about the first of the questions because it's not generally a suitable dupe target when it's used: it's a meta question, explaining how to fix your question, rather than actually answering it. But, as you might infer, that means your question should still be closed - it lacks debugging details.
I fought against this trend on meta: https://meta.stackoverflow.com/questions/426205 . Unfortunately, there's another incentive misalignment here: dupe-hammering the question allows users with a gold badge to act more quickly on questions that don't meet site standards but are likely to attract a quick answer that interferes with keeping the site clean.
The second one... honestly probably isn't the best version of the question, but it's attracted good answers and become "canonical". The problem is that thinking in terms of "variable variables" isn't necessarily the right way to think about the problem (dynamically modifying namespaces; or rather, the fact that Python's namespaces are reflected as objects that can in most cases be modified meaningfully) - but it does map pretty well to how a beginner would typically think about the problem. It just tends to overlap with other reasonable questions in a messy way.
On Codidact, I've attempted to address the problem space more proactively, but I think I didn't complete the project I had in mind.
> How much traffic do the questions that get duped to something bring? Especially the (currently) 410 questions linked to the Java NPE question. You get the couple of FGITW answers on it and the answer is over there, and closed to keep more people from trying to answer it (I hope the dup hammer is helping)... but now it's a closed question with 0 score, 100 views after a year... and five answers (one of which was accepted)... and no one will ever find it.
That was in 2014.
---
There are some misaligned incentives. There are probably people who dup vote to try to boost their reputation for some reason.
The problem (as I saw it) was that the tools of moderation and curation had too much friction and limits placed on them.
As the number of questions grew faster than the people who would curate them did, and the tools to curate them were diminished... you've got the problem of "there are two tools to curate and moderate left. One is to close the question. The other is to be a jerk to try to disincentivize the person from doing that again." I wrote about the second bit... a few years ago. Rudeness – the moderation tool of last resort -- https://shagie.net/2016/09/16/rudeness-the-moderation-tool-o...
Things like making it harder to not see low quality questions, or close them, or delete them...
> Thus rudeness and the attempt to drive an individual away because other moderation tools have run out or are ineffective. Rudeness is the moderation tool of last resort. When one sees the umteenth “how do I draw a pyramid with *” in the first week of classes on a programming site – how does one make it go away when the moderation tools have been fully exhausted? Be rude and hope that the next person seeing it won’t post the umteenth+1 one.
With respect to Stack Overflow, I believe that they've exhausted the people capable of moderating without rudeness, and it is now employees trying to moderate the core group rather than the core group being empowered to moderate the site. Eventually, there will be no one left in the core group.
Other sites with a narrower focus (e.g. GitHub Discussions) are better able to handle more focused questions and smaller user bases.
Because we're trying to build a searchable reference, such that if you try to look for an existing question, you a) find it; b) find the right question; c) find the best possible version of that question; d) can readily tell that you found what you want.
And because we are explicitly not trying to build a discussion forum, social media, "HN but specifically for programming questions", or anything else like that.
You might as well ask: why delete newly created pages on Wikipedia, or revert edits to existing pages?
> But it often isn't, they just didn't spend enough time to see nuance.
As a gold badge holder (for Python and a few other things), I see this complaint constantly. It is without merit ~90% of the time. The simple fact is that the "nuance" seen by the person asking the question is just not relevant to us, because the point of the site is not to give you a personalized answer, but to build a reference where the questions are useful to everyone. This entails collecting useful answers together so that people with fundamentally the same question can all find them, instead of it depending on how lucky their search engine of choice is feeling today.
The meta site has historically been flooded with people trying to reopen blatant duplicates based on trivial distinctions, at the level of "no, I want to get the Nth item of a list, not a tuple". That isn't a direct quote, but it's not an exaggeration either. I wish it were.
We do make mistakes, in part because there's pressure to act quickly. It's much harder to keep the site clean when answers get posted where they shouldn't be. Closing questions prevents answers from coming in.
> there is no way the asker can understand what the similarity is from their knowledge point of view (or why the linked duplicate question is the same question).
I try to leave a comment to explain the connection when it isn't obvious. (Another common thing that happens is that the problem someone wants to solve involves an obvious two- or three-step procedure, and each step is a matter of fundamental technique that's already been explained countless times.) But overall, it isn't our goal to teach. We answer very simple questions, and very difficult questions; but we aren't designed to teach. Sometimes it's hard to ask a simple question, because you have to figure out what the question is first. It's unfortunate that people who need the question answered often don't have that skill. But if we have a high quality version of that question already, we can direct people there.
Sometimes the linked duplicate isn't the best choice. You can help by finding and promoting a better choice - on the meta site and in the chat rooms. You can also help by editing common duplicate targets - both questions and answers - so that it becomes more clear to people who would actually have the question, that they're in the right place (and so that the information in answers is more readily applicable to them).
This is a strawman. Marking two different questions as duplicates of each other has nothing to do with a personalized answer, and answering both would absolutely be useful to everyone because a subset of visitors will look for answers to one question, and another subset will be looking for answers to the other question.
To emphasize the difference: Personalized answers would be about having a single question and giving different answers to different audiences. This is not at all the same as having two different _questions_.
What you're missing: when a question is closed as a duplicate, the link to the duplicate target is automatically put at the top; furthermore, if there are no answers to the current question, logged-out users are automatically redirected to the target.
The goal of closing duplicates promptly is to prevent them from being answered and enable that redirect. As a result, people who search for the question and find a duplicate, actually find the target instead.
It's important here to keep in mind that the site's own search doesn't work very well, and external search doesn't understand the site's voting system. It happens all the time that poorly asked, hard-to-understand versions of a question nevertheless accidentally have better SEO. I know this because of years of experience trying to use external search to find a duplicate target for the N+1th iteration of the same basic question.
It is, in the common case, about personalized answers when people reject duplicates - because objectively the answers on the target answer their question and the OP is generally either refusing to accept this fact, refusing to accept that closing duplicates is part of our policy, or else is struggling to connect the answer to the question because of a failure to do the expected investigative work first (https://meta.stackoverflow.com/questions/261592).
Why would you want to prevent answers to a question, just because another unrelated question exists? Remember that the whole thread is not about actual duplicates, but about unrelated questions falsely marked as duplicates.
> ... because objectively the answers on the target answer their question ... > ... because of a failure to do the expected investigative work first ...
Almost everybody describing their experience with duplicates in this comment section tells the story of questions for which other questions have been found, linked from the supposedly-duplicate question, and described why the answers to that other question do NOT answer their own question.
The expected investigative work HAS been done; they explained why the other question is NOT a duplicate. The key point is that all of this has been ignored by the person closing the question.
Here, for reference, is the entire sentence which kicked off the subthread where you objected to what I was saying:
> It is without merit ~90% of the time. The simple fact is that the "nuance" seen by the person asking the question is just not relevant to us, because the point of the site is not to give you a personalized answer, but to build a reference where the questions are useful to everyone.
In other words: I am defending "preventing answers to the question" for the exact reason that it probably actually really is a duplicate, according to how we view duplicates. As a reminder, this is in terms of what future users of the site will find the most useful. It is not simply in terms of what the question author thinks.
And in my years-long experience seeing appeals, in a large majority of cases it really is a duplicate; it really is clearly a duplicate; and the only apparent reason the OP is objecting is because it takes additional effort to adapt the answers to the exact situation motivating the original question. And I absolutely have seen this sort of "effort" boil down to things like a need to rename the variables instead of just literally copying and pasting the code. Quite often.
> Almost everybody describing their experience with duplicates in this comment section tells the story of questions for which other questions have been found, linked from the supposedly-duplicate question, and described why the answers to that other question do NOT answer their own question.
No, they do not. They describe the experience of believing that the other question is different. They don't even mention the answers on the other question. And there is nowhere near enough detail in the description to evaluate the reasoning out of context.
This is, as I described in other comments, why there is a meta site.
And this is HN. The average result elsewhere on the Internet has been worse.
Super moderators are elected, but not your regular "moderators". On Stack Overflow, regular folks who have enough karma act as moderators and can cast votes, or initiate voting on a moderation action. Enough votes and the action happens.
The elected moderators aren't generally the problem; it's that anyone with a bit of karma can go power tripping, and if you get enough of those people on an ever-growing platform, they reach a critical mass that can stifle anything.
So yes, some moderators are elected, and yes moderation is very democratic.
The overwhelming majority of the actions people complain about in this context (never mind that they don't understand the purpose of those actions or the underlying objectives) are not performed by moderators. They are curation actions taken by members of the community.
The rights to do so are awarded based on reputation, in a very poorly thought out and fundamentally broken incentive system; but there are far more people involved than the moderators. You can query by reputation at https://data.stackexchange.com/stackoverflow/query/1834631/c... : there are about 29 million total user accounts, 3.3 million which may upvote, 1.1 million which may downvote, 150 thousand which may unilaterally edit posts, 100 thousand which may vote to close questions, 28 thousand which may vote to soft-delete posts (and view soft-deleted posts), 9300 with access to internal site analytics...
and twenty-four moderators (https://stackoverflow.com/users?tab=moderators). Who are not the highest-reputation users. (I have more reputation than over half of them, and I frequently complain about users with over ten times my reputation.)
For a "user-run" site it was pretty advanced at the time - you could choose your level to view at (5 was quick summary of the highlights, -1 if you wanted flamewars about NetBSD), stories were curated enough to prevent slop (at least at the beginning), and metamoderating removed the biggest abuses.
If you ask a question on Stack Overflow and it gets closed, you are generally expected to edit it to fix the identified problem and submit it for re-evaluation. It gets put in a queue that other users can review; and everyone with close-vote privileges also has reopen-vote privileges, and can come along randomly and evaluate the question anew.
If you believe the community has misunderstood something about the question or has misapplied policy, you can ask about it on https://meta.stackoverflow.com . However, when you come to the meta site, you are generally expected to have a basic understanding of what the policy is and what our goals are (hint: not helping you, personally, make your code work), and to accept that you may have misunderstood something. And you should be prepared for the fact that voting works differently on meta (https://stackoverflow.com/help/whats-meta).
People who vote to close your question (or downvote it) are explicitly not required to explain this (again for well considered reasons, largely around the risk of harassment or abuse: https://meta.stackoverflow.com/questions/357436). But usually, if the standard close-reason advice won't be obvious, someone will try to explain. If they think the question is unclear, they'll try to say specifically why they are confused; if it seems to lack focus, they'll highlight the separate problems you're asking about or explain what seems irrelevant; if it "needs debugging details" then they'll explain how your code sample falls short of the https://stackoverflow.com/help/minimal-reproducible-example standard.
If you need an explanation and don't get one, you can again ask on meta. Despite the downvoting, if you're polite and understanding (i.e. don't come in with the mindset that we must have made a mistake or are doing something wrong by having a site that works differently from other sites), we'll be polite and sympathetic, and try to explain as best we can.
The original idea of SO was building a knowledge repository, and that meant no duplication and endlessly pruning it to keep it useful and up to date (which pretty much failed until recently, when it's probably far too late) - this core tenet is something the moderators take seriously, and something people using the site as questioners (not searchers) absolutely hate.
You can see they are trying to experiment (again, probably too late) with how to make question asking easier, more friendly, etc. - but that sort of cuts against the core original goals of SO, and that's why the mods and the users always seemed to be in tension.
This is not true as I recall. On Joel and Jeff's podcast, Joel in particular was in favour of having lots of variants of the same question answered repeatedly. His rationale was that if people didn't find the golden original question, there was a reason for that (e.g. it's not a real duplicate, or it's a different frame of thinking about the problem shared by other people), and adding the supposed duplicate would mean that other people who search for it - and would similarly fail to find the golden original - would land on the supposed duplicate. Net win.
But this was in tension with cheap karma farmers. SO was structured as a points economy, and anything that rewards points motivates some people to play the game of collecting them. A cheap way of farming points is to ask trivial questions and then answer them yourself, or to participate in an implicit network of people asking and answering trivial questions. How do you cut that out? Have canonical versions of the trivial questions, redirect people to them while they're asking, and incentivize deduplication.
https://blog.codinghorror.com/introducing-stackoverflow-com/
> Stackoverflow is sort of like the anti-experts-exchange (minus the nausea-inducing sleaze and quasi-legal search engine gaming) meets wikipedia meets programming reddit. It is by programmers, for programmers, with the ultimate intent of collectively increasing the sum total of good programming knowledge in the world. No matter what programming language you use, or what operating system you call home. Better programming is our goal.
The emphasis on "good" is in the original.
https://www.joelonsoftware.com/2008/09/15/stack-overflow-lau...
> What kind of questions are appropriate? Well, thanks to the tagging system, we can be rather broad with that. As long as questions are appropriately tagged, I think it’s okay to be off topic as long as what you’re asking about is of interest to people who make software. But it does have to be a question. Stack Overflow isn’t a good place for imponderables, or public service announcements, or vague complaints, or storytelling.
---
And then, go to https://stackoverflow.com/questions/1003841/how-do-i-move-th...
I would draw your attention to its history and the original version: https://stackoverflow.com/revisions/1003841/1
and the action taken on September 17th, 2011. https://stackoverflow.com/posts/1003841/revisions
As I said, I strongly disagree with the idea that my questions were unfit for StackOverflow. Every single time their reason was "duplication", it was not AT ALL a duplicate. Two different questions (sometimes obviously very different) with two different answers. Hell, they closed some of those as duplicate even though I posted both the question and the answer, and the answer was completely different from the one they were pointing to.
This is not "I want to ask whatever questions I want". It's bad moderation.
Please feel free to show concrete examples, and I'd be happy to try to explain the reasoning.
Say I ask "How to do X in settings.gradle?" and it is closed as a duplicate to "How to do X in build.gradle?". I know how to do X in build.gradle, I know it is not the same as doing X in settings.gradle (even if it's is twice the same X), and I know how to do X in settings.gradle (because I just had a need and found a solution without the help of StackOverflow). So I post an answer right away.
Can you explain the reasoning, or do you need it more concrete because you're absolutely sure you know better?
Because what's clear to me is that those who closed it as a duplicate (and it required multiple votes, so there was more than one of them) have no clue how it works. They obviously stopped at "X == X, it's a duplicate".
At some point I got into the habit of adding notes like "Note: it is not a duplicate of A because [...] and it is not a duplicate of B because [...]", which honestly made the question worse for those who actually understand it (just for the sake of pleasing those who would close it as duplicate). Spoiler: they closed it as a duplicate of A.
But stay happy in your world where you know everything, I'm not coming back anyway.
I'm not familiar with Gradle (I think that's a Java build system?), but if I saw what actually happened, I could probably understand well enough.
If the moderation was effective and limited, people would ultimately be fine with it.
What people don't like is having a question closed as "duplicate" even though what it supposedly duplicates is very different, or any of the other myriad complaints.
The same story goes for Wikipedia. Moderators have an agenda, act in frequently erroneous ways, and are actively hostile to criticism.
1) People want to ask homework questions (e.g. on Biology, Chemistry, etc.). I understand why that is not allowed, but that doesn't change people's desire to 'just have an answer, now!'. I guess that AI could really take over this niche.
2) Others want to ask very open-ended 'discussion' questions that require back-and-forth to get to the answer, which may be on the edge of known research.
While I do understand why people get frustrated about these things, as you point out - this is not what SO (and SEs) are 'for'.
What if moderators had to actually maintain karma from recently answering questions, or else lose their mod privileges?
Wouldn't that be a refreshing change? You'd have to actually work to be a mod.
...
It shouldn't be controversial. That mods currently make visitors unwelcome is a disgrace. :(
That SO incentivizes that behavior is ridiculous.
Good. That's the site working as designed and intended.
> What was becoming more common was that I would have a question similar to an existing unanswered one.
Then you should improve the existing unanswered question instead, and/or draw attention to it (https://meta.stackoverflow.com/questions/265874 ; https://meta.stackoverflow.com/questions/266338). Or, yes, answer it if you can. Thank you for doing so.
> Or that my question would never receive an answer (presumably because my questions were becoming more tricky/niche).
That's a big presumption. I got an answer to https://stackoverflow.com/questions/75677825/ within hours.
> for some reason, a few years ago my questions started being closed for no apparent reason other than "those who reviewed it have no clue and think that it is invalid"
This is absolutely not what happened. First off, when your question is closed, you get a banner at the top of your question indicating which of the few standard close reasons was chosen. The wording isn't always a great fit (especially in the cases where people voted for more than one close reason - please keep in mind that we neither write this explanation nor get to choose the text; it's pulled from a database following simple mapping rules, and even moderators have only very indirect influence over that database) but it does normally point you in the right direction.
Second, "I don't know the answer" is not a valid close reason. People constantly accuse (on the meta site and elsewhere) that someone else's close vote was motivated by this; there's never any real way to evidence that, and this kind of accusation is in fact what we consider toxic behaviour (an assumption of bad faith).
> Many times they closed even though I had posted both the question and the answer at the same time (as a way to help others)!
The fact that you provide your own answer weighs exactly zero in the calculation of closing a question. It must meet the site standards. Part of the purpose of a question is to index the information in the answer - so no matter how brilliant your explanation of the underlying problem might be, your exposition of the problem is a limiting factor.
> The first few times, I fought to get my question reopened and guess what? They all got a few tens of votes in the following year.
The community does make mistakes, in both directions. The meta site exists for a reason.
But part of "fighting to get a question reopened" is editing it. Changes you might think are trivial might be crucial according to our standards. Some questions fundamentally can't be fixed; but when they can, closing a question signals that the OP's perspective is needed to fix the problem, no matter how minor. If we could fix it (without worrying about trampling on your authorial intent), we would.
> Still, that toxic moderation hasn't changed. If anything, it has gotten worse.
It's not moderation, but curation. It's overwhelmingly done by a community of volunteers - not by the two dozen or so moderators (also volunteers) looking over an accumulation of literally millions of users and questions.
And it isn't "toxic". Overwhelmingly, people aren't doing it out of any kind of vendetta or a desire to cause you or anyone else a problem. They're doing it to uphold a standard (https://meta.stackoverflow.com/questions/417476/) designed (really, developed over many years by community discussion on the meta site) to accomplish particular goals (https://stackoverflow.com/tour ; https://meta.stackoverflow.com/questions/254770).
> I am usually better off opening an issue or discussion directly with the upstream project
If it's something that makes sense to handle this way, it probably doesn't also make sense in the Q&A format. We can't do anything about your bug report.
> I heard people mentioning that LLMs were hurting StackOverflow badly.
A lot of people think so because the volume of questions has dropped off dramatically, and there's good evidence that people will ask an LLM instead of asking on Stack Overflow.
But this is not at all "hurting Stack Overflow", unless you're a staff member at the company and you specifically worry about the effect of this decline on ad revenue.
If asking an LLM - trained on millions of existing Stack Overflow questions, along with the rest of the Internet - gives you an actually working answer (and you're either in a position where you can deal with AI hallucinations, or are lucky enough not to experience one), then that is, almost certainly, not a question that helps improve the existing resource that is Stack Overflow. It's most likely a duplicate or near-duplicate.
Duplicate questions on Stack Overflow are not inherently bad; sometimes rephrasing a question helps by providing a "signpost" so that people who think about a problem in a different way can realize that it's still the same problem, and there's still the same fundamental question to answer about it. But we want everyone who has that question to find the same collection of answers; and we want that collection of answers to be high quality, not redundant, and categorized under a high quality version of the question. That way, when you use a search engine and find Stack Overflow Q&A, you get the best possible result, as quickly as possible.
Nowadays, there are about three times as many publicly available questions on Stack Overflow as there are articles on Wikipedia. Considering that the scope of Stack Overflow is "practical questions about programming", while the scope of Wikipedia is "literally any noteworthy real-world object or phenomenon", that's clearly too many already. So why worry about the influx of new questions slowing down?
If it is from 2010 and was a relevant question or answer then, but has since become irrelevant or even wrong because the framework or language has moved on, I actually support this kind of clean-up.
There are a lot of best practices that just don't apply anymore that far down the line. Even simple things like "what's the best way to use a variable inside a string in Python" would have an outdated (and, to most users, wrong) answer if it were from 2010.
I don't understand the idea. Are you also in favour of deleting blog posts that are older than a couple years? There is a date next to the question...
Additionally, we generally do not close old questions simply because they're "outdated", e.g. refer to deprecated libraries etc. We recognize that people are often stuck maintaining unsupported legacy systems, effectively indefinitely. We sometimes close questions because they refer to services (especially web APIs) that are no longer available. But overwhelmingly, when old popular questions get closed, it's because they're deemed to be no longer on topic for the site. Since a lot of people will see the question, we don't want them to get the wrong idea about what's topical.
And, of course, it makes perfect sense to downvote things that used to be correct but are now incorrect. Practically speaking, this doesn't happen nearly enough; upvotes have a kind of inertia, and wrong answers are often evaluated by people who don't know they're wrong.
By the way: about 89% of up/downvotes ever cast on Stack Overflow are up (https://data.stackexchange.com/stackoverflow/query/492368/to...).
I never said delete anything, but deprecation warnings, closure, and subsequent SEO down-ranking of formerly correct but now incorrect/irrelevant answers would be a huge improvement to StackOverflow. Somebody may need to know the best way to handle permissions in Java on Android 6.0, but it absolutely should not be a top question or answer in 2025 unless somebody is specifically looking for it.
In retrospect it is a case study of a particular enshittification scenario: "benign neglect". Back when they published a data dump I had a project on my speculative list to clean up their database, take only the best answers, etc. For Python, the numerous Python 2 examples

    print "something"

would get rewritten to Python 3

    print("something")

basically doing the maintenance work they weren't doing. Personally I find their idea of what is a valid question to ask offensive. If you're coding in Java or JavaScript, for example, the questions of "Guava vs. Spring" or "Vue vs. React" are probably more consequential decisions for your app than almost anything else, but questions like that are forbidden.

Over time we found that hardly anyone asking questions could achieve the kind of "good subjectivity" that we wanted. Questions like this attract flame wars (which are especially obnoxious in a format with answer posts and non-threaded comments) and advertising for alternatives, add-ons, etc., which results in a completely derailed discussion in a place that isn't supposed to have a discussion at all.
If you want to ask "what factors should I take into consideration when choosing..." then I would agree that can in principle fit on a Q&A site. But open-endedness again makes it hard to choose the best answers and ensure they float to the top.
The general principles are much the same at Codidact Software (https://software.codidact.com), but the scope is considerably wider than Stack Overflow's (https://software.codidact.com/posts/search?search=category%3...). You might have better luck with that kind of question there.
Why spend your own time and effort adding content to someone else's platform anyway? It's always a much better idea to write an article on your own website than a Stack Overflow answer. Stack Overflow just takes a little less effort, but that doesn't matter when your effort is likely to be invalidated anyway.
*Later he took over the Flask project and I was still bitter so I stopped using that too.
Another old problem was notable users. There was a guy famous for his presence and for answering tons of questions (I forget his name). He was actually pretty good, but... he was not an expert in all the areas he'd participate in, and his answers would sometimes win because he was articulate, not because they were the best.
https://openai.com/index/api-partnership-with-stack-overflow...
Which is also the reason for the ban on GenAI content (https://meta.stackoverflow.com/questions/421831).
I guess nobody could disagree that it benefits everyone if the site is useful and the content is factually correct, up to date, and follows the Q&A format.
I admit I haven't followed what happens there closely for some time, but here is an older example post: https://meta.stackexchange.com/questions/389834/statement-fr...
You already seem aware of the existence of the meta site - which contains reams of useful information and prior discussion and explanation of policy - so I assume you are simply complaining about others disagreeing with you, rather than genuinely wondering.
https://learn.microsoft.com/en-us/dotnet/api/system.servicem...
https://learn.microsoft.com/en-us/dotnet/api/system.servicem...
Could you use these to cancel the stream?
Is there some kind of `IClosableStream` you can implement? That’d give you a `Closed` method, which you can then use to let either your server or stream know that it’s time to stop reading (or the stream reached EOF) - even if it’s done with a flag that’s set when the client disconnects.
Maybe there’s already an optional `Close` method you’re not overriding?
On the client side, randomStream.Close will get called when it's disposed.
On the server side, I'm not sure what I could put into an overridden Close that wouldn't just be base.Close()? RandomStream itself doesn't own any resources that need cleaning up.
I could force WCF to use Session mode, and then add flow-control through a side-channel, so other messages could prepare the stream to internally buffer and then rewrite in requested chunks?
But at that point I might as well just use an appropriately sized GetRandomBlock(ValueWithSequence[]), chunk requests that way, and abandon using a stream for this altogether.
I'll experiment with that approach to find the best buffer size, and to see whether streaming the buffer actually helps vs. just having it as the message and letting WCF control the sending.
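For my own notes, the chunked version would just be a plain request/response contract, something like this sketch (the signature is a guess at what I'd end up with, not what's in the repo):

    // Sketch of the non-streaming alternative: each call returns one finite block, so WCF
    // treats it as an ordinary buffered message and there's nothing left for the server to
    // keep pumping after the client disappears.
    using System;
    using CoreWCF;   // System.ServiceModel on .NET Framework

    [ServiceContract]
    public interface IRandomBlockService
    {
        [OperationContract]
        byte[] GetRandomBlock(int blockSize);
    }

    public class RandomBlockService : IRandomBlockService
    {
        private static readonly Random Rng = new Random();

        public byte[] GetRandomBlock(int blockSize)
        {
            var buffer = new byte[blockSize];
            lock (Rng) Rng.NextBytes(buffer);   // Random isn't thread-safe
            return buffer;
        }
    }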
It's weird behaviour and I wouldn't have expected it either, since infinite length streams are pretty common.
There'd be some bookkeeping to keep track of the stream (so you can set its flag) and then replace it with a new one the next time a client connects, effectively making each stream single-use. But you seem to be optimizing for the rate of reads, not the number of opens and closes you can do, so that shouldn't be a blocker.
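Roughly what I mean by the flag, as a made-up sketch (the data generation is just a placeholder):

    // Sketch of a single-use stream: Read produces data until someone flips the flag
    // (say, from a handler that fires when the client disconnects), after which it reports
    // EOF so the copy loop stops pulling bytes.
    using System;
    using System.IO;

    public class SingleUseRandomStream : Stream
    {
        private readonly Random _rng = new Random();
        private volatile bool _clientDisconnected;

        public void MarkClientDisconnected() => _clientDisconnected = true;

        public override int Read(byte[] buffer, int offset, int count)
        {
            if (_clientDisconnected)
                return 0;   // end-of-stream: nothing left to send

            var chunk = new byte[count];
            _rng.NextBytes(chunk);
            Array.Copy(chunk, 0, buffer, offset, count);
            return count;
        }

        public override bool CanRead => true;
        public override bool CanSeek => false;
        public override bool CanWrite => false;
        public override long Length => throw new NotSupportedException();
        public override long Position
        {
            get => throw new NotSupportedException();
            set => throw new NotSupportedException();
        }
        public override void Flush() { }
        public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
        public override void SetLength(long value) => throw new NotSupportedException();
        public override void Write(byte[] buffer, int offset, int count) => throw new NotSupportedException();
    }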
Ask a question.
Ignore the trolls.
Get a question answered that would have taken you another day to solve.
Make money.
I barely use it anymore. GitHub issues are far more useful these days, but they lack a place to ask generic questions.
GitHub could strike a knockout blow here if they wanted to.
The right place for this kind of question is the GitHub Issue tracker for CoreWCF, where you'll find several related issues. You're not the first one to hit bugs in streaming mode, which is... fragile. Just turning HTTPS on or off can break it: https://github.com/CoreWCF/CoreWCF/issues/1391
Unbounded streaming is something WCF was not designed to support, and hence CoreWCF doesn't support it either. CancellationTokens, etc... might improve things in case of accidental disconnection of a client session, but in general WCF expects streamed responses to have a finite length and be explicitly terminated with an end-of-stream by the server.
E.g.: read the end of this section: https://learn.microsoft.com/en-us/dotnet/framework/wcf/featu...
"Streamed transfers end and the message is closed when the stream reaches the end of file (EOF). When sending a message (returning a value or invoking an operation), you can pass a FileStream and the WCF infrastructure subsequently pulls all the data from that stream until the stream has been completely read and reached EOF."
Fundamentally, WCF is a message-oriented RPC system. It has many issues with streaming: it has to be explicitly enabled, it is incompatible with many features, etc...
Just "read between the lines" in your stack trace! It mentions XML encoding in several places, but XML as used in RPC uses a "tree" structure which must be terminated with closing tags.
You might be able to shoehorn unbounded streams into CoreWCF by implementing custom channels and encodings, but at that point it isn't really WCF any more!
The bug is that CoreWCF really should give up and stop writing if the client is disconnected. Stepping through its code, it does look like it loses track of the CancellationToken at some point, probably riiiight here: https://github.com/CoreWCF/CoreWCF/blob/067f83b490332120f45e...
Note the comments surrounding it talking about things like "TODO: Verify that the cancellation token is honored for timeout" and "Since HTTP streams don't support timeouts, we can't just use TimeoutStream here." That sounds unfinished to me.
I can't be sure, but there seem to be some design flaws that also contribute to this bug. E.g.: CoreWCF uses synchronous writes to ASP.NET Core's pipeline, which is... also a buggy mess because it is so rarely used.
Then, somehow, it is still writing to a socket that is long gone, which is really odd. It really does feel like there's at least a DoS security bug here, either in ASP.NET when operating in sync mode or in CoreWCF. The issue is that malicious clients could easily request many large responses, disconnect, and the server would keep processing the requests for hours and hours.
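To make the expected behaviour concrete, the transport's pump ought to be doing something like this sketch - purely illustrative, not CoreWCF's actual code - checking the client-disconnect token between chunks:

    // Illustrative only: stop copying as soon as the disconnect token fires (for ASP.NET
    // Core that token would be HttpContext.RequestAborted), instead of draining an
    // infinite source stream forever.
    using System.IO;
    using System.Threading;
    using System.Threading.Tasks;

    internal static class StreamPump
    {
        public static async Task CopyUntilDisconnectAsync(Stream source, Stream destination, CancellationToken clientGone)
        {
            var buffer = new byte[81920];
            while (!clientGone.IsCancellationRequested)
            {
                int read = await source.ReadAsync(buffer, 0, buffer.Length, clientGone);
                if (read == 0)
                    break;   // the service signalled a normal end-of-stream
                await destination.WriteAsync(buffer, 0, read, clientGone);
            }
        }
    }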
Now at 200k+ rep I cannot stand it anymore. Whenever I ask a question that does not make me bend over backwards or discuss how my Python code looks in assembly, it gets downvoted almost immediately. Go and TypeScript are tags that bring frustration.
Now compare this with many other Stack Exchange sites.
A question about cooking or LaTeX gets nice, colorful answers, even when they are basic.
I get that they are one or two orders of magnitude less crowded, but this may mean that SO has passed its scale limit and is done.
(although I have no idea how active CoreWCF owners are w.r.t this)
This might also be a problem in the WCF client, which is maintained by others elsewhere in a different repo: https://github.com/dotnet/wcf for the NuGet package version.
But this might just be how WCF is designed. I'll try a version of this within .NET Framework, but even that might change depending on whether it's hosted via IIS or started via ASP.NET Core, and whether it uses the built-in System.ServiceModel or the NuGet version.
(You can probably tell I'm a bit frustrated with MS for making a bit of a mess in the way they hurried away from .NET Framework, especially with respect to WCF.)
It may not necessarily be a problem, but ideally, the less users have to care about gotchas and knowing exactly how to use the API, the better. There are some constraints to this, but chances are the documentation, at least, can be improved.
Plus, especially if there are not that many issues, it signals interest and active usage.
Interesting point. I'm going to see if they similarly struggle generating VBA code vs generating Visual Basic code.
I posted this iOS Swift Programming question today:
https://stackoverflow.com/questions/79612931/ios-custom-oper...
and was instantly downvoted and then voted to be closed. I do not think this question was worth downvoting or closing. There's no hits on the warning I was getting.
People on SO nowadays seem way too eager to downvote and close.
The outstanding vote to close is on the grounds of "needs details or clarity". Based on the comment that was left, I assume that the other party doesn't understand why you are asking a question, because it comes across that your motivation for asking is that you don't understand what to do, but it also comes across that the error message is already telling you exactly what to do.
If you have a different question - for example, why you should have to do the thing being suggested, or you don't understand how to do the thing being described - then you should edit the question to clarify that. With this clarification, it may be that your question is identified as a duplicate (of a more general question regarding why such "inherited conformance" must be restated in Swift, or what that means, etc.). I don't know Swift, so I can't advise there - I can only rely on my general understanding of programming languages in this general syntax family.
Regardless, the question would be improved by a proper minimal reproducible example (https://stackoverflow.com/help/minimal-reproducible-example). Minimal is a key component of this; please proactively try to remove irrelevant parts of the code example, so that you show only what directly causes the reported problem. This helps (https://meta.stackoverflow.com/questions/333952) other people understand and recognize the question - not just people who might answer it, but other people who have the same question who are looking for an existing Q&A.
> There's no hits on the warning I was getting.
I guess you mean that you tried searching for the warning and came up empty-handed. I'm not sure why this happened for you. If I try copying and pasting into a search engine - e.g. https://duckduckgo.com/?q=Class+must+restate+inherited+%27Cl... - I get seemingly useful results right off the top - including issues in the GitHub repository for Swift itself, such as one proposing an automatic "fix-it" for this error (I assume this concept is meaningful to Swift developers generally). I also see official Apple documentation at https://developer.apple.com/documentation/swift/sendable , several blog posts talking about the use of `@unchecked Sendable` with fully worked examples, etc.
> I do not think this question was worth downvoting or closing. People on SO nowadays seem way too eager to downvote and close.
Nobody ever thinks their own questions are worth downvoting or closure - they wouldn't ask if they did. But the standards aren't set by the OP; they're set by policy and surrounding discussion such as https://meta.stackoverflow.com/questions/417476 and https://meta.stackoverflow.com/questions/252677 and https://stackoverflow.com/help/closed-questions and https://stackoverflow.com/help/why-vote and https://meta.stackoverflow.com/questions/429808. And they're set this way because the site isn't about answering your question in the moment; it's about building up a searchable, useful repository of high quality questions and answers.
https://community.revenuecat.com/sdks-51/many-rc-related-war...
However, when I searched for it now (after seeing you were getting that Swift GitHub result), I got better results. No idea why I didn't get that before.
Thanks for the detailed answer. I appreciate it.
Seems like an issue with not closing resources properly. Looking at your server code, seems the Close and Dispose methods are not overridden. Try that?
It'll be calling base.Close(), and doing what else?
Just for the sake of it, I've now tried pasting this whole blog into Claude.
It has some strange suggestions, including many things that don't work, such as adding to the client:
// Important: Properly close the stream
randomStream.Close();
I read the Stream documentation ( https://learn.microsoft.com/en-us/dotnet/api/system.io.strea... ), and it points out:

> This method calls Dispose, specifying true to release all resources. You do not have to specifically call the Close method. Instead, ensure that every Stream object is properly disposed.
My stream is properly disposed.
Claude also sticks in things like:
// Simulate some work or add a small delay to prevent CPU spinning
Thread.Sleep(1);
I can't see how that would do anything to solve my issue. I suppose I should humour the machine, go full-vibe and try everything it suggests, and if I end up with a working solution go back from there, but I fear that would just leave me more confused about the underlying mechanisms here.

On the client side it not only rewrites my reading to read multiple times, but adds in:
await Task.Delay(500, cts.Token); // Small delay between reads
I didn't ask it to re-introduce the loop, and a 500ms delay between reads is horribly long for reading successive bytes from a stream.

The only thing that was interesting was creating a linked cancellation token source on the service to pass to the underlying stream, and cancelling it on server shutdown.
That's a useful thing to keep in mind to help with shutting down the services cleanly, but it doesn't actually address the issue for a server you want to keep running.
It also adds send/receive timeouts, and those are worth keeping in mind too, but they don't seem like a good mechanism for dealing with this issue. If anything, they would just mask the issue by having it write to the stream for the duration of the timeout period instead, which, if short, could cause a problem like this to go unnoticed until the server is actually under more load.
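For reference, the linked-token idea amounts to something like this sketch. IHostApplicationLifetime is the standard ASP.NET Core hosting abstraction; injecting it into the service, and a RandomStream constructor that takes a token, are assumptions here, and disposal of the linked source is elided:

    // Sketch only: combine the host's shutdown token with a per-request source, so the
    // stream stops when the server is shutting down (or whenever we cancel it explicitly).
    using System.IO;
    using System.Threading;
    using Microsoft.Extensions.Hosting;

    public class RandomService
    {
        private readonly IHostApplicationLifetime _lifetime;

        public RandomService(IHostApplicationLifetime lifetime) => _lifetime = lifetime;

        public Stream GetRandomStream()
        {
            // Cancelled when ApplicationStopping fires, or when Cancel() is called on it.
            var linked = CancellationTokenSource.CreateLinkedTokenSource(_lifetime.ApplicationStopping);
            return new RandomStream(linked.Token);   // hypothetical token-aware stream
        }
    }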
SO is trash nowadays.