> We should distinguish the person from the deed. We all know good people who do bad things
> They were just in situations where it was easier to do the bad thing than the good thing
I can't believe I just read that. What's the bar for a bad person if you haven't passed it at "it was simply easier to do the bad thing"?
In this case, it seems not owning up to the issues is the bad part. That's a choice they made. Actually, multiple choices at different times, it seems. If you keep choosing the easy path instead of the path that is right for those that depend on you, it's easier for me to just label you a bad person.
But there is a concern that goes beyond the "they" here. Even if "they" didn't exist and the whole narrative in the article were some LLM hallucination, we are still training ourselves in how we respond to whatever behavior we observe, and that shapes how we will act in the future.
If we take the easy path of labeling people as the root cause, that's the habit we are forging for ourselves. We miss the opportunity to hone our sense of nuance and critical thought about the wider context, which might be a better starting point for tackling the underlying issue.
Of course, naming and shaming is still there in the rhetorical toolbox, and everyone and their dog can use it, even when rage and despair are all that remain in control of one's mouth. Using it with appropriate parsimony, however, is not going to happen from mere reactive habits.
On my side-project todo list, I have an idea for a scientific service that overlays a "trust" network on the citation graph. Papers that uncritically cite other work containing well-known issues would get tagged as "potentially tainted". Authors and institutions that accumulate too many such sketchy works would be labeled likewise. Over time this would provide a useful additional signal beyond raw citation counts. You could also look for citation rings and tag those. I think it could be quite useful, but it requires a bit of work.
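The taint-propagation part of that idea is essentially a reachability problem on the reversed citation graph. Here's a minimal sketch under toy assumptions (the function name, graph shape, and data are all hypothetical; a real system would also have to judge whether each citation is "uncritical"):

```python
from collections import deque

def propagate_taint(cites, flagged):
    """cites maps paper -> list of papers it cites; flagged is the set of
    papers with well-known issues. Returns every paper that is flagged or
    (transitively) cites a flagged paper."""
    # Invert the citation graph: for each paper, who cites it?
    cited_by = {}
    for paper, refs in cites.items():
        for ref in refs:
            cited_by.setdefault(ref, []).append(paper)

    # Breadth-first search from the flagged papers along "cited by" edges.
    tainted = set(flagged)
    queue = deque(flagged)
    while queue:
        bad = queue.popleft()
        for citer in cited_by.get(bad, []):
            if citer not in tainted:
                tainted.add(citer)
                queue.append(citer)
    return tainted

# Toy example: B cites A, C cites B, D cites C, E cites nothing; A is known-bad.
cites = {"B": ["A"], "C": ["B"], "D": ["C"], "E": []}
print(sorted(propagate_taint(cites, {"A"})))  # ['A', 'B', 'C', 'D']
```

In practice you'd probably want a decaying score rather than a binary flag, so a paper three hops from a retraction isn't penalized as heavily as a direct uncritical citation.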
Still I'm skeptical about any sort of system trying to figure out 'trust'. There's too much on the line for researchers/students/... to the point where anything will eventually be gamed. Just too many people trying to get into the system (and getting in is the most important part).
Once something enters The Canon, it becomes “untouchable,” and no one wants to question it. Fairly classic human nature.
> "The most erroneous stories are those we think we know best -and therefore never scrutinize or question."
-Stephen Jay Gould
Made me think of the black spoon error, which was off by a factor of 10, where the author also said it didn't impact the main findings.
https://statmodeling.stat.columbia.edu/2024/12/13/how-a-simp...
The benefits we can get from collective works, including scientific endeavors, are indefinitely large, far exceeding what can be held in the head of any individual.
Incentives are simply irrelevant as far as the global social good is concerned.
And from the comments:
> From my experience in social science, including some experience in management studies specifically, researchers regularly believe things – and will even give policy advice based on those beliefs – that have not even been seriously tested, or have straight up been refuted.
Sometimes people rely on not even one non-replicable study: they invent a study and cite that! An example is the "Harvard Goal Study" that is often trotted out at self-review time at companies. The supposed study suggests that people who write down their goals are more likely to achieve them than people who do not. However, Harvard itself cannot find any record of such a study.
Straight-up replications are rare, but if a finding is real, other PIs will partially replicate and build upon it, typically as a smaller step in a related study. (E.g., a new finding about memory comes out, my field is emotion, I might do a new study looking at how emotion and your memory finding interact.)
If the effect is replicable, it will end up used in other studies (subject to randomness and the file drawer effect, anyway). But if an effect is rarely mentioned in the literature afterwards...run far, FAR away, and don't base your research off it.
A good advisor will be able to warn you off lost causes like this.
These probably have a bigger chance of being published, since you are providing a "novel" result instead of fighting the get-along culture (which, honestly, is present in the workplace as well). But ultimately they are harder to do (research-wise, though not politically), because they likely mean you have figured out an actual thing.
Not saying this is the "right" approach, but it might be a cheaper, more practical way to get a paper turned around.
Whether we can work this out properly in research is linked to whether we can work it out everywhere else. How many times have you seen people pat each other on the back despite lousy performance and no results? It's just easier to switch jobs in the private sector than in research, so you'll have more people there who aren't afraid to call out bad work, and, well, there's that profit that needs to pay your salary too.
But if you're going to quote the whole thing it seems easier to just say so rather than quoting it bit by bit interspersed with "King continues" and annotating each I with [King].
Talked about it years ago https://news.ycombinator.com/item?id=26125867
Others said they’d never seen it, so maybe it’s rare. But no one will tell you even if they do encounter it: it's a guaranteed career blackball.
I've also seen the resistance that results from trying to investigate or even correct an issue in a key result of a paper. Even before publication the barrier can be quite high (and I must admit that, since it wasn't my primary focus and my name wasn't on it, I did not push as hard as I could have).