> This doesn’t mean that the authors of that paper are bad people!
> We should distinguish the person from the deed. We all know good people who do bad things
> They were just in situations where it was easier to do the bad thing than the good thing
I can't believe I just read that. What's the bar for a bad person if you haven't passed it at "it was simply easier to do the bad thing"?
In this case, it seems not owning up to the issues is the bad part. That's a choice they made. Actually, multiple choices at different times, it seems. If you keep choosing the easy path instead of the path that is right for those that depend on you, it's easier for me to just label you a bad person.
But there is a concern here that goes beyond the "they". Even if "they" didn't exist and the whole narrative in the article were some LLM hallucination, we are still training ourselves in how we respond to the behavior we observe, and that shapes how we will act in the future.
If we take the easy path of labeling people as the root cause, that's the habit we are forging for ourselves. We miss the opportunity to hone our sense of nuance and to think critically about the wider context, which might be a better starting point for tackling the underlying issue.
Of course, name-and-shame is still in the rhetorical toolbox, and everyone and their dog can use it, even when rage and despair are all that's left in control of one's mouth. Using it with appropriate parsimony, however, is not going to come from mere reactive habit.
Nowadays high citation counts no longer mean what they used to. I've seen too many highly cited papers with issues that keep getting referenced, probably because people don't really read the sources anymore and just copy-paste the citations.
On my side-project todo list, I have an idea for a scientific service that overlays a "trust" network over the citation graph. Papers that uncritically cite other work with well-known issues would get tagged as "potentially tainted". Authors and institutions that accumulate too many such sketchy works would be labeled likewise. Over time this would provide a useful additional signal on top of raw citation numbers. You could also look for citation rings and tag them. I think that could be quite useful, but it requires a bit of work.
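A minimal sketch of how that overlay could work, assuming a toy in-memory citation graph; the paper IDs, the set of known-bad papers, and the propagation rule are all invented for illustration (and it skips the "uncritically" part, treating any citation of a flagged paper as taint):

```python
# Hypothetical sketch: propagate a "potentially tainted" label over a toy
# citation graph and list citation-ring candidates. All paper IDs and the
# set of known-bad papers below are made up for illustration.
import networkx as nx

# Directed edge (A, B) means "paper A cites paper B".
citations = nx.DiGraph()
citations.add_edges_from([
    ("A", "B"), ("A", "C"),
    ("B", "D"),              # B cites D, which has a documented issue
    ("C", "B"),
    ("E", "A"), ("A", "E"),  # A and E cite each other: a tiny "ring"
])

known_issues = {"D"}  # papers with well-known problems (retractions, errata, ...)

def tainted(graph, bad):
    """Return every paper that cites a bad paper, directly or transitively."""
    out = set()
    for paper in graph.nodes:
        if paper in bad:
            continue
        # descendants() = everything reachable by following citation edges
        if bad & nx.descendants(graph, paper):
            out.add(paper)
    return out

print("potentially tainted:", sorted(tainted(citations, known_issues)))
# Simple citation-ring candidates: directed cycles in the citation graph.
print("citation rings:", [c for c in nx.simple_cycles(citations) if len(c) > 1])
```

In practice you'd presumably want to weight this by whether a citation is critical or supportive of the flagged work, and by how central the flagged result is to the citing paper, rather than marking every downstream paper equally.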
Going to conferences and seeing researchers who've built careers on subpar (sometimes blatantly 'fake') work has made me increasingly wary of experts. Worst of all, lots of people just seem to go along with it.
Still, I'm skeptical about any sort of system trying to figure out 'trust'. There's too much on the line for researchers/students/... to the point where anything will eventually be gamed. There are just too many people trying to get into the system (and getting in is the most important part).
Social fame is fundamentally unscalable: there is only limited room on the stage, and even less in the few spotlights.
The benefits we can get from collective work, including scientific endeavors, are indefinitely large, in the sense of being far greater than what can be held in the head of any individual.
Incentives are simply irrelevant as far as the global social good is concerned.
> Stop citing single studies as definitive. They are not. Check if the ones you are reading or citing have been replicated.
And from the comments:
> From my experience in social science, including some experience in management studies specifically, researchers regularly believe things – and will even give policy advice based on those beliefs – that have not even been seriously tested, or have straight up been refuted.
Sometimes people rely on fewer than one non-replicable study: they invent a study and use that! An example is the "Harvard Goal Study" that is often trotted out at self-review time at companies. The supposed study suggests that people who write down their goals are more likely to achieve them than people who do not. However, Harvard itself cannot find any such study:
https://ask.library.harvard.edu/faq/82314
Definitely ignore single studies, no matter how prestigious the journal or numerous the citations.
Straight-up replications are rare, but if a finding is real, other PIs will partially replicate and build upon it, typically as a smaller step in a related study. (E.g., a new finding about memory comes out, my field is emotion, I might do a new study looking at how emotion and your memory finding interact.)
If the effect is replicable, it will end up used in other studies (subject to randomness and the file drawer effect, anyway). But if an effect is rarely mentioned in the literature afterwards...run far, FAR away, and don't base your research off it.
A good advisor will be able to warn you off lost causes like this.
And thus everyone citing it has a fatally flawed paper, if it's central to their thesis. So whoever proves the root is rotten should get their funding from this point forward.
I appreciate the convenience of having the original text on hand, as opposed to having to download it off Dropbox of all places.
But if you're going to quote the whole thing, it seems easier to just say so rather than quoting it bit by bit, interspersed with "King continues" and annotating each "I" with [King].
Being practical, and understanding the gamification of citation counts and research metrics today, instead of going for a replication study and trying to prove a negative, I'd go for contrarian research that shows a different result (one that possibly excludes the original result, or possibly doesn't, while still not confirming it).
Such studies probably have a bigger chance of being published, since you are providing a "novel" result instead of fighting the get-along culture (which, honestly, is present in the workplace as well). But ultimately they are harder to do (research-wise, though not politically), because they likely mean you have figured out an actual thing.
Not saying this is the "right" approach, but it might be a cheaper, more practical way to get a paper turned around.
Whether we can work this out properly in research is linked to whether we can work it out everywhere else. How many times have you seen people pat each other on the back despite lousy performance and no results? It's just easier to switch positions in the private sector than in research, so you'll find more people there who aren't afraid to call out a bad job, and, well, there's also the profit that needs to pay your salary.
Most of these studies get published based on elaborate constructions of essentially t-tests for differences in means between groups. Showing the opposite means showing no statistical difference, which is almost impossible to get published, for very human reasons.
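For illustration only, here is a minimal sketch of the kind of two-group mean-difference test being described; the group sizes, distributions, and seed are arbitrary assumptions:

```python
# Toy two-sample t-test for a difference in group means.
# The sample sizes, the (absent) true effect, and the seed are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=0.0, scale=1.0, size=40)
treatment = rng.normal(loc=0.0, scale=1.0, size=40)  # same distribution: no real effect

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A p-value above 0.05 here only means the test failed to detect a difference;
# it is not positive evidence that the groups are the same, which is part of
# why "no difference" results are so hard to get published.
```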
Could you also provide your critical appraisal of the article so this can be more of a journal club for discussion vs just a paper link? I have no expertise in this field so would be good for some insights.
Not even surprised. My daughter tried to reproduce a well-cited paper a couple of years back as part of her research project. It was not possible. They pushed for a retraction, but the university didn't want to do it because it would cause political issues, as one of the peer reviewers is tenured at another closely associated university. She almost immediately fucked off and went to work in the private sector.
Family member tried to do work relying on previous results from a biotech lab. Couldn’t do it. Tried to reproduce. Doesn’t work. Checked work carefully. Faked. Switched labs and research subject. Risky career move, but. Now has a career. Old lab is in mental black box. Never to be touched again.
I haven't identified an outright fake one, but in my experience (mainly in sensor development) most papers are at the very least optimistic or gloss over some major limitations in the approach. They should be treated as a source of ideas to try rather than counted on.
I've also seen the resistance that results from trying to investigate or even correct an issue in a key result of a paper. Even before it's published, the barrier can be quite high (and I must admit that since it wasn't my primary focus and my name was not on it, I did not push as hard as I could have).
Made me think of the black spoon error, off by a factor of 10, where the author also said it didn't impact the main findings.
https://statmodeling.stat.columbia.edu/2024/12/13/how-a-simp...
Talked about it years ago https://news.ycombinator.com/item?id=26125867
Others said they’d never seen it. So maybe it’s rare. But no one will tell you even if they do encounter it. Guaranteed career blackball.