
Reproducibility project fails to validate dozens of biomedical studies

220 points by rntn 22 hours ago | 107 comments

drgo 6 hours ago

The crisis in science can only be fixed by addressing the slew of bad incentives built into the system. We can't predicate the job security, promotion, and prestige of every early-career scientist on publishing as many papers as possible, and on obtaining grants (which requires publishing as many papers as possible), and then expect high-quality science. We can't starve universities of public funding and expect them not to selectively hire scientists whose main skill is publishing hundreds of "exciting" papers, and not to overproduce low-quality future "scientists" trained in the dark arts of academic survival. Reform is more urgent than ever; AI has essentially made obsolete the mental model that equates the count of published papers with productivity and quality.

Voultapher 3 hours ago

I can't say this enough: independent reproduction must be part of the process or we'll continue seeing this issue. As you say, it's the incentives. One solution that seems reasonably feasible for 95+% of research would be to lock away 30% or so of the research funds, to be given to another team, ideally at another university, that gets access only to the original team's publication and has the goal of reproducing the study. The vast majority of papers released don't contain enough information to actually repeat their work.

And since we are talking about science reform, let's start with the much easier and cheaper preregistration [1] which helps massively with publication bias.

[1] https://en.wikipedia.org/wiki/Preregistration_(science)

franktankbank 46 minutes ago

How about punishment for terrible behavior? If you design bad experiments, why are you a researcher? Fired. If you commit fraud, fined and fired. Weed out these fuckers.

constantcrying 51 minutes ago

But why is lying so common in science?

Incentives like these exist in basically all areas of work. Perform well and you get "job security, promotion and prestige". Yet somehow there is no decades-long ongoing crisis of corporations lying about their products. When these cases happen (and obviously they do), corporations and individuals get punished.

How would you reform the system? More funding definitely is not the answer.

crabbone 1 hour ago

Recently I read some lectures by Jacob Bronowski. If you've never heard of him, he was a sort of predecessor of personalities like Bill Nye or Neil Tyson: he wrote books popularizing science, gave simplified introductions to philosophical and scientific topics, etc.

He advocated (very naively, as it appears today) for science as a human endeavor with no incentive for falsification. His justification was that scientists have nothing to lose from being proved wrong. As an example, he gave a university dean who published works that were shown, over the course of a few decades, to be completely wrong, but who still retained his position (because his approach was valid and he never attempted to manipulate the truth; he just made an honest error).

But the more I think about how we came to this: in many human activities, it was often the case that whoever undertook them relied on their own wealth and wasn't incentivized to commercialize their discoveries. It was aristocrats, or monks, or people in some other occupation that made their lives affordable, and boring enough for them to look for challenge in art or science. Once science became professional, it started to be incentivized in the same way any other vocation is: make more of it--be paid more; make more immediately useful things--be paid more.

I don't know if we should return to the lords-and-monks system :) But I'm also doubtful that we can make good progress by pulling the levers on the financial incentives for commercializing science.

jl6 20 hours ago

It would be interesting for reproducibility efforts to assess “consequentiality” of failed replications, meaning: how much does it matter that a particular study wasn’t reproducible? Was it a niche study that nobody cited anyway, or was it a pivotal result that many other publications depended on, or anything in between those two extremes?

I would like to think that the truly important papers receive some sort of additional validation before people start to build lives and livelihoods on them, but I’ve also seen some pretty awful citation chains where an initial weak result gets overegged by downstream papers which drop mention of its limitations.
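
One hypothetical way to operationalize "consequentiality": score each paper by how many later papers transitively cite it, so a failed replication of a heavily-depended-on result counts for more than a failed replication of a paper nobody built on. A toy sketch (all paper names and the citation graph are invented for illustration):

    import networkx as nx

    # Directed edge A -> B means "paper A cites paper B".
    citations = nx.DiGraph([
        ("followup1", "pivotal"), ("followup2", "pivotal"),
        ("followup3", "followup1"), ("lone_citer", "niche"),
    ])

    def consequentiality(graph, paper):
        # Papers that directly or transitively cite `paper`.
        return len(nx.ancestors(graph, paper))

    print(consequentiality(citations, "pivotal"))  # 3: a failure here matters a lot
    print(consequentiality(citations, "niche"))    # 1: a failure here matters less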

0cf8612b2e1e 18 hours ago

It is an ongoing crisis how much Alzheimer’s research was built on faked amyloid beta data. Potentially billions of dollars from public and private research which might have been spent elsewhere had a competing theory not been overshadowed by the initial fictitious results.

pedalpete 14 hours ago

The amyloid hypothesis is still the top candidate for at least a form of Alzheimer's. But yes, the issues with one of the early studies have caused significant problems.

I say "a form of Alzheimer's" because it is likely we are labelling a few different diseases as Alzheimer's.

superfish 16 hours ago

I went searching for more info on this and found https://www.science.org/content/blog-post/faked-beta-amyloid... which was an interesting read.

SoftTalker 12 hours ago

Those studies were all run and paid for, many/most with public funding. Of course it matters.

SonOfLilit 8 hours ago

Reproducing a paper is Hard, and also Expensive. I'd expect that they wouldn't pick papers to try and reproduce at random.

jpeloquin 15 hours ago

The median sample size of the studies subjected to replication was n = 5 specimens (https://osf.io/atkd7), probably because only protocols with an estimated cost of less than BRL 5,000 (around USD 1,300 at the time) per replication were included. So it's not surprising that only ~60% of the original biochemical assays' point estimates fell within the replicates' 95% prediction intervals. The mouse maze anxiety test (~10%) seems to be dragging down the average. n = 5 just doesn't give reliable estimates, especially in rodent psychology.
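
For intuition, here's a minimal sketch of the prediction-interval check (not the project's actual analysis; this uses one common t-based formulation, and all the numbers are made up):

    import numpy as np
    from scipy import stats

    def replication_prediction_interval(mean_orig, sd_orig, n_orig, n_rep, level=0.95):
        # Standard error of the difference between the original and
        # replication means, assuming equal underlying variance.
        se = sd_orig * np.sqrt(1.0 / n_orig + 1.0 / n_rep)
        t_crit = stats.t.ppf((1 + level) / 2, df=n_orig - 1)
        return mean_orig - t_crit * se, mean_orig + t_crit * se

    # With n = 5 per study the interval is very wide: a hypothetical assay
    # with mean 10.0 and SD 3.0 gives roughly (4.7, 15.3), so even large
    # discrepancies can sit inside the 95% prediction interval.
    print(replication_prediction_interval(10.0, 3.0, 5, 5))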

yummypaint 4 hours ago

This should be the top comment on HN, where most users claim to have some grasp of statistics. N=5 implies a statistical uncertainty of about 45%, so they measured what one would expect, which is essentially nothing. Also, this is specifically about Brazilian biomedical studies, and it contains no evidence to support people's various personal vendettas against other fields in other countries. At least read the article, people.
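
(Back-of-envelope check of that figure, assuming the usual scaling where the relative uncertainty of a mean from n independent samples goes as 1/sqrt(n):)

    import math

    # Relative uncertainty of a mean estimated from n independent samples
    # scales as 1/sqrt(n); for n = 5 that's ~0.447, i.e. about 45%.
    n = 5
    print(f"1/sqrt({n}) = {1 / math.sqrt(n):.3f}")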