Hacker Remix

Comment on 2015 mRNA paper suggests data re-used in different contexts

159 points by picture 3 months ago | 81 comments

owlninja 3 months ago

I guess I'll bite - what am I looking at here?

the__alchemist 3 months ago

An (agarose?) gel.

There are partial holes (wells) in it at one end. You pipette a small amount of dyed, DNA-containing solution into each well, then apply an electrical potential across the gel. The DNA gradually migrates through it, and smaller fragments move faster, so at a given time you can coarsely measure the fragment size of each sample. The absolute scale is given by "standards", aka "ladders": samples containing fragments of multiple known sizes.
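As an aside, the ladder-based sizing described above can be sketched numerically. A common first-order model is that log10(fragment size) falls roughly linearly with migration distance; all numbers below are hypothetical, made up purely for illustration:

```python
import math

# Hypothetical ladder: known fragment sizes (bp) and the distance each
# band migrated through the gel (mm). Smaller fragments travel farther.
LADDER = [(10000, 10.0), (5000, 18.0), (2000, 28.5),
          (1000, 36.0), (500, 44.0), (250, 52.0)]

def estimate_size_bp(distance_mm, ladder=LADDER):
    """Estimate a fragment's size from its migration distance by fitting
    log10(size) as a linear function of distance (ordinary least squares)."""
    xs = [d for _, d in ladder]
    ys = [math.log10(s) for s, _ in ladder]
    n = len(ladder)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) \
        / sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    return 10 ** (slope * distance_mm + intercept)

# A band that ran 32 mm should land between the 2000 bp and 1000 bp
# ladder bands with this made-up ladder.
size = estimate_size_bp(32.0)
```

Real sizing software uses more careful interpolation, but this is the basic idea: the ladder calibrates distance to size, and unknown bands are read off the fitted curve.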

The paper authors cheated (allegedly) by copy + pasting images of the gel. This is what was caught, so it implies they may have made up some or all results in this and other papers.

shpongled 3 months ago

Close - this is an SDS-PAGE gel, and you run it with proteins. The bands in the first two rows are from a western blot (the gel is transferred to a membrane), where you use antibodies against those specific proteins to detect them. The Pon S row is Ponceau S, a dye that non-specifically stains all proteins - so it's used as a loading control, to make sure that the same amount of total protein is loaded in each lane of the gel.

doctorpangloss 3 months ago

Is it conceivable that the control was run once because the key result came from the same run? I can see a reviewer asking for it in all three figures, whereas the authors may have drafted it in only one.

gus_massa 3 months ago

The horizontal label is fine; it says "Pon S" in all images. (I guess a wrong label would be obvious for specialists to detect.)

The problem is the vertical labels:

In Figure 1e it says: "MT1+2", "MT2" and "MT1"

In Figure 3a it says: "5'-CR1", "CR2" and "3'-UTR"

In Figure 3b it says: "CR2", "CR3" and "CR4"

shpongled 3 months ago

Based on the images, it is inconceivable that these are from the same run (see the dramatically different levels of TRF-S in each gel; one column/lane = one sample). This isn't something that would be included only because a reviewer asked for it - loading controls are required to meaningfully interpret the results (i.e. the data are useless without such a control).

NotAnOtter 3 months ago

Additional context, to speculate about OP's intentions: within the academic world there was a major scandal where a semi-famous researcher was exposed for faking decades of data (Google: Pruitt). Ever since, people have been hungry for more drama of the same shape.

hummuscience 3 months ago

This is protein on a western blot but the general idea is the same.

owlninja 3 months ago

I love HN - thanks!

IshKebab 3 months ago

Faked scientific results.

sergiotapia 3 months ago

what happens to people who do this? are they shunned forever from scientific endeavors? isn't this the ultimate betrayal of what a scientist is supposed to do?

Palomides 3 months ago

if caught and it's unignorable, usually they say "oops, we made a minor unintentional mistake while preparing the data for publication, but the conclusion is still totally valid"

generally, no consequences

mrguyorama 2 months ago

Horseshit. All of the following scientists were caught outright faking results and as a result were generally removed from science.

Jan Hendrik Schön (who was even stripped of his PhD, which is not possible in most jurisdictions). He made up over 200 papers about organic semiconductors.

Victor Ninov, who lied about creating something like 4 different elements.

Hwang Woo-suk, who faked cloning humans and other mammals, lied about the completely unethical acquisition of human egg cells, and literally had the entire Korean government attempting to prevent him from being discredited; he was caught primarily because his papers reused pictures of cells. Hilariously, his lab successfully cloned a dog, which was considered difficult at the time.

Pons and Fleischmann didn't do any actual fraud. They were merely startlingly incompetent, incurious, and arrogant. They still never did real research again.

dylan604 3 months ago

There's a difference between having your black-plastic-cookware results be off by an order of magnitude through an "innocent" math mistake and deliberately reusing results to fraudulently mislead people with faked data.

Most people only remember the initial publication and the noise it makes. The updates/retractions generally are not remembered, resulting in the same "generally, no consequences", but the details matter.

gus_massa 3 months ago

The people in the area remember (probably because they wasted 3 months trying to extend/reproduce the result [1]). They may stop citing them.

In my area we have a few research groups that are very trustworthy, and it's safe to try to combine their results with one of our ideas to get a new result. Other groups have a mixed history of dubious results; they don't lie, but they cherry-pick too much, so their results may not be generalizable enough to use as a foundation for our research.

[1] Exact reproductions are difficult to publish, but if you reproduce a result and add a twist, it may be good enough to be published.

rcxdude 2 months ago

This is a general issue with interpreting scientific papers: the people who specialize in the area will generally have a good idea about the plausibility of the result and the general reputation of the authors, but outsiders often lack that completely, and it's hard to think of a good way to really make that information accessible.

(And I think part of the general blowback against the credibility of science amongst the public is because there's been a big emphasis in popular communication that "peer-reviewed paper == credible", which is an important distortion of the real message, "a peer-reviewed paper is the minimum bar for credibility" - and high-profile cases of incorrect results or fraud are obvious problems with the first statement.)

gus_massa 2 months ago

I completely agree. When I see a post here, I have no idea if it's a good journal or a crackpot journal [1]. The impact factor is sometimes useful, but the typical level in each area is very different. (In math, a usual value is about 1, but in biology it's about 5.)

Also, many sites just copy&paste the press release from the university, which often has a lot of exaggerations, and sometimes they add a few more.

[1] If the journal has too many single author articles, it's a big red flag.

rcxdude 2 months ago

Yes, I think science communication is also a big part of the problem. It's hard to do right but easy to do wrong, and few journalists care or have the resources to do it right (and the end result tends to be less appealing, because there's a lot less certainty involved).

f1shy 3 months ago

This guy made some videos about it

https://m.youtube.com/@PeteJudo1/videos

5mk 3 months ago

I've always wondered about gel image fraud -- what's stopping fraudulent researchers from just running a dummy gel for each fake figure? If you just loaded some protein with a similar MW / migration / concentration as the one you're trying to spoof, the bands would look more or less indistinguishable. And because it's a real unique band (just with the wrong protein), you wouldn't be able to tell it's been faked using visual inspection.

Perhaps this is already happening and we just don't know it... For this reason I've always thought gel images were more susceptible to fraud than other commonly faked images (NMR / MS spectra, etc., which are harder to spoof).

fabian2k 3 months ago

Gel electrophoresis data and Western/Southern/Northern blots are not hard to fake. Nobody seeing the images can tell what you put into each well of the gel, and for the blots, nobody can tell which kind of antibody you used. It's still not totally effortless to fake, as you have to find another protein with the right weight, which is not necessarily something you have just lying around.

I'd also suspect that fraud does not necessarily start at the beginning of the experiments, but might happen at a later stage when someone realizes their results didn't turn out as expected or wanted. At that point you already did the gels and it might be much more convenient to just do image manipulation.

Something like NMR data is certainly much more difficult to fake convincingly, especially if you'd have to provide the original raw datasets at publication (which unfortunately isn't really happening yet).

dxyms 3 months ago

Or, from my own experience: suddenly realizing you forgot to take a picture of the gel (or lost it?) and all you have are the shitty ones.

jpeloquin 3 months ago

Shifting the topic from research misconduct to good laboratory practices: I don't really understand how someone could forget to take pictures of their gels often enough to feel it necessary to fake data. (I think you're recounting something you saw someone else do, so this isn't criticizing you.) The only reason to run the experiment is to collect data. If there's no data in hand, why would they think the experiment was done? Also, they should be working from a written protocol or a short-form checklist so each item can be ticked off as it is completed. And they should record where they put their data and other research materials in their lab notebook, and copy any work (data or otherwise) to a file server or other redundant storage, before leaving for the day. So much has to go wrong to get from a little forgetfulness to research misconduct and fraud.

I mean, I've seen people deliberately choose to discard their data and keep no notes, even when I offered to give them a flash drive with their data on it, so I understand that this sort of thing happens. It's still senseless.

dylan604 3 months ago

Isn't this the plot of pretty much every movie about science research fraud? When Richard Kimble was chasing his one-armed man, the trail led to a doctor reusing the same data to make the research look good. I know this is not the only example.

kylebenzle 3 months ago

"Whats stopping?" nothing, and that is why it is happening constantly. A larger and larger portion of scientific literature is riddled with these fake studies. I've seen it myself and it is going to keep increasing as long as the number of papers published is the only way to get ahead.

hinkley 3 months ago

You switched the samples! In the pathology reports! Did you kill Lentz too!?

smusamashah 3 months ago

They have a playlist of 3500 videos showing images like this one

https://youtube.com/playlist?list=PLlXXK20HE_dV8rBa2h-8P9d-0...

k2enemy 3 months ago

I was curious how the video creators were able to generate so many videos in such a short timeframe. It looks like it might be automated with this tech: https://rivervalley.io/products/research-integrity
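I don't know what that product uses internally, but one classic building block for flagging reused image regions is normalized cross-correlation (NCC) between patches: a score near 1.0 suggests one patch may be a copy of the other. A minimal pure-Python sketch with toy data (not the tool's actual method):

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equally sized grayscale
    patches, given as flat lists of pixel values. Returns a value in
    [-1, 1]; values near 1.0 indicate near-identical patches."""
    assert len(a) == len(b)
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a)
                    * sum((y - mb) ** 2 for y in b))
    return num / den if den else 1.0

patch = [float(v) for v in range(16)]  # a toy 4x4 patch, flattened
copy_ = list(patch)                    # an exact duplicate: NCC = 1.0
other = list(reversed(patch))          # a very different patch
```

Real screening tools also have to handle rotation, rescaling, and contrast tweaks, which is why duplicated panels can slip past casual visual inspection but still get caught by automated comparison.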

bArray 2 months ago

Very cool. I wish these guys had a podcast discussing high-profile papers, how influential they are, and what sorts of projects have been built on top of them, and then going "uh oh, it looks like our system detected something strange about the results".

I wish wish wish there was something similar for computer science. If I got paid for every paper that looked interesting but could not be replicated, I would be rich.

snowwrestler 3 months ago

There is so little content and context to this link that it is essentially flame war bait in a non-expert forum like HN.

netsharc 2 months ago

I smell this too, especially with the editorialized HN title that contains the word "mRNA".

picture 2 months ago

The title was edited, supposedly by HN moderators, after I posted it. I actually ran into this YouTube channel and thought it was very interesting, since I hadn't realized academia seems to make so many mistakes all the time. https://news.ycombinator.com/item?id=42728742