Harvard Teaching Hospital Seeks Retraction of 6 Papers by Top Researchers (wsj.com)
106 points by Jimmc414 on Jan 22, 2024 | hide | past | favorite | 53 comments



I think the Crimson article mentioned in this article has better info about the actual details of the alleged manipulation: https://www.thecrimson.com/article/2024/1/22/dana-farber-iss...


I do sometimes wonder if the same attributes that put people at the top of their fields are the ones that make them more willing to sacrifice their integrity for prestige... It seems like this just happens too frequently (or maybe it just appears that way because these cases are reported in the media).


I'm not sure what direction of causation you have in mind but as someone in academics (tenured professor etc) I think the answer is definitely yes.

One thing that should be setting off alarm bells now is how often these scandals in the last couple of years have involved people who are in very high-level administrative positions at these institutions. Not only because of the values that might be instilled downward in the future, but also what it says about what has been valued already to reach those positions.

I have seen and heard stuff like this routinely that never makes the press. Not everyone in academics is corrupt, but the rot is prevalent enough that it's pretty systemic at this point and affects everyone. I think people sometimes don't even realize what they're suggesting, it's so common.

I have a theory that as some indicator of success deviates from a normal tail, there's more likely to be corruption or luck involved. The incentives just don't work the other way. But I'm biased based on my experiences, which reflect one domain of modern society.


Academics with surprising results are rewarded with more fame, prestige, funding opportunities, etc. It's not a big surprise that people naturally cheat, in big or small ways.

The general public often think that academia is merit-based with the smartest being rewarded, but as you know it's a more complicated picture than that. You're not alone in your thinking; I'm pretty sure everyone in academia recognizes the problems. It's just that enough people benefit from the current incentive structures that the occasional scandal isn't enough for academics to reassess the predominant paradigm.


> I have a theory that as some indicator of success deviates from a normal tail, there's more likely to be corruption or luck involved.

And/or abuse, coercion, and exploitation of others.


In relation to both the quoted sentence and your addition, I have seen speculation that where once "being in the clergy" was esteemed and attracted predators and psychopaths seeking cover, today that attraction is toward positions in scientific work as cover.


If you take incentivization to the extreme, look no further than the publishing criteria for academic positions in China. https://www.nature.com/articles/463142a


Anecdotally, I’ve been doing a lot of research lately on recommender systems, and it blows me away how many Chinese papers there are (papers written by researchers in China) and how many claim to beat the current “state-of-the-art,” often without saying what that is.


No joke, I was just reviewing a paper for NAACL a couple of weeks ago and the authors made up a brand new task, then stated that they had achieved the state-of-the-art on it. In my review I mentioned that was a meaningless statement for a new task, but I guess some researchers are incentivized to claim that in their papers.


I can't trust that research until it's published in a more trustworthy journal than Nature. ;-)

Crazy that Nature published an article highlighting fraud concerns about Nature's own publications, but Nature has no plans to reduce that fraud.


Worth noting that with success also comes scrutiny. The "causation" here may just be the added scrutiny.


This is a broad issue with any kind of hierarchical power structure, no matter what it is ostensibly about. Once you have hierarchy, you have politics as a profession/lifestyle, and sociopaths always win that game in the long run.


Uh no? Every functioning system we have on the planet is hierarchical. And each system has a different degree of effectiveness & success with some being more corrupt than others.

Just dismissing everything as "welp, that's just hierarchical power structures" is dumb.


There are many working systems that are small enough that they don't need any hierarchy, and function just fine.

Larger systems do require some degree of it, yes. But the ones that have more hierarchy and where it is more rigid, inevitably end up with more of this kind of thing.


> or maybe it just appears that way because these cases are reported in the media

When thinking about it more, I think that is also highly likely. In many of these cases the most "famous" author is the last one on the paper, but is the first to be highlighted in the media. That is, oftentimes the work was just done in their labs and they did little more than review. But there is such pressure for those toiling in the labs to make a "big" splash that it's not hard to think that some small number of them would be willing to cheat. This line from the original blog post (https://forbetterscience.com/2024/01/02/dana-farberications-...) reporting on the findings is telling:

> “You should reach out to Dr. Hidde Ploegh- the first author, Dr. Boaz Tirosh was in his laboratory.”

> Well Laurie, it’s your paper, if you care about it being correct, you could very well reach out yourself!

While I agree that Laurie Glimcher should be very concerned about any research misconduct done under her authorship, I think it's premature to get out the pitchforks when there are potentially many levels of indirection here.


They know, and if they don't, they should not be a co-author.


Baloney. That's literally not how academic papers work in any part of the world. At some point you need to be able to trust your co-authors aren't lying. For example, co-authors of Francesca Gino (the Harvard Biz School prof accused of falsifying data) had to start a project, https://manycoauthors.org/, to essentially compare notes. I'd also consider some of these co-authors victims here.

To be clear, I'm not saying any of the authors get a free pass, but different authors have vastly different levels of culpability.


Yes.

For a sector/area/industry that deals with "hard facts" and Science, it's "surprising" how much these groups and institutions run on prestige, greed, and ego.

We are talking about institutions run by these people, with annual budgets of BILLIONS of dollars. Sometimes I feel most people view these schools and institutions as just one step above their local high school; nothing is further from the truth.


I.e. people are human, regardless of their position, prestige, etc.


I can think of three former colleagues that have had very successful careers and that are also gifted liars (one stands out as also being a pathological liar.) None of them are ever going to be reported in the media.


Great people usually aren't good people...

Or, to put it differently, the unquenched desires of acquisition, rivalry, vanity, and power lead people to do things they oughtn't.


> Great people usually aren't good people...

The flip side of this is most societies’ model citizens are highly compliant, possibly even supplicant. Entire categories of rudeness are, in essence, about not challenging authority and convention.


>wonder if the same attributes that put people at the top of their fields are the ones that make them more willing to sacrifice their integrity for prestige

I have an HLS lawyerbro who absolutely attests to this theory. "You hear that? That's `pride` fuckin'with'ya."


The original blog post alleging falsified data was published by Sholto David earlier this month: https://forbetterscience.com/2024/01/02/dana-farberications-...


This is an incredible post, thanks for the link! The researchers are quite literally photoshopping study images! How is this even legal? Plain vanilla fraud.


It's confusing because some of the incidents involved image manipulation but apparently no evidence of deceit. Others involved data collected at labs not belonging to the four accused authors. Six papers are being retracted, yet that article gives no real details on who did what and what exactly happened.


“No evidence of deceit” but the data show no effect while you accidentally published an image from some supposedly “early/exploratory” analysis that does show an effect.

What would be the evidence of deceit you’d expect to find here? A video of a monologue by the evil villain disclosing their intention to deceive readers because they believe no one will reproduce their analysis before they get their promotion?


I agree. You can infer that falsely creating positive results leads to funding, which leads to prestige, fancy dinners and fancy homes, fancy educations, and large inflated egos.


I've been thinking about how with the rise of LLMs, we're going to uncover A LOT of "bad studies" over the next decade. Could be some sort of mass reckoning. Probably better to admit it all now than be uncovered in 5 years.


how would LLMs help with uncovering "bad studies"? I don't think LLMs are yet sophisticated enough to figure out potential plagiarism or Adobe Photoshop copy-and-paste (as mentioned in the article above).


Well you can train AIs to uncover image fraud and detect manipulation that is well known (imagetwin does this).

Presumably the same can be done for tabular data and genomic or "omics" data. At the very least statistical techniques can be used. I imagine high throughput imaging modalities will be the main target.

That being said, the only way the -omics data will have utility is via AI models trained on some task... which present their own problems.
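
To make the "statistical techniques" point concrete, here's a toy version of the kind of last-digit screen forensic statisticians sometimes run on reported tables. Everything here (the function name, the uniform-last-digit assumption, treating a small p-value as a flag rather than proof) is just a sketch of mine, not anything from the article:

    # Toy sketch: terminal digits of measured values reported to fixed precision
    # should be roughly uniform; a strong skew is a reason to look closer,
    # never proof of fraud on its own.
    from collections import Counter
    from scipy.stats import chisquare

    def terminal_digit_screen(reported_values):
        # reported_values: the numbers exactly as printed, e.g. ["12.34", "8.91", ...]
        last = [v[-1] for v in reported_values if v and v[-1].isdigit()]
        counts = Counter(last)
        observed = [counts.get(str(d), 0) for d in range(10)]
        expected = [len(last) / 10.0] * 10
        return chisquare(observed, f_exp=expected)  # small p-value => worth a manual look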


The OP paper is easily handled by GOFAI from the 1980s; it's just detecting similar images.

You don't need a language model. The effort is in collecting and chopping the data to find snippets to compare. NNs can help with that.
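
For what it's worth, a minimal sketch of that kind of snippet comparison: plain OpenCV template matching, no neural net. The patch size and the 0.95 threshold are arbitrary assumptions on my part, and real tools (imagetwin etc.) surely do far more normalization and candidate generation:

    # Toy sketch: slide fixed-size patches from figure A over figure B and flag
    # near-duplicates via normalized cross-correlation. Patch size and threshold
    # are made-up numbers for illustration only.
    import cv2

    def find_duplicated_patches(img_a, img_b, patch=64, thresh=0.95):
        a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
        b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
        hits = []
        h, w = a.shape
        for y in range(0, h - patch + 1, patch):
            for x in range(0, w - patch + 1, patch):
                res = cv2.matchTemplate(b, a[y:y+patch, x:x+patch], cv2.TM_CCOEFF_NORMED)
                _, max_val, _, max_loc = cv2.minMaxLoc(res)
                if max_val >= thresh:
                    hits.append(((x, y), max_loc, float(max_val)))
        return hits  # [(where in A, where in B, similarity score), ...]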


You could probably just train it by taking image segments and duplicating them artificially.

Newer fraud will probably use generative diffusion AIs to make "realistic western blots" on demand... there's probably a paper in that!
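
A toy version of that augmentation idea, just to be concrete (names and sizes are made up, and a real pipeline would add rotations, rescaling, compression artifacts, etc.):

    # Toy sketch: build a labeled positive example by copy-pasting a random crop
    # back into the same image, so a detector can be trained on known duplicates.
    import random

    def make_synthetic_duplicate(img, patch=48):
        h, w = img.shape[:2]
        y1, x1 = random.randint(0, h - patch), random.randint(0, w - patch)
        y2, x2 = random.randint(0, h - patch), random.randint(0, w - patch)
        out = img.copy()
        out[y2:y2+patch, x2:x2+patch] = img[y1:y1+patch, x1:x1+patch]
        return out, ((x1, y1), (x2, y2))  # image plus the known duplicated pair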


> For now we see through a glass, darkly; but then face to face: now I know in part; but then shall I know even as also I am known.

Agreed that de-anonymizing will become trivial. This will be a problem not only for bad actors in research, journalism, creative writing, etc., but for internet commenters who believed they'd done due diligence to remain anonymous, and even for research participants who'd expected anonymity when signing their consent forms. We're rushing headlong into even stranger days!


I believe the complete opposite - at least for internet commenters, one could easily use an LLM to generate one's responses.


I would prioritize looking for financial and corporate fraud. It would have a much bigger impact on society than looking for any problems with academic studies. If we can take down those people and bar them from ever having anything to do with finance in the future, I think that would have an important impact on the ethics and behavior of the next generation.


It seems to me that taking down fraudulent academic researchers and barring them from ever having anything to do with research in the future would also have a significant impact on the ethics and behavior of the next generation. If technology is lowering the barriers to fraud detection, why should it be applied to one sector over another?


That's a fair point. However, part of the politics of the situation is that the right end of the spectrum appears bound and determined to discredit any academic or fact-based professions. They also seem dead set on protecting oligarch and oligarch-lite players from criticism or critical examination.

Academic fraud seems to be self-correcting, as we have seen from the numerous reports of fraud and withdrawn papers; as a rule, the players express shame and remorse. Nonacademic fraud doesn't exhibit these characteristics, and the perpetrators sometimes seem proud of the fraud.


These researchers work for corporations too.


And corporate researchers do commit fraud as well. Look at the number of drug trials that hide results that would interfere with the approval of a drug.


Why is it that with elites it's always "we lack incontrovertible evidence of bad behavior, so we can't take substantial disciplinary action," but never "we lack incontrovertible evidence of good behavior, so we can't give an elite position and massive pay for another year"? Meanwhile, regular common folks can be executed by police on transient circumstantial whims.

The higher the stakes, the lower the standards.


Cheating in academia is the perfect crime. Cheaters conducting studies on cheating. If you don’t know if a system is consistent and you ask it to determine if it’s consistent. Brilliant!


But it should be possible for a system to prove its consistency, right?


Imagine this is the selector, the great filter, of who gets to thrive in a field. One department head who publishes faked/made-up papers poisons the whole surrounding environment, killing science around him for 10 years. Now imagine the whole field poisoned, where it takes some outlying independent research institute, detached from the field, to bring up evidence to bring the cartel down. Science should have more of these outside-the-field quality-check stations that bring down fraudsters. No original research there, just reproduction of experiments central to theories and vital to the name of those leading the field.


I suspect the only way to fight this is to make the papers freely available so anyone can look at them for flaws and dishonesty.


A libertarian such as yourself should be comfortable with people running an academic fraud detection business, paying for access to papers, and accepting payment for their investigations. Recent events at Harvard show that plenty of interested parties are willing to pay to prove that their opponents in some arena are cheating. Also, MrBeast and the rest of YouTube show that otherwise unprofitable activities can be profitable when presented in an entertaining way for ad views.



I don’t trust anything coming out of academia anymore. First we had over 50% of psychology being nonsense, then sociology (not surprising), then the hard sciences too. But then we also have the rampant ideology problem where you are forbidden from even researching certain topics/questions and if you do, you are blacklisted. They need a hard reckoning. What happened to science? Who cares what the ideological implication is? The truth is the truth.

The icing on the cake is that when these frauds retract their papers, NOTHING happens to them. Nothing.


Academia publishes, because academics are forced to publish. Psychology has a subject that's harder than physics, both experimentally and theoretically, and bad research practices. Sociology we can politely ignore. So yes, it has turned into a mill that produces garbage.

Still, some things are worth researching, but finding out what's true (or true-ish) will take a lot of time. Don't trust findings that haven't been reproduced, stay skeptical of theories that hinge on a far-reaching interpretation of the data, and downright ignore publications with surprising claims. It's not nice, but it's realistic.


Hold on, I've heard of the replication crisis - though I don't know the scale - but are you saying that over 50% of "hard science" is bunk? I find that hard to swallow.


Not addressing the parent's specific claim, but there was recent discussion of a disturbingly high proportion of studies in one field being fake/flawed: https://news.ycombinator.com/item?id=37572394

> "For more than 150 trials, Carlisle got access to anonymized individual participant data (IPD). By studying the IPD spreadsheets, he judged that 44% of these trials contained at least some flawed data: impossible statistics, incorrect calculations or duplicated numbers or figures, for instance. And in 26% of the papers had problems that were so widespread that the trial was impossible to trust, he judged — either because the authors were incompetent, or because they had faked the data."


I don’t think the hard sciences are at 50%, but the rate is still too high. And that’s just the data people looked at. There are so many papers and studies being submitted, who knows how many times a researcher fudged a few values to make the effect size bigger? I personally witnessed this in academia.

https://en.wikipedia.org/wiki/Replication_crisis




