Does anesthesiology have a problem? Final version of report suggests Fujii will take retraction record, with 172
July 8, 2012 5:22 AM Subscribe
In the wake of a very thorough and damning statistical analysis of 168 of his papers, published in March, Japanese investigators have concluded that Yoshitaka Fujii fabricated his results in at least 172 published studies, shattering the previous record for most retracted papers. Considered an expert in postoperative nausea and vomiting, Fujii drew scrutiny in 2000 for his "incredibly nice" findings, but he continued to publish prolifically for more than a decade. Here are the published results of the Japanese Society of Anesthesiologists' Special Investigation Committee, with an annotated list of all of his papers (PDF). Retraction Watch weighs in, considering not only the depth of Fujii's betrayal but also whether the discipline of anesthesiology itself has a problem.
It seems clear now that a significant part of the problem was that, as suspicions continued to grow, there was no party with an obvious responsibility to investigate. Fujii's papers were spread across dozens of journals, the authors came from dozens of institutions, and he appears to have had a wide variety of funding sources. In the end, he was caught as a result of informal investigations by suspicious colleagues that led to the damning analysis by Carlisle (2012) for the journal Anaesthesia, which found that the likelihood that Fujii's papers were based on real data approached 1 in 10^33.
The previous record for the largest number of retracted papers, 89, was held by Joachim Boldt, also an anesthesiologist, and many of the same journal editors were involved in exposing both men.
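A note on how a number like 1 in 10^33 is even possible: Carlisle compared the baseline data reported across Fujii's trials against the distributions genuine randomization would produce. The sketch below is a toy illustration of that idea, not Carlisle's actual procedure (he examined means, standard deviations, and categorical outcomes far more carefully): under honest randomization, p-values for baseline differences between trial arms are roughly uniform, while hand-balanced fabricated groups pile up suspiciously near p = 1. All names and numbers here are invented for illustration.

```python
import math
import random

def two_sample_p(m1, s1, m2, s2, n):
    """Two-sided p-value for a difference in group means (normal approximation)."""
    se = math.sqrt(s1 ** 2 / n + s2 ** 2 / n)
    z = (m1 - m2) / se
    return math.erfc(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))

def mean_sd(g):
    m = sum(g) / len(g)
    s = math.sqrt(sum((x - m) ** 2 for x in g) / (len(g) - 1))
    return m, s

def simulate_trial(fabricated, n=40):
    """Return the baseline p-value for one simulated two-arm trial."""
    g1 = [random.gauss(50, 10) for _ in range(n)]  # e.g. patient age, arm 1
    if fabricated:
        # A fabricator 'balancing' the arms by hand: arm 2 is arm 1 plus noise.
        g2 = [x + random.gauss(0, 0.5) for x in g1]
    else:
        g2 = [random.gauss(50, 10) for _ in range(n)]  # honest randomization
    (m1, s1), (m2, s2) = mean_sd(g1), mean_sd(g2)
    return two_sample_p(m1, s1, m2, s2, n)

def ks_from_uniform(pvals):
    """Kolmogorov-Smirnov distance of a sample from Uniform(0, 1)."""
    pvals = sorted(pvals)
    n = len(pvals)
    return max(max((i + 1) / n - p, p - i / n) for i, p in enumerate(pvals))

random.seed(0)
honest = [simulate_trial(False) for _ in range(200)]
faked = [simulate_trial(True) for _ in range(200)]
print(ks_from_uniform(honest))  # small: consistent with genuine randomization
print(ks_from_uniform(faked))   # large: p-values bunch suspiciously near 1
```

Evidence of this sort, accumulated across 168 papers, is what drives the combined probability to something like 1 in 10^33: no single trial proves anything, but data that is "too consistent" over and over is itself a red flag.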
Great post! I heard about this the other day, and wondered just how he managed to pull it off. Seems there might be something of an answer in this collection of links. Lots to read, but apparently his supervisor turned a blind eye to the magically appearing data, as well as the co-authors who didn't have a clue they were supposed to have co-authored anything.
Is there anything like a dishonorable discharge from science? Because I kind of think this guy deserves more than just being fired on the spot and never hired again.
At the same time, it would be interesting to shine a spotlight at the pervasiveness of the publish-or-perish mentality, and whether paper volume is really how scientific accomplishment should be measured.
posted by harujion at 5:48 AM on July 8, 2012 [1 favorite]
Ugh. I talk to my students about this sort of thing a lot. The reason why academic dishonesty is so bad that students get expelled for it is because trust is one of the main currencies of scholarship. No matter how brilliant you are, if your work is not 100% reliable, it's worse than useless; it corrodes other research.
It's also, I suspect, very easy to slippery slope your way into -- faculty are not as alert as they should be to student dishonesty (for a bunch of reasons that are not really relevant here), successful student dishonesty can lead pretty easily into minor "data fixing," that can lead into fabrication, and so on. And, once you get away with it, I imagine it gets easier and easier to do. And the impact of a fraudulent paper (or worse, a fraudulent career) compounds over time, undermining later research.
Since so much of the scholarly editing and reviewing is done gratis and not heavily weighted in promotion and tenure, the check system, while generally good, is fragile. Maybe journal publishers should be liable for publishing fraudulent research....
posted by GenjiandProust at 6:11 AM on July 8, 2012 [4 favorites]
Requiring open access of data sets might also help identify these problems earlier. There are issues with that (researchers are reasonably afraid of being "scooped," for one), but at least some of the reasons might be undercut by changing what constitutes a co-author or providing some other form of recognition for producing especially reliable and broadly useful data.
posted by GenjiandProust at 6:18 AM on July 8, 2012
If "publish or perish" wasn't the prevailing mindset, would this be less of a problem?
If scientists felt that their jobs were secure despite years of disappointing or unexciting findings, I feel like some of the temptation to lie would be removed.
posted by edguardo at 6:27 AM on July 8, 2012
I've mentioned it previously, but if you want to read about another case of massive scientific fraud, read Plastic Fantastic. It shows just how easy it is for reviewers and other safeguards to fail.
On preview, don't forget Anil Potti, either, who somehow found another job...
posted by ssmug at 6:32 AM on July 8, 2012
What I find interesting about his case is how long it was between the initial suspicions and the eventual retractions. The letter to the editor in 2000 by Kranke et al. all but screamed "We suspect fraud!" (albeit in a very polite manner). Fujii's response to the letter is... nothing but smoke. Despite this, Fujii published dozens more articles over the following decade without consequences, until Carlisle published his devastating takedown last March.
posted by RichardP at 6:35 AM on July 8, 2012
Gyan: "So the peer reviewers were sleeping on the job?"
It is more that they were lied to and that Fujii was actively gaming the system to avoid conflicting paper trails. It isn't really the job of peer reviewers to perform the specific kind of independent statistical analysis that was the final nail in the coffin of Fujii's career. They don't really have the resources or time, and the whole system of peer review absolutely does and should rely quite heavily on trust in honesty. In the very beginnings of western science one was often required to make an oath before God, on one's honor as a Catholic gentleman, attesting to the good faith and honesty of one's reports before they would be published; the specifics have obviously changed, but the basic concept remains intact. Science still relies on the idea of Trust but Verify.
If we were going to make a list of entities that failed in their duty to discover and report the fraud, peer reviewers would be towards the end of it. Behind Fujii, of course, there would be Hidenori Toyooka, who clearly recognized the fraud and not only failed to report it but continued to publish with Fujii. Hopefully there are similar investigations into Toyooka's work. Then there is Toho University, which failed to conduct a successful investigation after the 2000 letter that raised oblique questions of authenticity, and then failed to meet the deadlines imposed by the journals Fujii published in. Then there are Fujii's other colleagues and co-authors, who are exonerated in the JSA report; perhaps many of them were entirely unaware, but all of them certainly couldn't have been. Over the last decade there had to have been whispers, and the surface layers of fraud would have been immediately detectable with a non-trivial, but not difficult, amount of administrative checking. There had to have been a small community of people who knew but found it more convenient to do nothing and let him continue publishing.
harujion: "If there is anything like a dishonorable discharge from science? Because I kind of think this guy deserves more than just being fired on the spot and never hired again."
Over the next year or so he will likely have his degrees revoked, his association memberships rescinded, and large formal barriers put up between him and funding. In addition to the thousand little indignities, his stained name will be plastered everywhere; anesthesiology is a small enough community that I'm sure he will never fail to be recognized as THAT GUY.
I'm not sure if this counts, but I once knew a post-doc who, as a graduate student, had his whole lab totally fucked over by attempting to follow up on the fraudulent results of a graduate student in a different lab, which they closely collaborated with. The two PIs were totally devastated, but the graduate students decided they needed some kind of ritual to process the event. They printed out the offending emails and took the two fraudulent papers out of their files, and made some kind of paper mache doll thing to burn it in effigy in the woods. I'm told it was cathartic.
posted by Blasdelb at 6:37 AM on July 8, 2012 [19 favorites]
Ironically, it's the non-academic publishing platforms that often have more of a "citation, please" attitude in today's read/write web world. Perhaps this implies a larger, deeper look at the whole system is needed?
A recent conversation with a former colleague from academia made me think about this "publish or perish" mentality - a mutual friend had attacked her article so strenuously that it could imply nothing but competitive proactive destruction of a line of thinking. Luckily, all I do is blather on "just a blog".
posted by infini at 6:44 AM on July 8, 2012
I think people are misperceiving this as a scandal of scholarship. It is not. This is medicine, so it directly affects people's lives. And the timing of this research explains a lot.
Fujii's faked research was about using drugs to control post-op nausea and vomiting. I don't like to get into specifics of my medical situation, but the timing and influence of Fujii's early studies may have impacted me directly during a surgery. This would have happened just as the faked studies were being released and were starting to gain influence. It is entirely possible that as a result of Fujii's bullshit data that other doctors believed in and that they used to adjust their medical practices, I vomited just as I was coming off anaesthesia, at the very moment they removed the tracheal tube. I aspirated, choked to death on the operating table, and had to be revived by the anaesthesiologist and surgeon. I remember waking unexpectedly as this happened, and I remember it happening. I remember my panic, and the panic of the medical team. It has haunted my memories for many years.
posted by charlie don't surf at 6:50 AM on July 8, 2012 [18 favorites]
GenjiandProust: "Maybe journal publishers should be liable for publishing fraudulent research..."
Liable to who? Particularly for small journals, the massive amount of work that needs to be donated by some of the busiest people on Earth and the still significant amount of money still needed to fund them already requires choices between incredibly shitty solutions. We don't need to add on liability paperwork and insurance to that load. Holding Universities liable in the right way might be able to get them to form committees with teeth earlier, but also might just cause them to bury shit deeper.
posted by Blasdelb at 6:51 AM on July 8, 2012 [4 favorites]
...Hidenori Toyooka who clearly recognized the fraud and not only failed to report it but continued to publish with Fujii. Hopefully there are similar investigations into Toyookas work.
I agree, Blasdelb. I can't figure out the lack of action on the part of Toyooka. I read the report of the Special Investigation Committee, and it's hard to escape the conclusion that either Toyooka was in collusion with Fujii or was bizarrely willing to ignore Fujii's malfeasance.
posted by RichardP at 7:07 AM on July 8, 2012
Liable to who?
Well, good question, and I don't have a ready answer. However, as charlie don't surf points out, medical research has an added degree of danger -- if a patient is injured by a technique based on fraudulent research, doesn't the publisher, who profited by the publication, have a role in that? Blaming the institution employing the researcher is also attractive, but better oversight would mean more administrators rather than more faculty, and the current ratio between the two groups is already an issue.
posted by GenjiandProust at 7:09 AM on July 8, 2012
A lot of these cases come from the culture of publish or perish. One of the key metrics of success is the number of research papers and the prestige of the journals they appear in; in this context, success = funding/promotion/attracting workers to your lab. Thus, there is pressure to game the system by getting more, relatively worthless MPU (minimal publishable unit) papers out in crappy journals, or by fudging the data to up the prestige of mediocre papers.
When a less-than-successful researcher can't even take the above two measures, there's always outright fraud, as in this case. Sometimes the fraudsters are just nuts/egotists/mentally ill, but sometimes they are responding in a spectacularly immoral but economically rational way, the same way that serial burglary is a way to improve your lot if you don't have easy access to resources.
posted by lalochezia at 7:25 AM on July 8, 2012 [1 favorite]
Maybe journal publishers should be liable for publishing fraudulent research....
There's no way that a journal can routinely check for more than the most obvious frauds. It would just be impossible. They don't have the original data, access to the equipment or reagents, the expertise, the funding, or the time. And after this expense, nothing would stop the authors from shopping the paper around to another journal, which would have to repeat the above expense or miss the fraud.
Scientific publishing is predicated on responsibility for a paper lying with the authors. Peer review can find problems with experimental design or conclusions, given an honest description by the authors.
Dishonesty in science is a problem, but it is not so widespread as to require massive changes in the way papers are reviewed to improve detection, if that were even possible. The biggest changes that need to be made are what happens after misconduct is alleged or proven. The problems are with both institutions and journals:
- Primary responsibility for investigating scientific misconduct lies with the researcher's institution. The Public Health Service has an Office of Research Integrity that reviews these investigations where work funded by the PHS (including the National Institutes of Health) is involved, but they generally wait for the institution to finish first. Obviously, the university has a vested interest in not seeing one of their star faculty be publicly exposed as a fraud, for public image reasons, of course, but also because the institution might lose hundreds of thousands to millions of dollars in external support in the researchers' grants. So there is a big incentive for them to minimize whatever is found, or not to find anything. Worse, these investigations are not routine (see above), so the institution often has no established procedure for this, and the investigation might be convened by the dean of the college where the investigator works, who has even more of an incentive to not find misconduct. The scientific experts needed for such an investigation are often the researcher's colleagues on the faculty, who, even if they didn't have potential conflict of interest problems, are also extremely busy. So these investigations move at a glacial pace. They take way too long.
- Journals handle alleged misconduct poorly as well. This is the stage where I think journals should perhaps attract some liability. Journals, too, have a conflict of interest in dealing with these allegations, since they look bad if they are seen to have allowed fraudulent research to be published. They often punt allegations to the researcher's institution completely (if they do anything). Or maybe an editor will look into it seriously—in some cases the same editor who made the original decision to accept the paper. For a case of proven misconduct, the journals drag their feet on retracting the papers, or quietly issue retractions with no reasoning or context, which means that other researchers in the community have no idea that they were relying on fraudulent research, even if they manage to find the retraction notice. Then the journal refuses to comment when scientists or reporters ask what happened. The annals of Retraction Watch are full of this phenomenon.
posted by grouse at 7:32 AM on July 8, 2012 [8 favorites]
I was thinking about scientific fraud and unaccounted-for factors (not related, except for the potential results) the other day in regards to the LHC and the recent announcement of the finding of the Higgs boson. The LHC is unique; there isn't really any way for someone to independently verify the recently announced findings. If there is a problem in the theory or the execution of the construction of the device, there isn't currently any way to ferret that out if the flaw creates results that conform to our expectations. And that is wholly outside the potential for someone in the chain of experimentation to fudge results. Like Ken Thompson's Reflections on Trusting Trust, or the rumour that the published nuclear bomb research is fatally flawed as a security feature, the thought that the Higgs boson research could be flawed accidentally or intentionally was a bit of a mind bender.
posted by Mitheral at 7:43 AM on July 8, 2012
Ironically, its the non academic publishing platforms that often offer more of a "citation please" attitude
I don't think this is really true, and furthermore it is publications like the ones in question that organizations like Wikipedia inherently trust for citations. A 'citation' in this kind of case is an actual experiment, not some other paper. To get the analogue of an extra citation here, one would need to do the kind of massive statistical analysis reported in the article, or fully replicate the experiments. Replication is extremely costly, time-consuming, and typically not rewarded in any way by the current infrastructure of science. (For example, a replication is usually unpublishable.) Peer review / providing more citations can't easily solve this problem, though I think one would have to be exceptionally clever / skilled to game the system for this long. Peer reviewers can be skeptical (and often are), but not up to the point of replicating the work themselves.
People are broadly aware of this problem, but institutional change is slow. Some solutions being pursued are requirements to open up data sets (from funding agencies, institutions, and journals), and forums where replications (or failures thereof) can be published. The latter aren't really catching on because they can be highly politically fraught.
posted by advil at 7:54 AM on July 8, 2012 [1 favorite]
This is one part that struck me:
The LHC is unique; there isn't really anyway for someone to independently verify the recently announced findings.
On the other hand, the LHC has a uniquely massive number of collaborators, who have access to the data and the tools needed to verify the data analysis (unlike, say, a small research lab tied to a university). With an announcement this important, they've had multiple people look at the data independently (or at least I hope so).
If there is a problem in the theory or execution of the construction of the device there isn't currently any way to ferret that out if the flaw creates results that confirm to our expectations.
My understanding is that the LHC findings are actually confirming earlier findings with other accelerators, which did find something in the same range but didn't have the power to get the statistical significance they were looking for. So yes, there are ways to double-check these findings without building a second LHC.
TL;DR: It seems to me like there is a smaller chance of outright fraud in a large project, when multiple people have access to the data and the tools and know-how to analyze it. Fraud and its less-evil cousin, error, seem to fester in labs where a small number of people can gain control of the flow of data and analysis and prevent other people from checking their work (I'm thinking of the relatively recent case of the lab studying language in tamarins, where Marc Hauser so segregated the work that no one had the tools to verify what their collaborators were producing).
posted by muddgirl at 7:56 AM on July 8, 2012
"The investigation concluded that Fujii’s co-authors, with at least one exception, were unaware of his misconduct. Indeed, it appears he fabricated their signatures in many, if not most instances."
There is absolutely an ethical responsibility on the part of his 'co-authors' to step up and say, "Hey, I didn't contribute to this paper, I didn't review it, I don't stand by it, don't put my name on it." I know why they didn't (because names-on-papers is a good thing, and if someone offers you a co-authorship for no work, it's tempting).
The LHC is unique; there isn't really anyway for someone to independently verify the recently announced findings.
On the other hand, the LHC has a uniquely massive number of collaborators, who have access to the data and the tools needed to verify the data analysis (unlike, say, a small research lab tied to a university). With an announcement this important, they've had multiple people look at the data independently (or at least I hope so).
If there is a problem in the theory or execution of the construction of the device there isn't currently any way to ferret that out if the flaw creates results that confirm to our expectations.
My understanding is that the LHC findings are actually confirming earlier findings with other accelerators, which did find something in the same range but didn't have the power to get the statistical significance they were looking for. So yes, there are ways to double-check these findings without building a second LHC.
TL;DR: It seems to me like there is a smaller chance of outright fraud in a large project, when multiple people have access to the data and the tools and know-how to analyze it. Fraud and its less-evil cousin, error, seem to fester in labs where a small number of people can gain control of the flow of data and analysis, and prevent other people from checking their work (I'm thinking of the relatively recent case of the lab studying language in tamarins, where Marc Hauser so segregated the work that no one had the tools to verify what their collaborators were producing).
posted by muddgirl at 7:56 AM on July 8, 2012
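The point about too few people checking the data is, in miniature, how Fujii was eventually caught: Carlisle's 2012 analysis asked whether the baseline data in his papers were statistically plausible at all. The following is a toy Python sketch of that core intuition only, not Carlisle's actual method or code; the function names and the crude flagging rule are my own inventions. Under genuine randomization, baseline comparisons between groups yield p-values spread roughly uniformly over [0, 1]; fabricated "too balanced" baselines pile up near 1.

```python
import math

def two_sample_z_pvalue(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Approximate two-sided p-value for a difference in baseline means."""
    se = math.sqrt(sd_a**2 / n_a + sd_b**2 / n_b)
    z = (mean_a - mean_b) / se
    # Two-sided normal tail probability via the complementary error function.
    return math.erfc(abs(z) / math.sqrt(2))

def too_balanced(pvalues, threshold=0.9):
    """Crudely flag a set of trials whose baseline p-values cluster near 1.

    Carlisle used formal distributional tests; this 3x-expected rule is
    just an illustration of the idea.
    """
    high = sum(1 for p in pvalues if p > threshold)
    expected = len(pvalues) * (1 - threshold)
    return high > 3 * expected

# Eight hypothetical "trials" whose groups match almost perfectly at
# baseline: individually unremarkable, collectively implausible.
suspicious = [two_sample_z_pvalue(70.0, 10.0, 30, 70.0 + d, 10.0, 30)
              for d in [0.01, 0.02, 0.01, 0.03, 0.02, 0.01, 0.02, 0.01]]
print(too_balanced(suspicious))  # → True
```

No single paper here would look fraudulent on its own; the signal only appears when someone pools the p-values across many papers, which is exactly the kind of cross-checking a lone gatekeeper of the data can prevent.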
The LHC is unique; there isn't really any way for someone to independently verify the recently announced findings.
ATLAS and CMS are two separate and independent detectors on LHC, with concurring results.
posted by zamboni at 9:12 AM on July 8, 2012 [3 favorites]
On the other hand, the LHC has a uniquely massive number of collaborators, who have access to the data and the tools needed to verify the data analysis (unlike, say, a small research lab tied to a university).
They also have two main experiments, ATLAS and CMS, taking data from the same beam. Fermilab also had two experiments, CDF and D0, taking data. When you have, by far, the highest energy beam in the world (as the LHC does and the TeVatron did for many years,) no other lab can confirm your result, so you need to run multiple experiments on the same beam so they can crosscheck and confirm.
posted by eriko at 9:23 AM on July 8, 2012 [1 favorite]
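The value of a second, independent detector on the same beam can be made concrete. This is a minimal sketch using Stouffer's method for combining independent z-scores of the same effect; the numbers are purely illustrative, not actual ATLAS or CMS figures.

```python
import math

def stouffer(z_scores):
    """Combine independent z-scores that measure the same effect."""
    return sum(z_scores) / math.sqrt(len(z_scores))

# Two independent 3.5-sigma excesses combine to roughly 4.95 sigma,
# while a fluke or instrumental artifact in one detector alone would
# not be reproduced in the other.
print(round(stouffer([3.5, 3.5]), 2))  # → 4.95
```

This is why crosschecking experiments matter: independent detectors share the beam but not each other's systematic errors, so concurring results both raise the combined significance and rule out single-detector artifacts.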
The Anil Potti case is instructive—his work had proven serious statistical problems, but Duke didn't start taking it seriously until he was found to have placed a clear lie about being a Rhodes Scholar on his resume.
Interesting, it sounds like Fujii’s case is similar. The Retraction Watch article suggests that the event that triggered the unraveling of Fujii's fraud was that he submitted a manuscript found to contain plagiarism to the Canadian Journal of Anesthesia. Maybe it is institutionally difficult to investigate suggestions that research might be fraudulent without first having evidence of dishonesty in other aspects of a researcher's career?
I know why they didn't (because names-on-papers is a good thing, and if someone offers you a co-author for no work, it's tempting).
As strange as it might sound, some of Fujii’s co-authors claim that they were unaware that papers had been published listing them as co-authors. Apparently at least some of the papers with listed co-authors had been submitted by Fujii without notice to the co-authors, with forged co-author signatures, and they did not receive re-prints of accepted papers. A colleague has forwarded me an account in which a co-author of Fujii's claims that they did not notice that the papers listing them as a co-author even existed until they were contacted as part of this investigation (seems plausible, if somewhat improbable to me).
posted by RichardP at 9:49 AM on July 8, 2012 [1 favorite]
I think that advil hit on the main issue above: there's no motivation for replicating experiments. If there were some way to increase the prestige of replication, it would provide both the needed resources to combat fraudulent research and a publishing outlet for those looking for research topics.
Isn't the standard for investigative journalism "two independent sources"? Imagine if the scientific community didn't put much faith in published results until they had been replicated by an independent research effort, ideally two of them, and if scientific journals published a section of replications - or refutations! - in every issue. It would never have the same prestige as initial findings, sure, but there should be some respect for being the first team to decisively replicate important findings, or pride in gaining a reputation for exposing data manipulation in papers claiming outrageous discoveries.
No idea how to make that change come about, but wouldn't it start adding value to the community right away? More scientists familiar with each study, knowledge spread across independent teams, researchers teaming up with former "replicators" on new projects, maybe even encouraging a new career path from grad student to running your own replication study to being PI on new research? Maybe I'm being too idealistic.
posted by ceribus peribus at 10:13 AM on July 8, 2012
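The back-of-the-envelope arithmetic behind the "two independent sources" standard is worth spelling out. Assuming, generously, that studies go wrong independently with a 5% false-positive rate:

```python
alpha = 0.05                 # chance a single study is a false positive
both = alpha ** 2            # two independent efforts both wrong by chance
all_three = alpha ** 3       # original finding plus two replications
print(round(both, 6), round(all_three, 9))  # → 0.0025 0.000125
```

The independence assumption is doing all the work here: replications sharing the original's data, materials, or (as in Fujii's case) its author provide far less protection than the exponents suggest.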
There is absolutely an ethical responsibility on the part of his 'co-authors' to step up and say, "Hey, I didn't contribute to this paper, I didn't review it, I don't stand by it, don't put my name on it." I know why they didn't (because names-on-papers is a good thing, and if someone offers you a co-author for no work, it's tempting).
My impression from at least some of the articles was that some of the "co-authors" didn't know they had "contributed." If you don't use periodical indexes to check your own publication record, you might never notice, even if you read the article (if the author list is long enough)....
posted by GenjiandProust at 10:33 AM on July 8, 2012
There's no way that a journal can routinely check for more than the most obvious frauds. It would just be impossible: they don't have the original data, access to the equipment or reagents, the expertise, the funding, or the time. And after this expense, nothing would stop the authors from shopping the paper around to another journal, which would have to repeat the above expense or miss the fraud.
Well, and fair enough. On the other hand, this means that the vast number of journals published by for-profit publishers are able to not only outsource all the "creative" and most of the editing work for their product, but get to claim prestige when their contents are correct but disavow fault when their contents are not. I'd be more sanguine if all journals were published by small societies with limited resources and a dedication to the field rather than profit....
posted by GenjiandProust at 10:39 AM on July 8, 2012
Yeah, I read that "some" of the co-authors were unaware, but some were aware or partially aware.
Also, vanity-google yourself, scientists! Or at least vanity-PubMed-search or whatever database serves your research niche! You at least should know who is citing you, even if no one is falsifying your signature to authorship statements.
posted by muddgirl at 12:51 PM on July 8, 2012
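The vanity-PubMed-search advice can be semi-automated. This sketch builds a query URL against NCBI's public E-utilities esearch endpoint (the endpoint and `db`/`term`/`retmode` parameters are standard E-utilities conventions, but the helper function itself is my own invention); actually fetching the URL, which requires network access, returns a JSON list of PubMed IDs for papers naming you as an author.

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def author_search_url(last_name, initials, retmax=100):
    """Build an esearch URL for papers listing the given author."""
    params = {
        "db": "pubmed",
        "term": f"{last_name} {initials}[Author]",
        "retmode": "json",
        "retmax": retmax,
    }
    return f"{EUTILS}?{urlencode(params)}"

print(author_search_url("Fujii", "Y"))
```

Checking such a query against your own CV every few months would catch a forged co-authorship like the ones described above; a name as common as "Fujii Y" will of course also return other authors, so the results still need eyeballing.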
GenjiandProust: " On the other hand, this means that the vast number of journals published by for-profit publishers are able to not only outsource all the "creative" and most of the editing work for their product, but get to claim prestige when their contents are correct but disavow fault when their contents are not. I'd be more sanguine if all journals were published by small societies with limited resources and a dedication to the field rather than profit...."
I can see this making sense from a librarianey perspective, but Elsevier and their ilk are not the industry; they've just wiggled their way into owning much of it. There is a reason they've managed to insert themselves.
Large, high-powered journals cost a lot of money and need a lot of volunteer time to operate: they pay their editors, they pay for printing, they pay for copy editing, they pay for staff who assist the editors, they pay for staff who manage the peer review, they pay for the billing that they or their parent organization handles, they need peer reviewers, they need volunteer associate editors, and the non-profit ones usually also pay for the more specific, less high-powered journals that are built to lose money.
Smaller, more niche journals don't have the same kind of expenses. All of their editors are generally volunteers who are successful enough to have a name but not so successful that being an editor doesn't impact their CV; they generally have printing costs, but small-scale printing isn't as outrageous as it used to be; and they generally don't pay their reviewers or authors. However, someone still needs to copy edit the amazing ESL work that comes in for proper English - there is so much that is close enough to good English that you can't really turn it away, yet not quite good enough to print as-is. Someone still needs to manage the peer-review process, which can get complicated and often messy fast. Someone still needs to go through the modern equivalent of typesetting the pages, which is not what it used to be but nothing like trivial. Someone also needs to manage the website, deal with spam, handle administrative things like paying for stuff, arrange for advertising from corporations, and the million other little things that need doing.
The kinds of editors-in-chief who will lend credibility to a journal already have profoundly busy lives; hell, in order to attract them, a small journal generally needs to provide paid administrative assistance to help them deal with the purely editorial stuff they need to do. Good luck getting a volunteer editorial board to do this kind of shit either. Professional staff is absolutely necessary, and hiring professionals is and should be expensive.
All of this requires complex, and often really fragile, systems to generate the income as well as the free but profoundly specialized labor necessary to keep it all running. Elsevier and the like generally acquire journals by swooping in when an old editor-in-chief who did way too much for too long dies, or when an asshole gets the wrong position and is not worth dealing with for anybody, or simply when no one in the community is willing to step forward anymore. Elsevier then makes it very easy for everyone involved, and reaps massive profits.
One of the best answers to making Elsevier go fuck itself is to make it unnecessary. Anything we do to make running a journal less of a massive pain in the ass is another piece of buffer between communities with journals and giving in to the dark side. I'm sure Elsevier would be overjoyed to add another layer of administrivia that they could easily deal with but poor Associate Professor McRopedintothis couldn't.
posted by Blasdelb at 1:12 PM on July 8, 2012 [3 favorites]
Problem With the Specialty or Statistical Cluster?
This passage from Blasdelb's last link is interesting to me in light of the fact that a number of anesthetics are NMDA receptor antagonists:
The Fujii scandal marks the biggest and most recent, but hardly the first, major misconduct probe involving anesthesiologists. In 2009, Scott Reuben, then of Baystate Medical Center in Massachusetts, was found to have fabricated data and misused grant money—fraud for which he spent six months in federal prison. That was followed by news that Joachim Boldt, a leading German critical care specialist, had failed to obtain ethics approval in scores of studies, nearly 90 of which have been retracted. Boldt also appears to have fabricated findings in at least one paper, which A&A retracted in 2010.
In fact, of the 2,200 papers that journals have retracted since 1970, Reuben, Boldt and Fujii—assuming the 172 articles found to be fraudulent are pulled—account for roughly 285, or nearly 13%.
Anesthesiologists “have an absolutely horrifying track record in terms of retractions,” said R. Grant Steen, a researcher who studies publishing ethics.
I really wonder whether anesthesiologists aren’t feeling like the ground is shifting underneath them.
NMDA receptor antagonists are a class of anesthetics that work to antagonize, or inhibit the action of, the N-methyl d-aspartate receptor (NMDAR). They are used as anesthesia for animals and, less commonly, for humans; the state of anesthesia they induce is referred to as dissociative anesthesia. There is evidence that NMDA receptor antagonists can cause a certain type of neurotoxicity or brain damage referred to as Olney's Lesions in rodents, though such damage has never been observed in primates like humans.
In addition to the anesthetics mentioned by the Wikipedia article, the very widely used halothane and its analogues also display NMDA receptor antagonist activity.
Several synthetic opioids function additionally as NMDAR-antagonists, such as Meperidine, Methadone, Dextropropoxyphene, Tramadol and Ketobemidone.
Some NMDA receptor antagonists, including but not limited to ketamine (K), dextromethorphan (DXM), phencyclidine (PCP), and nitrous oxide (N2O) are popular as recreational drugs for their dissociative, hallucinogenic, and/or euphoriant properties. When used recreationally, they are classified as dissociative drugs.
As recreational drugs, the NMDA receptor antagonists are notable for their acute disinhibiting effects, and chronic use may have permanent or semi-permanent effects on the brain:
Even if the hypothesis of gross neural apoptosis proves to be false in humans, NMDA antagonists certainly have potential to permanently alter synaptic structure due to effects upon long term potentiation, which NMDA plays a crucial role in. Perhaps, with repeated usage, this would manifest, due to tolerance, thus downregulation, of the NMDA receptor system. This could feasibly alter the function/relationship of various structures, specifically the ventral visual stream, which is a likely cause of the anecdotal reports of hallucinogen persisting perception disorder (HPPD) from such chronic users.
If Fujii's ordeal were to be a criminal trial rather than the death of a thousand paper cuts which he is now enduring, and I were his defense attorney, my entire defense would be that my client had committed these crimes as a direct result of long-term occupational exposure to these potent brain-altering vapors - the anesthesiological isomorphism of the Twinkie Defense, in other words.
Olney's Lesions have not yet been proven or disproven to manifest in humans. No tests have been conducted to test the validity of post-dissociative development of vacuolization in human brain tissue, and critics claim that animal testing is not a reliable predictor of the effects of dissociative substances on humans:
The evidence is that ketamine and many other NMDA-receptor antagonists that have been tested in humans, cause an acute disturbance in neural circuitry that leads to psychotic manifestations. These same drugs cause the same disturbance in neural circuitry in rats and when we look at their brains we see evidence for physical neuronal injury. Since no one has looked at the brains of humans immediately after administering these drugs, we do not know whether the physical neuronal injury occurs.[8]
—John Olney, Private correspondence
posted by jamjam at 1:39 PM on July 8, 2012
Oh god you guys are scaring the crap out of me. Have you ever seen someone coming down from propofol-ketamine anaesthesia, where the propofol wears off long before the ketamine? It is moments like that when I think anaesthesiology is just witchcraft.
posted by charlie don't surf at 2:59 PM on July 8, 2012
Anaesthesiology is glorified witchcraft
posted by Blasdelb at 3:23 PM on July 8, 2012 [1 favorite]
At one time I was actually a pre-med student with a work-study job in the anaesthesiology department. If I had been exposed to any specialty other than that, I might not have dropped out. I couldn't stand what they did as research, or their attitude towards suffering patients.
posted by charlie don't surf at 3:33 PM on July 8, 2012
That patients were research subjects. I suppose this is common in "teaching hospitals."
I once had one of their doctors wag his finger at me and complain that my side effects had ruined his perfect data.
posted by charlie don't surf at 9:38 PM on July 8, 2012
This thread has been archived and is closed to new comments
posted by Gyan at 5:36 AM on July 8, 2012