Publish or Perish
May 13, 2012 6:39 AM
Are bias and fraud damaging the existing public trust in scientific and medical research? (previously)
There is a more precise pair of articles entitled Reforming Science by the Editors in Chief of Infection and Immunity and mBio: Methodological and Cultural Reforms and Structural Reforms.
If you contracted HIV, would you rather be treated in 1990 or today?
If your answer is 'today' then, despite all of the bias and fraud and dissembling and bullshit, it appears that the process of doing science still delivers major health improvements.
Which isn't to say that bias and fraud aren't problems, but I'm sick to death of listening to anti-science yahoos using their existence to argue that the process doesn't have value and therefore we don't actually *know* that HIV causes AIDS, or autism isn't caused by vaccines or whatever nonsense it is they're spouting this week.
It's in the nature of science that, over time, it will correct for these errors. It might take longer than we'd like, but it's still faster than using non-scientific methods.
posted by PeterMcDermott at 7:24 AM on May 13, 2012 [35 favorites]
I phrased this as a question, and added the 2009 "existing" link, since Daniel Sarewitz has written rather questionable stuff about "trust" in the sciences before, maybe should've sidestepped him entirely, but hey. And thanks for mentioning Flake's NSF funding amendment early, peacay. I'd forgotten about it.
posted by jeffburdges at 7:31 AM on May 13, 2012
It's difficult to imagine bias and fraud having anything other than a harmful effect, but then again it is impossible to imagine bias and fraud not existing in research or anything else. This means their impact on trust is diminished. In the end, PeterMcDermott is spot-on.
posted by 2manyusernames at 7:34 AM on May 13, 2012
it appears that the process of doing science still delivers major health improvements
In some fields. If you're a chronic patient of some sort, your negative outcomes probably won't be something you'll know about immediately. They'll only come out years later, or when the FDA forces the withdrawal of your drug because people started dying. See: Vioxx, and note the bit about fabricated studies.
posted by immlass at 7:39 AM on May 13, 2012 [1 favorite]
I believe the most pertinent quote from the NYT article is "published retractions had increased tenfold over the past decade, while the number of published papers had increased by just 44 percent" (Nature), which appears to be "a mix of misconduct and honest scientific mistakes" (The Journal of Medical Ethics).
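A rough back-of-the-envelope sketch (in Python) of what those two figures imply for the retraction rate per published paper, assuming both numbers cover the same decade; only the tenfold and 44 percent values come from the quote above, nothing else is from the articles:

# Hypothetical arithmetic: retractions grew ~10x while papers grew ~44%,
# so the retraction rate per paper grew roughly 10 / 1.44 times.
retraction_growth = 10.0   # tenfold rise in retractions (the Nature figure quoted above)
paper_growth = 1.44        # 44 percent rise in published papers

rate_growth = retraction_growth / paper_growth
print(f"Retraction rate per paper grew ~{rate_growth:.1f}x over the decade")
# prints: Retraction rate per paper grew ~6.9x over the decade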
Yes, HIV outcomes are massively better today than in 1990, but HIV attracted exceedingly good researchers, and the underlying research mostly predates the decade being discussed here.
posted by jeffburdges at 7:45 AM on May 13, 2012 [1 favorite]
I get the pros of publishing negative results, but have people not heard the phrase, "looking for a needle in a haystack"?
Couple this with the fact that, in a complex experiment, it is easier to screw up your design and get a negative result that should have been positive (than the other way around), so a negative result isn't necessarily as meaningful as a positive one. Add in a dash of "you tell me what p-value you're using and I'll tell you the odds of a false positive or negative based purely on chance," and pretty soon spending two weeks in the lab to avoid wasting two hours in the library isn't going to be as ironic as it is today.
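A minimal simulation of the p-value point, assuming the conventional 0.05 threshold and experiments where there is genuinely nothing to find; the group sizes and the number of runs are arbitrary choices for illustration:

# Sketch: how often a two-sample t-test "finds" an effect that isn't there,
# purely by chance, at a given p-value threshold.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05            # the p-value threshold you chose
n_experiments = 10_000
false_positives = 0

for _ in range(n_experiments):
    # Both groups come from the same distribution, so any "significant"
    # difference is a false positive by construction.
    a = rng.normal(0.0, 1.0, size=20)
    b = rng.normal(0.0, 1.0, size=20)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        false_positives += 1

print(f"False positive rate: {false_positives / n_experiments:.3f}")  # ~0.05, about 1 in 20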
posted by Kid Charlemagne at 8:00 AM on May 13, 2012 [1 favorite]
Something else to consider is that in the past decade or so we moved to instrumentation where your data comes out as a nice computer file, on a hard drive capable of holding a couple million experiments' worth of data, or maybe on a network share somewhere. Not too long ago, you shoved a cuvette into the instrument and hand-copied the resulting data into your notebook. If you were really thorough, you might have a giant box of poorly labeled floppy disks somewhere with all sorts of backup data on it, in a format you don't know how to read, and some of those might still be readable. Maybe.
So there is a real, strong possibility that what has changed is the ease of proving fraud: the number of retractions published recently is closer to what it should be, and instead of catching maybe 50% of fraudulent papers as we do today, we were only catching 5% of them in the past.
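A hedged sketch of that arithmetic: hold the share of fraudulent papers constant, let the detection rate move from the 5% to the 50% hypothesized above, and retractions rise by an order of magnitude on their own. The paper counts and fraud rate below are made-up illustrative numbers; only the 44% growth figure and the 5%/50% detection rates come from the thread:

# Constant fraud, better detection: retractions rise anyway.
papers_then, papers_now = 1_000_000, 1_440_000   # 44% growth in output, per the Nature figure above
fraud_rate = 0.02                                # assumed constant share of fraudulent papers
detect_then, detect_now = 0.05, 0.50             # hypothesized detection rates, past vs. present

retractions_then = papers_then * fraud_rate * detect_then   # 1,000
retractions_now = papers_now * fraud_rate * detect_now      # 14,400
print(f"Retractions rose {retractions_now / retractions_then:.1f}x "
      f"with no change in the underlying fraud rate")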
posted by Kid Charlemagne at 8:15 AM on May 13, 2012 [15 favorites]
Political support for medical research is driven by very basic hopes and fears, so people will always trust and fund it. It just has to hold out some kind of plausible hope, which is not hard for it to do.
The real reason to fix this is that it's undermining the purpose for which people entered the field. From the NYT article:
“You can’t afford to fail, to have your hypothesis disproven,” Dr. Fang said. “It’s a small minority of scientists who engage in frank misconduct. It’s a much more insidious thing that you feel compelled to put the best face on everything.”
For these kinds of concerns to override scientific values (and they do heavily override them, at least in some fields of biomedical research) is of course a serious ethical failure, but a deeper problem is that it's corrosive to the morale and career prospects of the researchers who actually give a shit about the scientific and medical challenges they're studying.
posted by Estragon at 8:19 AM on May 13, 2012 [6 favorites]
a deeper problem is that it's corrosive to the morale and career prospects of the researchers who actually give a shit about the scientific and medical challenges they're studying.
It also sucks pretty badly to be a patient relying on that research and treatments to find relief from whatever ails you, or to keep a disease from killing you.
posted by immlass at 8:33 AM on May 13, 2012 [2 favorites]
Overall, I agree with PeterMcDermott that the whole foundation of the scientific method is that it is open to proving itself wrong, and I have faith in it to root out errors, whether they be accidental or intentional.
That said, I think the peer review process is far from perfect. I've peer-reviewed manuscripts and provided pages of comments, and then I'll see the other peer-reviewer comments and they'll have one paragraph that makes it look like they barely read the paper. On top of that, I don't think it's my job as a peer reviewer to check for plagiarism or fraud.
There are a couple of promising models that are becoming more accepted--one is the open access philosophy of "open peer review," where peer reviewers' comments (and identities) are freely available for all published papers. That way, anyone can see how carefully a paper was reviewed prior to publication. Another idea that seems promising is "public peer review," which also seems like it can overcome some of the publication bias problem, since practically all papers submitted become available for critique, regardless of whether their results were positive or null.
posted by gubenuj at 8:35 AM on May 13, 2012 [4 favorites]
immlass: You're right of course, but most of the articles Jeff has linked are talking about pretranslational research. I know the exceptions are extremely grave, but it's rare for the biases under discussion to persist all the way to patient care because, as his fifth link notes, willful ignorance about how your core product affects your customers is generally bad for business.
posted by Estragon at 8:45 AM on May 13, 2012
One problem is the "man bites dog" problem. There aren't any news items created about an honestly reported drug trial; there are about fraudulent ones.
posted by eriko at 9:33 AM on May 13, 2012 [1 favorite]
As far as I know, the only scientists who are money mongers and sell-outs are the economists at Columbia and Harvard. Most industry-backed "research" never makes it farther than the FDA because respectable, peer-reviewed journals employ rigorous vetting processes to maintain academic standards.
posted by lotusmish at 10:08 AM on May 13, 2012
This seems to be a problem that is affecting one particular field: drug research in biology. That's probably one of the most directly monetizable areas in science: you develop a drug, you get a patent, and then you make your money.
Usually, that's not called "science"; it's called "engineering." It's like the o-ring problem on the Space Shuttle. It wasn't bad, politicized science that caused the problem, it was bad, politicized engineering.
In what other scientific fields do you have billions of dollars on the line depending on the apparent results, as opposed to the actual results? I mean, if you find oil (using geology), you could get a ton of money, but only if the oil is actually there.
posted by delmoi at 10:28 AM on May 13, 2012 [2 favorites]
No, delmoi, it's affecting basic research, as well. See for instance The Heart of Research is Sick by Peter Lawrence.
posted by Estragon at 10:51 AM on May 13, 2012
There are regularly retractions outside molecular biology and medicine that make it into Retraction Watch, delmoi, although those fields receive the most coverage.
I believe a higher percentage of researchers engage in "careerist" activities that harm their discipline today, Kid Charlemagne, meaning salami publication, prejudiced reviewing, data manipulation, plagiarism, etc. In fact, this sounds like exactly what you'd expect when society increases both the number of researchers and the competition amongst them. We probably need reforms like those proposed by Casadevall and Fang to overcome these growing pains.
posted by jeffburdges at 11:29 AM on May 13, 2012 [1 favorite]
Yes, big time. I speak as a former scientific research lab worker. Outright fraud is less prevalent than endless credit-jockeying, resulting in very little scientific bang for the research buck. But shoddy, hurried "research" bordering on fraud is common to the point of, again, prevalence. Focusing on dramatic results like "finding oil" and "new cure for disease x" ignores the much more mundane lives and very un-dramatic research efforts of the vast majority of scientists.
posted by telstar at 12:05 PM on May 13, 2012 [1 favorite]
On the one hand, yeah, bias and fraud taint the reputation of scientific research. On the other hand, the idea that bias and fraud haven't been happening since the beginning of scientific research as a discipline seems like a kind of naive belief in a Golden Age.
Are there actual data that suggest that bias and fraud have increased over time? Because I think that the idea that people are talking about this stuff openly is a net gain, since from my perspective it was happening and being hushed up for centuries.
posted by Sidhedevil at 12:10 PM on May 13, 2012 [2 favorites]
Can we blame science-by-press-release and funding-by-press-release for some of this?
Science became the cool kid on the block and lost its way (though a terrible grant system, a focus on 'translatable to products' research, and personal egos play a role too, I'm sure).
posted by Slackermagee at 12:12 PM on May 13, 2012 [2 favorites]
Delmoi: the reason that this is being described as a structural problem is that across all areas of scientific research, the strong tendency is to publish "exceptional" results, and therefore (perhaps obviously, because your career will be made or broken on "publishable results") to generate those results. The problem, structurally, is that this selects for results that are splashy but nonrepeatable over results that are not sexy but confirm existing understanding. You're not getting into Nature with a paper that says 'we confirm this research from five years ago is accurate in its conclusions'.
In short, whatever the stated goals, the facts on the ground are that research as a career strongly selects for p < .05 (rather than p > .95), which in turn selects for bad, irresponsible and fraudulent publications.
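A minimal simulation of that selection effect, assuming many small studies of a weak but real effect and a filter that only "publishes" results with p < .05; every parameter below is an illustrative assumption, not a claim about any particular field:

# Sketch of the "winner's curse": selecting only significant results
# inflates the apparent effect size, so the splashy findings don't replicate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect = 0.2        # small real difference between groups, in standard deviations
n_per_group = 20
all_effects, published_effects = [], []

for _ in range(5_000):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    _, p = stats.ttest_ind(treated, control)
    observed = treated.mean() - control.mean()
    all_effects.append(observed)
    if p < 0.05:                      # only the "significant" studies get written up
        published_effects.append(observed)

print(f"True effect: {true_effect}")
print(f"Mean effect across all studies: {np.mean(all_effects):.2f}")
print(f"Mean effect across 'published' studies: {np.mean(published_effects):.2f}")
# The published subset overstates the effect by a factor of three or so here.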
posted by mhoye at 12:14 PM on May 13, 2012
Yeah, I do see how that could be a problem, but I guess what I meant is that that kind of thing wouldn't really undermine public trust in science, because that stuff doesn't really hurt anyone or ever get noticed. On the other hand, with medical results the obvious bias is there - and fraudulent actions could have serious health consequences.
I remember a couple years ago, there were these ads for some new acne medication that claimed to "balance your hormones" or something. A couple years later I saw an ad looking for people to join a class action suit for that drug, and listing a bunch of horrible side effects it might have caused.
That kind of thing is obviously going to make people worry. On the other hand, something like discovering a new extrasolar planet and then needing to retract it - people probably won't ever hear about the retraction anyway, it's not going to make them think all science is bogus (but, frankly, maybe they should be a little more skeptical about science stories that are printed in the popular press, which compounds the problem by writing up flashy results without providing much context or ability to understand what's going on)
As for the other problem: Yeah obviously it's a problem that people's careers are based on getting positive results. It makes things somewhat luck based. Having happened to pick the right theory among many possible theories makes you the success, while other scientists who were just as good may have ended up trying to test other ones.
But on the other hand, how do you fix it? Scientists are always going to want personal glory - getting positive results is always going to get more attention from people.
We might want to look into working harder to replicate results, rather than just 'reviewing' them. But obviously scientists would rather spend their time looking for new stuff than simply repeating other people's experiments.
posted by delmoi at 12:53 PM on May 13, 2012
Haha, that should actually read "research as a career strongly selects for P less than .05 rather than P greater than .95, which in turn selects for bad, irresponsible and fraudulent publications", but apparently the other problem with science is that HTML filtering makes talking about stats 900% more difficult than it needs to be.
posted by mhoye at 1:13 PM on May 13, 2012 [2 favorites]
Yeah, I do see how that could be a problem, but I guess what I meant is that that kind of thing wouldn't really undermine public trust in science, because that stuff doesn't really hurt anyone or ever get noticed.
Everything about that statement is wrong.
From Reuters, and very, very widely republished:
During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 "landmark" publications -- papers in top journals, from reputable labs -- for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development.
Result: 47 of the 53 could not be replicated. He described his findings in a commentary piece published on Wednesday in the journal Nature.
"It was shocking," said Begley, now senior vice president of privately held biotechnology company TetraLogic, which develops cancer drugs. "These are the studies the pharmaceutical industry relies on to identify new targets for drug development. But if you're going to place a $1 million or $2 million or $5 million bet on an observation, you need to be sure it's true. As we tried to reproduce these papers we became convinced you can't take anything at face value."
The failure to win "the war on cancer" has been blamed on many factors, from the use of mouse models that are irrelevant to human cancers to risk-averse funding agencies. But recently a new culprit has emerged: too many basic scientific discoveries, done in animals or cells growing in lab dishes and meant to show the way to a new drug, are wrong.
Fraudulent publication really does undermine public trust in science, it really does get noticed, and it really does kill people.
posted by mhoye at 1:20 PM on May 13, 2012 [1 favorite]
It's in the nature of science that, over time, it will correct for these errors.
Yes, but reduced levels of bias and fraud would allow errors to be corrected faster.
posted by justsomebodythatyouusedtoknow at 1:32 PM on May 13, 2012
...the whole foundation of the scientific method is that it is open to proving itself wrong, and I have faith in it to root out errors, whether they be accidental or intentional.
What I also have faith in is immense corporate greed resulting in companies refusing to fund, or intentionally obfuscating, any research that impinges on their profits.
posted by BlueHorse at 1:36 PM on May 13, 2012 [3 favorites]
This seems to be a problem that is affecting one particular field: drug research in biology. That's probably one of the most directly monetizable areas in science: you develop a drug, you get a patent, and then you make your money.
Usually, that's not called "science" It's called "engineering"
How so? It sounds like science. What difference does it make anyway? Engineering is just applied science, even if you can define it away from science and into engineering.
It's like the o-ring problem on the Space Shuttle. It wasn't bad, politicized science that caused the problem, it was bad, politicized engineering.
How is it like the o-ring situation?
In what other scientific fields do you have billions of dollars on the line depending on the apparent results, as opposed to the actual results? I mean, if you find oil (using geology), you could get a ton of money, but only if the oil is actually there.
You could certainly make money from presenting evidence of oil being present even if the oil turned out not to be present. Indeed, the value of the global oil sector is based on the summation of published data which may well not be rooted in reality. Global trading in carbon is linked to science with a considerable risk factor attached as to actual cost.
posted by biffa at 2:44 PM on May 13, 2012
We've never discussed all the fake medical, climate, etc. journals run by Elsevier for companies like Merck?
posted by jeffburdges at 5:00 PM on May 13, 2012
Everything about that statement is wrong. ... Fraudulent publication really does undermine public trust in science, it really does get noticed, and it really does kill people.
Except I was talking about fraudulent publication outside of the field of drug research.
I wrote:
This seems to be a problem that is affecting one particular field: drug research in biology. That's probably one of the most directly monetizable areas in science: you develop a drug, you get a patent, and then you make your money.
Then when someone pointed out there are problems in other fields, I said those problems don't have an impact on the public trust in science, whereas biology and drug research do. And your example was exactly the kind of thing I was saying did undermine public confidence.
Before you say "Everything about that statement is wrong," you should try to figure out what the statement actually says, because your link actually proves the point I was making: that fraud in medicine, particularly drug research, is undermining confidence in science, but other fields are not having the same impact.
posted by delmoi at 6:58 PM on May 13, 2012
So, half of my research dollars are wasted, I just don't know which half?
Familiar problem.
posted by dglynn at 7:45 PM on May 13, 2012
We've never discussed all the fake medical, climate, etc. journals run by Elsevier for companies like Merck?
I mentioned it once upon a time. I remember reading up on that and thinking that the papers in there were probably not written by the sales critters who commissioned the thing, but were real reports gleaned from Merck's document management system (and maybe gussied up a bit).
In a truly just world, the scientists who got to have their name associated with that mess should have gotten one of the sales people's gonads in a jar of formaldehyde. (But I have no strong feelings on the matter.)
posted by Kid Charlemagne at 9:25 PM on May 13, 2012
mhoye: From Reuters, and very, very widely republished:
Begley and Ellis' paper was recently discussed here on Metafilter, and I think it would be worth reading that discussion. I think it also would repay reading the original Begley & Ellis article [Nature 483:531-533 (2012)].
The basic problem with that paper is fairly important. It presents zero data. These guys, and Reuters, want us to believe their unsupported claims. Now, that's a problem.
posted by dmayhood at 6:43 PM on May 14, 2012
I'm disappointed jessamyn doesn't write shell style variables more frequently.
posted by jeffburdges at 9:12 PM on May 14, 2012
This thread has been archived and is closed to new comments