PPP vs 538
September 13, 2013 8:17 AM
Internet darlings Nate Silver and Public Policy Polling are feuding publicly this week over PPP's decision not to release a polling result that they felt was probably inaccurate. Nate Silver tweeted "VERY bad and unscientific practice for @ppppolls to suppress a polling result they didn't believe/didn't like." And then the Twitter-based snipefest began, with PPP calling Silver's allegations 'absurd' and accusing him of 'jealousy,' while Silver called PPP's actions 'totally indefensible' and accused them of having their 'finger on the scale.'
PPP responds to allegations that they copy the polling results of others with 15 things PPP got right going it alone.
I think Silver's right on this. If PPP had identified a technical or methodological problem in the poll they would be right to not release it. Instead their objection is that the findings conflict with what they expected would happen:
When we got the results back, we found that 33% of Democrats in the district supported the recall. It might be normal for Democrats in Kentucky or West Virginia to abandon their party in those kinds of numbers, but that doesn't happen in Colorado or in most of the rest of the country. That finding made me think that respondents may not have understood what they were being asked, so I decided to hold onto it.
That's not scientific, it's just a pollster trying to avoid embarrassment.
posted by East Manitoba Regional Junior Kabaddi Champion '94 at 8:25 AM on September 13, 2013 [35 favorites]
Has Mr Silver voiced an opinion on his employer's decision to end its partnership with Frontline investigating concussions and football?
"VERY bad and unjournalistic practice for ESPN to suppress an investigation they didn't like..."
posted by notyou at 8:26 AM on September 13, 2013 [6 favorites]
"VERY bad and unjournalistic practice for ESPN to suppress an investigation they didn't like..."
posted by notyou at 8:26 AM on September 13, 2013 [6 favorites]
That's not scientific, it's just a pollster trying to avoid embarrassment.
I think it's a little bit more complicated than that. I appreciate Silver's commitment to process, but with survey design, it's entirely possible that your question isn't valid - that it doesn't measure what it is intended to measure.
Judging something against common sense is certainly fraught, but questioning the validity of your instrument is an important thing too.
posted by entropone at 8:33 AM on September 13, 2013 [8 favorites]
If there was a problem in the methodology, then by all means, hold it back. If the results check out but are unexpected, then it should have been released.
posted by arcticseal at 8:33 AM on September 13, 2013 [2 favorites]
I wish Nate Silver would weigh in on Indi in Victoria.
In other words, I trust the maths dude with no obvious agenda other than facts.
posted by Mezentian at 8:40 AM on September 13, 2013
Science, in brief:
1. State a hypothesis.
2. Design an experiment to test that hypothesis. Explain your methods. Run your experiment.
3. State the results of that experiment.
4. Interpret or discuss the results.
It seems to me that PPP have mixed up steps (3) and (4).
posted by Dashy at 8:40 AM on September 13, 2013 [8 favorites]
If there was a problem in the methodology, then by all means, hold it back. If the results check out but are unexpected, then it should have been released.
I don't know what you mean by 'check out'. If the results are inexplicably off from what you expect, you have a choice: release them and risk looking like you can't poll properly and/or are biased, or go back and look at your methodology.
In this case, PPP have claimed they made a call after getting the results that they thought the results were driven by poor methodology.
I don't see the problem with that. These aren't clinical trials where transparency for transparency's sake has public concern issues.
It's bad practice to suppress results that you don't like because you want them to show something else, or they show up your clients. It's not bad practice to pull work you think is poorly executed and therefore misleading.
Silver appears to be accusing PPP of bad faith though. I'm not qualified to comment on that, but if PPP are to be believed and they think voters have misunderstood the question then I have sympathy with them for not releasing the results.
posted by MuffinMan at 8:45 AM on September 13, 2013 [7 favorites]
I think it is actually a bit tricky. As an analyst I can say that if you get results that just look wrong it is quite difficult to trust in the process. An important part of producing this kind of work is that you look it over with a critical eye and give it a sense check.
With political polls like this there can be a problem in how they are used. My instinct would be to publish the results, but with a caveat to the effect that they seem unusual and there may have been methodological problems. The problem is that people don't tend to read the survey design, or the caveats, with political polls.
It's an important point that they didn't announce they were doing the poll beforehand and they claim to have been concerned about validity, so having done the poll and ended up with surprising results I don't think they are obligated to release it. However, you can't have your cake and eat it too: they should not have mentioned the poll later, after it proved to be correct.
posted by Just this guy, y'know at 8:48 AM on September 13, 2013 [3 favorites]
I understand that PPP thinks that their poll may have been confusing, but when they say something like this:
there's been lots of voter confusion, 33% of Dems supporting recall is red flag your data might be off
How is that any different from 33% of Dems supporting the recall because they are also confused? People vote according to how they understand it, not according to what their perfect non-confused state would be.
Or am I totally missing something?
posted by MCMikeNamara at 8:48 AM on September 13, 2013
I think PPP's point is that the question was misleading - i.e. people didn't actually know what they were answering.
posted by MuffinMan at 8:50 AM on September 13, 2013
I think PPP's point is that the question was misleading - i.e. people didn't actually know what they were answering.
The point was that they thought that might be the case only because bad political analysis, in the absence of other polls, led them to believe there must have been something wrong. Imagine if Fox polled Obamacare and it turned out popular and they said, "Well that just can't be right!"
posted by Drinky Die at 8:53 AM on September 13, 2013 [7 favorites]
It's more like if Fox polled Obamacare and it turned out that 40% of registered Republicans love it. You might think there was a problem with your poll in that situation. I don't know if not releasing it is the right choice.
posted by vogon_poet at 9:00 AM on September 13, 2013 [2 favorites]
So, if you're actual scientists and come up with a result that is almost certainly impossible, say, a certain particle going faster than the speed of light, you release the data if only so other people can help you figure out just what you screwed up, and maybe learn from that. Even if what you screwed up is a cable not being plugged in all the way. It's an important reminder about the importance of the basics, if nothing else.
These guys just care more about saving face than doing good research.
posted by Zalzidrax at 9:05 AM on September 13, 2013 [1 favorite]
This article goes into the worrying issues with PPP's methodology:
http://www.newrepublic.com/article/114682/ppp-polling-methodology-opaque-flawed
posted by mulligan at 9:06 AM on September 13, 2013 [3 favorites]
Has Mr Silver voiced an opinion on his employer's decision to end its partnership with Frontline investigating concussions and football?
You do realize that Nate Silver wasn't actually brought in to run ESPN, right?
posted by yoink at 9:13 AM on September 13, 2013 [5 favorites]
It's pretty clear the two Nates (Silver and Cohn) have never worked in a polling firm. I like them both. But in accusing PPP of bad faith in refusing to release a poll that a reasonable person could conclude was an outlier, they are wrong.
Private pollsters like to talk about the "1 in 20": the typical survey has a reported margin of error based on 95% confidence. "1 in 20" means that even with truly random sampling, you'll hit an outlier once in a blue moon.
When you run hundreds of polls a year, every so often a result will come back that is so unbelievable that it would be total malpractice to release it on its own. If it's a paying client, the ethical way to handle it is to go back into the field and repeat the poll.
What happened here is that PPP opened the curtain a bit on a public poll that no one was paying for. Firms like PPP release those polls for publicity and to show how accurate they are. If they had decided never to release it this kerfuffle wouldn't have happened.
If PPP had been hired on one of these races and said "You're down this huge in a district like this," and I was there, you bet I'd tell them to run it again. Polling an unprecedented recall election is hard, and when the basic fundamentals of a district are so contradicted you really should run it again. PPP made the calculation that it wasn't worth the money to try again but decided to release it anyway, thinking people would appreciate this inside look. That was a stupid decision--of course people will say that they suppressed it for political means.
The two Nates, especially Silver, rely on private pollsters immensely for their work. Nate had the benefit during the 538 years of hundreds of publicly released polls covering tens of thousands of interviews. Even a simple average of the presidential polls would have achieved a substantially similar result. It's noticeable that he stays out of commenting on races where there isn't a trove of public polling, because modeling those electorates is hard.
posted by Hollywood Upstairs Medical College at 9:21 AM on September 13, 2013 [17 favorites]
It's more like if Fox polled Obamacare and it turned out that 40% of registered Republicans love it. You might think there was a problem with your poll in that situation.
In the absence of other polls? I don't know. It used to be a Republican policy. I get the sense Silver's idea is you can't make these decisions without data because relying on instinct about what people should believe is the heart of the punditry version of political analysis that gets so much wrong.
I don't know why it should be a newsflash for people who follow politics that large numbers of Democrats in states like Colorado feel very strongly about gun rights. It's always struck me as a big city/everywhere else divide rather than a true partisan divide.
posted by Drinky Die at 9:22 AM on September 13, 2013 [3 favorites]
Also, MetaFilter is pretty heavy on academics, where publishing weird data and asking for peer review is an accepted, even encouraged practice. When survey research is done outside the ivory tower, your audience doesn't have that kind of scientific literacy or appreciation. There's no peer review--only sharks to chum. So I think it's a bit misguided to assume bad faith when the calculus of self-preservation is different than in an academic setting.
posted by Hollywood Upstairs Medical College at 9:26 AM on September 13, 2013 [8 favorites]
So, if you're actual scientists and come up with a result that is almost certainly impossible, say, a certain particle going faster than the speed of light, you release the data if only so other people can help you figure out just what you screwed up, and maybe learn from that. Even if what you screwed up is a cable not being plugged in all the way.
Except that the geniuses across the street at FOX Particle Research Inc. will not be helping you figure out where your experiment went wrong, but rather will either be citing your work as proof that said particle does indeed go faster than light, or use it to embarrass your lab out of funding, etc.
Real scientists are competitive, but they generally have to do things above board—it's easier to get caught. In politics it's tougher to live on the hope that your mistaken findings won't get misused. Invariably, they will.
Tough call to make, but one that is probably practical.
posted by Blazecock Pileon at 9:32 AM on September 13, 2013 [6 favorites]
I think there's a far bigger problem here than a decision made by some polling company.
posted by Ardiril at 9:32 AM on September 13, 2013
The hypothetical "40% of Republicans love Obamacare" poll brings to mind the thing that PPP is citing as the methodological error here, which is question phrasing. This is kind of a subjective error, as opposed to sampling errors that I'm sure Nate Silver would have no problem citing to nix a poll (and has weighted polls very lowly in his own calculations for).
If you polled Republicans on "Do you approve of Obamacare" I'm sure you'd get the expected 80%+ against. But if you asked them, "Would you approve of a system in which subsidies, funded by taxes on the very wealthy, and a new marketplace system was put in place to ensure that all Americans are able and required to purchase health insurance from the free market," 40% loving the idea doesn't sound insane at all.
The question is how much you should weigh the ignorance of the people you're polling - the "Obamacare?" question would be closer to how Republicans would act in the voting booth, whereas the descriptive question would be used internally by political operatives to figure out what they're doing wrong with selling the program.
Considering that everybody can see what the question was and gauge their reactions to the results accordingly, PPP probably should've released the poll, perhaps with an editorial caveat if it made them feel better.
posted by mellow seas at 9:37 AM on September 13, 2013 [1 favorite]
But in accusing PPP of bad faith in refusing to release a poll that a reasonable person could conclude was an outlier, they are wrong.
The problem here is that they have little to no grounds for assuming the poll is an outlier. Why bother polling at all if you're going to discard polls that don't tell you what you expect to hear? This is how "conventional wisdom" hardens into dogma. The whole point of "scientific polling" is to allow the facts to surprise us--to teach us something that we don't know. That's why PPP's decision here is, as Silver says, "unscientific."
posted by yoink at 9:37 AM on September 13, 2013 [11 favorites]
What sort of errors in phrasing are common when asking a question along the lines of, "Do you support the recall of Sen. Angela Giron?"
posted by Drinky Die at 9:44 AM on September 13, 2013 [2 favorites]
The biggest issue here is that PPP has a thing called "random deletion" where they delete responses in a haphazard and unspecified fashion.
They do this to make results "consistent"
posted by mulligan at 9:46 AM on September 13, 2013 [1 favorite]
The thing that makes me come down on Silver's side was this quote from the second link:
If I'd thought we'd pulled a fast one on the world, I certainly wouldn't have released the poll after the election.
So it looks like PPP just wanted to say, "We're so smart that we don't even think we're that smart!"
That's like your buddy saying, "Oh, man, those lottery numbers from last week are all my kid's birthdays -- I totally would have won that $250 million!"
posted by Etrigan at 9:54 AM on September 13, 2013
Okay, I don't know PPP's history, but the thing about this situation is that there's a ton of sniping, not a ton of information, and I think that at the end of the day - barring any bad blood between the two - Silver and PPP probably actually agree.
Silver tweeted that "unless a pollster thinks there was literally something buggy about their data collection, they ought to publish all results." And what PPP is describing is NOT an error in methodology but, rather, something buggy in their data collection: what they thought was a survey instrument that was not Reliable or Valid. These are important concepts in research - Reliability means that an instrument repeatedly returns similar results, and Validity means that an instrument measures what you want it to measure.
[also important is the issue of confidence levels - the 1 in 20 that Hollywood mentions, that answers can be off due to randomness; as well as "regression to the mean," which means that if you have an interesting result, the next time you measure it, it might be very normal, because upon repeated measurements, things often return to a middle ground] ... these statistical concepts provide important context to this discussion.
Now, the way that PPP did this definitely seems poorly done, and I can see why Silver's unloading both barrels, but seriously, if they weren't Internet Entities and if they sat down over a cup of tea they'd work out their differences.
posted by entropone at 9:55 AM on September 13, 2013
Has Mr Silver voiced an opinion on his employer's decision to end its partnership with Frontline investigating concussions and football?
Yes?
You do realize that Nate Silver wasn't actually brought in to run ESPN, right?
posted by notyou at 9:57 AM on September 13, 2013 [1 favorite]
The problem here is that they have little to no grounds for assuming the poll is an outlier. Why bother polling at all if you're going to discard polls that don't tell you what you expect to hear? This is how "conventional wisdom" hardens into dogma.
They had plenty of grounds to assume it. Again, they didn't bother to repeat the poll, which is the real way of detecting an outlier, but let's look at what they knew at the time.
Unlike scientific blue-sky research, the basic fundamentals of an American election poll have been tried and tested so many times that there are red flags that go off when they are contradicted. Within a given state, there is a realistic range that most demographic and partisan groups will behave within.
Split-ticket and party-abandoning behavior in Appalachia, an example PPP cited, is well-studied both in the academic literature and by practitioners. 33% of Democrats supporting a recall is so unbelievable in a state where such behavior is uncommon that a re-test is the only responsible way to handle it. In a recall, disciplined party voting is a well-studied phenomenon. Most general election polling (primaries are a different beast) in the US is about finding out where in these ranges a candidate is sitting.
It's a lot like finding out that blacks are voting 80% Republican--that's insane; you check again to see if you or the phone bank miscoded something, and if not... poll it again, because that is totally unprecedented.
These are basic sanity checks that any competent political pollster goes through. It's not responsible to go into political polling with a blank-slate attitude; you use these well-known elements of the electorate to sanity check your data and questions. So again, their mistake was not polling again to verify--a re-test would have given them an accurate survey they could release with confidence. If you have a finding this shocking, that's what you do.
But that's not what Nate Silver is saying. He's saying that it was bad-faith bias, which is totally not true. More realistically, they thought their model was so flawed that it wasn't worth the money to do it again.
The biggest issue here is that ppp has a thing called "random deletion" where they delete responses in a haphazard and unspecified fashion.
They do this to make results "consistent"
Now this is definitely a strange practice which I haven't fully formed thoughts on...but definitely not a standard way to weight interviews.
posted by Hollywood Upstairs Medical College at 9:58 AM on September 13, 2013 [3 favorites]
Not sure why this got so heated between the two factions?
If I were doing it, I probably would have chosen to release the possibly flawed poll with a big asterisk beside it. I do feel that one of the Big Issues of Big Science in 2013 is the lack of people releasing negative results of various types.
BUT it's a judgement call - particularly so close to the recall vote when there was no time to run another poll.
The "random deletion" doesn't seem at all unreasonable to me. Here's their explanation:
"PPP uses a random deletion process to achieve an appropriate gender and race balance, which generally involves removing excess cases of white and female voters. PPP then uses a statistical formula to adjust for age imbalances, which creates the final results."
Some classes of pollees are going to be harder to reach than others. I can't see any other practical way to account for this, other than some procedure like "random deletion".
posted by lupus_yonderboy at 10:10 AM on September 13, 2013 [2 favorites]
Here's another discussion of "random deletion" where "experts" seem to agree with me that the procedure is defensible and necessary. (Sorry for the trashy looking site, but the content is semi-reasonable...)
posted by lupus_yonderboy at 10:12 AM on September 13, 2013
The Nates have been really disingenuous about this. Private poll. There was no responsibility to release the result.
Silver has had a grudge against PPP since.... 2009? I can't remember, maybe 2011.
Cohn's TNR article is written to make it appear that no other pollster thinks PPP's methods are acceptable. That is because they are direct competitors with PPP -- a small upstart that has been providing better poll data at literally a tenth the cost that some of the other polling firms offer.
The PPP results for 2012 were actually better than Silver's model. Analysts won't be quite so lauded if pollsters get smart enough to provide such high quality data. The Nates are pulling a hatchet job to protect the income they make from seeming smarter than everyone else. Everyone else is getting smarter, and that buck is in jeopardy.
posted by samofidelis at 10:16 AM on September 13, 2013
The biggest issue here is that ppp has a thing called "random deletion" where they delete responses in a haphazard and unspecified fashion.
They do this to make results "consistent"
posted by mulligan at 11:46 on September 13
You're playing the same game with scare quotes that Silver et al. have. It's a fine practice. If the TNR article explained the method -- throwing out data from respondents from over-represented groups -- instead of scare quoting at us, no one would be upset at all.
posted by samofidelis at 10:19 AM on September 13, 2013 [1 favorite]
Not sure why this got so heated between the two factions?
No fight like a nerd fight.
posted by Hollywood Upstairs Medical College at 10:20 AM on September 13, 2013 [1 favorite]
Nate Silver is getting too enamored of his own celebrity. Sam Wang at Princeton was far better in 2012, and offered good mathematical analysis rather than opinion, which is what Silver used to do.
posted by sonic meat machine at 10:23 AM on September 13, 2013 [4 favorites]
Sorry to post again but here is a link showing an estimate of a house bias for various polling firms, measured on Obama-Romney results. PPP.... Looks pretty damn good.
posted by samofidelis at 10:24 AM on September 13, 2013 [2 favorites]
Some classes of pollees are going to be harder to reach than others. I can't see any other practical way to account for this, other than some procedure like "random deletion"
The more common response is to weight the respondents by inverse response probability. People with demographics that have a 0.75 probability of response* are treated as if they were 1.33 people; people with demographics that have a 0.2 probability of response are treated as if they were five. The real stuff pollsters do is obviously way more complicated.
I have to admit I'm not sure why you'd want to throw out observations from overrepresented groups instead of keeping them all and counting them less, unless they're throwing away the cases before the meat of the survey actually starts or are doing some sort of geewhiz monte carlo thing over multiple iterations of random deletion.
*You can make good estimates of this from census microdata.
posted by ROU_Xenophobe at 10:27 AM on September 13, 2013 [1 favorite]
So I think it's a bit misguided to assume bad faith when the calculus of self-preservation is different than in an academic setting.
This is what I think was running through PPP's mind. I'm sure they'd be more than happy to show everyone the data if it weren't going to end up on Fox and Friends as the 'democratic poll' showing a republican win.
posted by Slackermagee at 10:30 AM on September 13, 2013 [1 favorite]
It's crazy to me that anyone is disagreeing with Nate on this. If you don't like a poll, run it again and release both. Otherwise you're introducing a massive source of bias into your releases and torpedoing your credibility.
posted by gerryblog at 10:37 AM on September 13, 2013
What sort of errors in phrasing are common when asking a question along the lines of, "Do you support the recall of Sen. Angela Giron?"
The actual question was "Will you vote ‘yes’ or ‘no’ on the question of whether Angela Giron should be recalled from the office of State Senator?" And the point of confusion would presumably be over what a "yes" vote means in that election -- does it mean yes, I like this person, retain her, or does it mean yes, I want her recalled?
On the one hand, the best way to get at underlying attitudes in the district (to the limited extent that they exist) would have been to more directly ask something like "Some people think Angela Giron should be removed from office because of her votes on gun control bills, other people think she should remain in office. Which comes closer to your view?" On the other hand, the best way to predict the election would probably be to replicate the question as it's going to appear on the ballot, which might itself be confusing.
posted by ROU_Xenophobe at 10:37 AM on September 13, 2013 [1 favorite]
That's like your buddy saying, "Oh, man, those lottery numbers from last week are all my kid's birthdays -- I totally would have won that $250 million!"
Yea, it's like if he'd gone up to the counter last week and filled out a lottery ticket with those numbers and not actually bought it, and now he's showing it to you like "oh man I would have looked killer smart!" But where in that story is the bit where it is "unscientific and acting in bad faith" for him to not buy the ticket?
posted by jacalata at 10:43 AM on September 13, 2013
Yea, it's like if he'd gone up to the counter last week and filled out a lottery ticket with those numbers and not actually bought it, and now he's showing it to you like "oh man I would have looked killer smart!" But where in that story is the bit where it is "unscientific and acting in bad faith" for him to not buy the ticket?
posted by jacalata at 10:43 AM on September 13, 2013
"Will you vote ‘yes’ or ‘no’ on the question of whether Angela Giron should be recalled from the office of State Senator?"
That seems like perfectly clear phrasing. Yes she should be recalled, no she should not.
I can see where it can often get confusing with things like ballot initiatives.
"Do you support or oppose initiative 432, the initiative on gay marriage?"
But it's an initiative to ban gay marriage so supporting the initiative means opposing gay marriage. That's confusing.
posted by Drinky Die at 10:50 AM on September 13, 2013
It's crazy to me that anyone is disagreeing with Nate on this. If you don't like a poll, run it again and release both. Otherwise you're introducing a massive source of bias into your releases and torpedoing your credibility.
This was a poll they were running solely for their own benefit/street cred if they nailed it. Running polls isn't free; why should they test over and over again if they felt bad about the results of the first one?
In any case, of course Nate wants more polls released all the time. If it weren't for PPP's polls he would have a much tougher job.
posted by DynamiteToast at 10:50 AM on September 13, 2013 [1 favorite]
Hollywood Upstairs Medical College: "It's pretty clear the two Nates (Silver and Cohn) have never worked in a polling firm. I like them both. But in accusing PPP of bad faith in refusing to release a poll that a reasonable person could conclude was an outlier, they are wrong."
It's funny how you say Nate Silver lacks experience with this, and then you explain what you think is going on in terms of statistics and probabilities, the one subject that Nate Silver has proven beyond a doubt he's extremely good at.
posted by Joakim Ziegler at 10:52 AM on September 13, 2013
That seems like perfectly clear phrasing. Yes she should be recalled, no she should not.
You'd be surprised, especially for respondents who aren't paying a lot of attention to the person who's pestering them.
posted by ROU_Xenophobe at 10:54 AM on September 13, 2013 [1 favorite]
The thing is, it's possible for the poll to have been wildly inaccurate even though the outcome was similar to the poll. Most obviously, if the recall succeeded because there were lots of Republicans voting and few Democrats, as opposed to the poll's world where a crapton of Democrats turned out but a third of them voted to recall.
posted by ROU_Xenophobe at 10:58 AM on September 13, 2013
Every one of the Storifys I've seen ends before the next day's tweeting. I don't have the time to assemble one, but I'll recap some of it.
The first day was all about Nate asserting it was unprofessional to not release an unannounced private poll that you felt unsure about. The second day the TNR post got posted, and Nate posted it saying it was "a more worrisome example of PPP" or something. I think this criticism is more fair, because PPP is basically admitting to how unconventional their methodology is. However Nate completely drops the first day's line of attack after this and only criticizes them for info in the TNR link, which kinda makes me feel like Nate took it personally when PPP started getting petty (which @ppppolls does a ton of, often just with Republican trolls though) and listing times they'd beat him on poll results.
Bias alert, I love 538 and Nate Silver but I'm on PPP's side this time.
posted by DynamiteToast at 11:04 AM on September 13, 2013
So I guess a batter criticism of PPP would be that they should stop crowing about getting it right when we don't have the exit polls to establish that they did.
posted by Drinky Die at 11:04 AM on September 13, 2013
I wrote on my blog about an amazing AP poll result saying that 18% of Americans thought Barack Obama was Jewish. I thought that was a really startling result and spent some time weighing various explanations.
The actual explanation was that someone at AP had miscopied a line on a spreadsheet and shifted everything up a row.
I guess what I'm saying is that when a result is weird enough, you have to start asking yourself, "what's more likely -- that this result reflects reality, or that someone at some point miscoded/mistyped/misrecorded/misunderstood something?"
posted by escabeche at 11:15 AM on September 13, 2013 [2 favorites]
Yah, as far as the CO poll goes, I think he doubted the poll and decided to not release it, but then wanted to eat his cake too when it turned out he hit it on the head, so he published it with a subtext of "someone could write an interesting article about NRA messaging blah blah blah" when really he probably wanted people to just ooh and aah that he nailed the recall vote.
So I guess a batter criticism of PPP would be that they should stop crowing about getting it right when we don't have the exit polls to establish that they did.
posted by Drinky Die at 1:04 PM on September 13
A batter criticism would be that he hit a long pop-up and started walking to first, but then the outfielder tripped and he ended up making it to second. While he's proud of the double everyone's scolding him on lack of fundamentals.
posted by DynamiteToast at 11:17 AM on September 13, 2013 [2 favorites]
And then a day later a sports journalist exposes that PPP has been rubbing kittens against his elbows in the locker room to make him hit better. PPP points out that while this is unconventional, it works for him (and is cheaper than steroids I guess).
posted by DynamiteToast at 11:33 AM on September 13, 2013
posted by DynamiteToast at 11:33 AM on September 13, 2013
I would imagine the problem with releasing data that is questionable at best more than likely has something to do with the fact that it's political data. There's a chance (n>0) that somebody somewhere would have picked up on that one piece of data and then proceeded to rehash it as if it were fact, citing PPP and Nate Silver as the authors of said fact, thereby diluting the brand that is PPP and/or Nate Silver.
posted by Blue_Villain at 11:36 AM on September 13, 2013
and then you explain what's you think is going on in terms of statistics and probabilities, the one subject that Nate Silver has proven beyond a doubt that he's extremely good at.
It's true that a lot of these guys know more about statistics than most people know about statistics. But their big contribution to the zeitgeist wasn't some kind of super probability juju. Frankly, you could do almost exactly as well in predicting the election simply by looking at the polling average in each state before the election. The various Nates and others each add their own little flavor to the data but, really, simply taking the raw polling average gets you 90-95% of the way there.
What they contributed was the simple idea that you should believe what the data is telling you. Not your gut. Not the campaign narrative. Not the pundits whose job security depends on making themselves seem useful. The data. It's nice to have a bit of quantitative analysis to go with the raw numbers and the Nates are quite good at making that interesting, but the important part is just to believe what the numbers are saying.
posted by Justinian at 11:38 AM on September 13, 2013 [2 favorites]
I'm surprised at how this FPP is framed and how this conversation has progressed. The real issue here isn't PPP holding the Colorado poll, which is defensible under certain circumstances, but the weighting practices revealed by Cohn's piece.
According to Cohn, it appears that PPP was changing their racial targets for different polls in the same election, weighting by reported past election preferences (without revealing these questions) and perhaps modifying their practices based on their feeling about how the results should look. Their racial weighting changes consistently brought their results more in line with polling averages. These are far less defensible practices.
As Mark Blumenthal puts it:
"What Cohn reports, in essence, is that PPP falls at the extreme of the subjective "gut feeling" end of the pollster spectrum. [...] PPP's defense is that its approach has proven to be "generally accurate," which it has. The problem, however, is that at some point it gets hard to distinguish the pollster's judgements from the poll's measurement of voter preferences."
posted by Vectorcon Systems at 11:41 AM on September 13, 2013 [5 favorites]
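(To make the weighting complaint concrete, here is a minimal sketch in Python of single-variable demographic weighting. Real pollsters rake over several variables at once; the sample counts and target shares below are invented. The point of Cohn's critique is that whoever picks the targets picks a good chunk of the topline.)

from collections import Counter

# Hypothetical raw sample: each respondent's self-reported race
sample = ["white"] * 820 + ["black"] * 90 + ["hispanic"] * 60 + ["other"] * 30
targets = {"white": 0.74, "black": 0.12, "hispanic": 0.10, "other": 0.04}

counts = Counter(sample)
n = len(sample)

# Each respondent is weighted by target share / observed share for their group.
# Change the targets between two polls of the same race, and the weighted
# topline moves with them.
for group, target in targets.items():
    observed = counts[group] / n
    print(f"{group:9} observed {observed:.3f}, target {target:.2f}, weight {target / observed:.2f}")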
Would any of this have made any difference whatsoever? Would anyone have given enough credence to a polling company to have put any more effort into stopping the recall(s)?
"Let's poll people right up to the very end and then watch ourselves lose."
posted by Ardiril at 12:09 PM on September 13, 2013
"Let's poll people right up to the very end and then watch ourselves lose."
posted by Ardiril at 12:09 PM on September 13, 2013
I think the first paragraph of the first link in the FPP says it all, with a few caveats:
We did a poll last weekend in Colorado Senate District 3 and found that voters intended to recall Angela Giron by a 12 point margin, 54/42. In a district that Barack Obama won by almost 20 points I figured there was no way that could be right and made a rare decision not to release the poll. It turns out we should have had more faith in our numbers because she was indeed recalled by 12 points.
Caveats being that the predicted and observed result matching could've been just luck, and that Vectorcon Systems' note is important.
posted by JoeXIII007 at 12:09 PM on September 13, 2013
The really sad part about this is that a bunch of presumed adults think it is appropriate to scold each other in public 140-character sound-bites rather than, you know, picking up the phone and sorting it out in a 5-minute conference call.
What benefit is there to either party to get involved in a public pissing match?
posted by madajb at 12:42 PM on September 13, 2013 [2 favorites]
The really sad part about this is that a bunch of presumed adults think it is appropriate to scold each other in public 140-character sound-bites rather than, you know, picking up the phone and sorting it out in a 5-minute conference call.
What benefit is there to either party to get involved in a public pissing match?
Nate Silver seems to be enjoying his newfound freedom from the NYT, and has been more open about personal opinions since he left, almost as if he had to censor his criticisms of others while working there. Tom Jensen has never been above fighting pigs in the mud on Twitter, much to my annoyance as a follower.
I'm not defending it as a good idea, but as soon as I saw Silver's first tweet I knew the #NerdFight was coming.
posted by DynamiteToast at 12:49 PM on September 13, 2013 [3 favorites]
Upside: no one's paying attention to that "Unskewed" asshole.
posted by Halloween Jack at 1:17 PM on September 13, 2013
It's true that a lot of these guys know more about statistics than most people know about statistics. But their big contribution to the zeitgeist wasn't some kind of super probability juju. ... What they contributed was the simple idea that you should believe what the data is telling you.
On the one hand, yeah, sure. The underlying models Silver or Wang use are pretty simple (i.e., "people usually vote for the person that they just told you they were going to vote for"). The Monte Carlo simulation approach they take to the polls is probably more clever than what the median political-behavior grad student might come up with, but that's about it.*
On the other hand, Silver especially is also adding a real gift for explaining why he's doing what he's doing, and what probabilistic predictions mean, and so on.
*They would be unlikely to actually do so since there's really not much value added in the exercise itself; predicting presidential election outcomes in early November using polls in October is not very interesting or difficult except by comparison to the twaddle that the mass media traditionally delivered. Instead, polisci types would either ignore the electoral prediction and immediately start trying to understand/predict why different people tell you they're going to vote for different candidates, -or- try to predict the outcome with data from spring, or summer, or maybe early September.
posted by ROU_Xenophobe at 1:28 PM on September 13, 2013
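(For the curious, a minimal sketch in Python of the Monte Carlo approach mentioned above. The swing states, margins, safe electoral votes, and error size are all invented, and real models also correlate errors across states instead of drawing them independently.)

import random

# state -> (electoral votes, Dem lead in the polling average, in points)
swing = {"OH": (18, 2.0), "FL": (29, 0.5), "NC": (15, -1.0), "VA": (13, 1.5)}
SAFE_DEM_EV = 217       # hypothetical electoral votes already locked in
POLL_ERROR_SD = 3.0     # assumed std. dev. of state-level polling error

def simulate_once():
    """One simulated election: jitter each margin, tally electoral votes."""
    ev = SAFE_DEM_EV
    for votes, margin in swing.values():
        if margin + random.gauss(0, POLL_ERROR_SD) > 0:
            ev += votes
    return ev

trials = 100_000
dem_wins = sum(simulate_once() >= 270 for _ in range(trials))
print(f"Dem win probability: {dem_wins / trials:.1%}")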
What benefit is there to either party to get involved in a public pissing match?
The only way to be successful today in America is to draw attention to yourself.
posted by one_bean at 3:36 PM on September 13, 2013
I love a good pollster beef. (Until the guns come out...)
posted by Annika Cicada at 5:47 PM on September 13, 2013
I agree with folks who think the way PPP handled the Colorado recall polls thing was unseemly, but it's important to at least engage with PPP's defense that it was a private poll and they withhold those all the time. From the 2nd link:
As a private polling company, the vast majority of the polling we do is not released to the public. We do 1 or 2 public polls a week across the country, that we let you vote to pick on our website. We announce what states we'll be polling and take question suggestions, and we've never not released one of those polls that was intended for public consumption. Most of the polling we do though is either for clients or our own internal purposes and doesn't get released whether it's good for Democrats, good for Republicans, or somewhere in between.
In the case of the Giron recall, this was the first legislative recall election in Colorado history. There's been a lot of voter confusion. We decided to do a poll there over the weekend and decide whether to release it publicly depending on whether the results made sense or not. When we got the results back, we found that 33% of Democrats in the district supported the recall. It might be normal for Democrats in Kentucky or West Virginia to abandon their party in those kinds of numbers, but that doesn't happen in Colorado or in most of the rest of the country. That finding made me think that respondents may not have understood what they were being asked, so I decided to hold onto it. I would have done the same thing if we'd found 33% of Republicans saying they opposed the recall.
posted by mediareport at 6:20 PM on September 13, 2013
This article goes into the worrying issues with ppp's methodology [link added]
That New Republic piece is what sparked this whole thing, really, and it's what's let Nate Silver go off on his weirdly aggressive attacks. Silver's assertion about PPP - "what they're doing barely qualifies as POLLING, if at all" - is a hilarious distortion that makes me think there's something emotional/personal at work here. And the biggest piece of evidence to me that this is personal on Nate Silver's part is this bitchy complaint:
@ppppolls: How well do your methods work when there aren't other polls in the field to "gut check" against?
Wow, really? PPP responded with a long list of races it polled accurately when there were few or no other pollsters paying attention: "15 things PPP got right going it alone." And *that*, ladies and germs, is where the dispute currently rests. We're all eagerly awaiting Nate Silver's oh-so-rational response, I'm sure.
posted by mediareport at 6:33 PM on September 13, 2013 [1 favorite]
One last thing: Nate Cohn's piece in the New Republic attacks PPP's methods as outrageously "ad hoc" while implying that the rest of the polling industry is much more scientific and would *never* *ever* dream of using such "ad hoc" methods.
I call utter horseshit on that one. From what I've read/heard, the way other polling companies decide who's a "likely voter" (for just one example) is at least as much an "ad hoc" hand-wavey process as anything PPP routinely does. And that Nate Silver uses these differences to pronounce from On High about PPP that "they don't poll, they just forecast" is a hoot. Like *that* little line is always completely clear at the other polling companies.
Yeah, right.
posted by mediareport at 6:33 PM on September 13, 2013
The reason I posted this is because I honestly have no idea which of them is more 'right' here and that's so interesting to me! It seems like they both have some good points but also both have an unfortunate tendency to indulge in twitter slapfights. The perfect combination.
posted by showbiz_liz at 7:12 PM on September 13, 2013 [1 favorite]
It's a good post about a fascinating hullabaloo, for sure, showbiz_liz. Back to the fray, here's HuffPollster offering evidence Nate Cohn may have stretched the truth by implying PPP's "ad hoc" methodology is beyond the pale. The reality is a bit more complex, with clear evidence of other pollsters weighting by past voter preferences in their results, including the Pew Research Center and RAND's American Life Panel:
In addition to weighting the completed interviews by demographics each week, [Rand] also weighted respondents’ self-reports of how they voted in 2008 so that the sample matched the actual result: "Based on the premise that the best predictor of future voting behavior is past voting behavior, we also reweight each daily sample separately such that its voting behavior in 2008 matches known population voting behavior in 2008." The final RAND survey showed Obama defeating Romney by a 3.3 point margin (49.5 to 46.2). Obama's final victory margin was 3.9 percent (51.1 to 47.2)
[...Pew] added a weight for the 2008 vote, but to match the average of results on their prior polls in 2012. "We obviously didn't weight it all the way back to the 2008 outcome. But it put us in a position that we don't like to be in which is not letting the data speak to us, but sort of imposing some kind of limit on the range of the data."...The tweak to Pew's standard methods did not immediately appear in their standard demographic disclosure. That information was added later...
So much for the purity of those other polling organizations. The article notes Pew's particular ad hoc change was very different from the changes Cohn describes PPP making, but adds, "the episode is a reminder that in the often messy real world, even one of the most respected names in survey research is capable of making an ad hoc change to its methods."
Again, something else is going on here: what seems to me a blatant attempt by Nate Silver and Nate Cohn to paint PPP as something far afield from other polling companies, strongly implying that ad hoc changes aren't a normal part of the very messy business of polling and making their case using criteria that almost certainly don't hold up. I wouldn't be surprised to see Cohn apologize at some point. Silver, on the other hand...
Well, that would be surprising.
posted by mediareport at 9:38 PM on September 13, 2013
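(A minimal sketch in Python of the RAND-style adjustment quoted above: reweight respondents so that self-reported 2008 vote matches the actual 2008 result. The sample counts are invented and the 2008 shares are rounded.)

# Actual 2008 national result, rounded: Obama 53%, McCain 46%, other 1%
actual_2008 = {"obama": 0.53, "mccain": 0.46, "other": 0.01}

# Hypothetical sample: respondents' self-reported 2008 vote
reported = {"obama": 560, "mccain": 410, "other": 30}
n = sum(reported.values())

# weight = actual share / reported share, applied to each respondent
for choice, share in actual_2008.items():
    weight = share / (reported[choice] / n)
    print(f"reported {choice}: weight {weight:.2f}")

The premise that past vote predicts future vote is baked into the choice of what to match, which is exactly the kind of judgment call under discussion.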
More via political scientist @DrewLinzer, who's been tweeting pointed stuff on the messy business of survey weighting in a way that counters the oddly outraged "how dare they!" approach of Nates Silver and Cohn. Here's his reaction to Nate Cohn's bizarre claim that "No pollster weights to whatever electorate it chooses, undisclosed and subjective":
Sorry but they all do.
More from Linzer's feed about the "pearl-clutching" he's seeing:
@Nate_Cohn Have you ever worked at a polling firm? When you have small samples, there's judgment involved. There has to be.
Poll aggregators like me and @fivethirtyeight sure didn't object to @ppppolls when they were supplying 18% of our state-level data last fall
Looking forward to @Nate_Cohn's equally thorough takedown of the entire polling industry.
Guess what: Polling is hard. The practice can't perfectly match the theory. That doesn't mean polling firms are committing fraud.
He links this from 2012:
Some thoughts on survey weighting
I do think the current theory and practice of survey weighting is a mess, in that much depends on somewhat arbitrary decisions about which variables to include, which margins to weight on, and how to trim extreme weights.
He also provides other links to folks who point out "there's much ad-hoc-ery" in elements like most pollsters' "likely voter" screens. Again, the point is that ad hoc decisions are an ongoing and probably permanent part of survey methodology, and the attack on PPP from Silver and Cohn has some strange and unsupportable elements that also apply - but are not being applied - to other pollsters in general. Finally, I love this snarky exchange with Silver:
@DrewLinzer
So @fivethirtyeight: Are you giving up using @ppppolls for 2014, 2016, onward?
@fivethirtyeight
No but I will try to come up with a weighting method that is more punative to pollsters who calibrate/herd off others
@DrewLinzer 12 Sep
That's good. I trust it will be based on statistical fundamentals rather than results, and not extremely ad hoc.
Ha. Note Silver's continued insistence that PPP is "herding off" other pollsters (in a way the rest of the pollsters don't). Again, I'm looking forward to his response to PPP's sharp rejoinder to that accusation.
posted by mediareport at 10:24 PM on September 13, 2013
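(The 2012 post's point about trimming extreme weights, quoted above, is easy to make concrete. A minimal sketch in Python; the raw weights and the cap are invented, and the choice of cap is exactly the kind of "somewhat arbitrary decision" the post describes.)

raw = [0.4, 0.8, 1.0, 1.1, 2.5, 6.0, 9.5]  # hypothetical raking output
CAP = 4.0                                  # arbitrary trimming threshold

trimmed = [min(w, CAP) for w in raw]
rescale = len(trimmed) / sum(trimmed)      # restore a mean weight of 1
final = [round(w * rescale, 2) for w in trimmed]
print(final)  # two pollsters with different caps report different toplines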
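(As for the "more punitive to pollsters who calibrate/herd off others" method Silver promises in that exchange, here is one hedged guess at what a herding check could look like; this is an illustration, not Silver's actual method, and all the numbers are invented. A pollster whose results hug the field average more tightly than sampling error alone would allow is probably looking over its shoulder.)

import statistics

field_avg = 4.0                     # running average of everyone else's polls
pollster_a = [3.9, 4.1, 4.0, 3.8]   # suspiciously tight around the average
pollster_b = [1.5, 6.0, 3.0, 7.0]   # spread consistent with sampling noise

def herding_score(polls, avg):
    """Std. dev. of deviations from the field average. A value near zero is
    suspicious: sampling error alone should produce a couple points of spread."""
    return statistics.pstdev(p - avg for p in polls)

print(f"A: {herding_score(pollster_a, field_avg):.2f}")  # ~0.1, looks herded
print(f"B: {herding_score(pollster_b, field_avg):.2f}")  # ~2.2, looks independent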
One last thing before bed: Fans of Princeton's Sam Wang might like to know he wrote that the New Republic article "seems like a hit job with a POV" and recommends the email chain between Cohn and PPP's Tom Jensen as "more illuminating."
posted by mediareport at 10:25 PM on September 13, 2013
Ok, just one more. Sam Wang to Nate Silver:
@fivethirtyeight Nate, I have some bad news for you about the tooth fairy. You should sit for this
posted by mediareport at 10:27 PM on September 13, 2013
PPP is one of the most consistently accurate polling firms of the last couple of American political cycles, so even if people want to criticize their methods it's hard to argue with their results. All polling firms futz with the data in attempts to make it more representative, and PPP has a successful track record on that front. Presumably Nate Silver knows this, since I'm pretty sure he displayed the bias and accuracy for several organizations, including PPP, on 538 during the presidential race.
posted by Corinth at 1:19 AM on September 14, 2013
One last thing before bed: Fans of Princeton's Sam Wang might like to know he wrote that the New Republic article "seems like a hit job with a POV"
I'm surprised this hasn't been brought up before. Right wing publication trying to trash the reputation of a perceived-liberal polling firm? It must be a day ending in y. I'm just disappointed Nate Silver went along with it.
posted by dirigibleman at 7:24 AM on September 14, 2013
But it's an initiative to ban gay marriage so supporting the initiative means opposing gay marriage. That's confusing.
Actually, that's the same issue with 'do you support the recall', where 'I like X' means voting no. It's slightly less confusing, but if you're weak on the concept of a recall, which, frankly, a lot of people will be, there's plenty of room for confusion.
posted by hoyland at 8:16 AM on September 14, 2013