Filtering the likely from the unlikely involves some art, some science, and a whole lot of educated guesswork by pros who do this for a living. And even they don't agree on how to weigh this factor versus that factor. A recent story by Rob Daves lays this out much better than I ever could. I recommend you click that mouse and check it out if you've ever wondered about presidential trial heats, why some differ from others, and what the hell a likely voter is. As Daves writes:
Pollsters use different "likely voter" models and those who have been around the block a few times keep track of how their models perform in various elections. They use some for high-turnout elections; some for low. Some use screens to eliminate unlikely voters. Some weight all respondents, counting likely voters' responses more and those less likely to vote less. There's no industry standard "right way" to model a likely electorate, and virtually all pollsters have their favorite method.
Okay, fine. Different screening and filtering methods, different ingredients in that witch's brew, and you get differing results. This matters, especially if you buy the notion that polls create a bandwagon effect.
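To make the distinction concrete, here is a minimal sketch of the two approaches Daves describes: a screen, which drops respondents below some turnout cutoff, versus a weighting model, which keeps everyone but counts likelier voters more. The respondents, the cutoff, and the turnout probabilities are all hypothetical, invented for illustration; no real pollster's model is this simple.

```python
# Hypothetical sample: (candidate preference, estimated probability of voting).
respondents = [
    ("A", 0.9), ("B", 0.8), ("A", 0.3), ("B", 0.6), ("A", 0.5),
]

# Approach 1: a screen. Drop anyone below the turnout cutoff,
# then count the remaining respondents equally.
CUTOFF = 0.5  # hypothetical threshold
screened = [(cand, p) for cand, p in respondents if p >= CUTOFF]
share_a_screened = sum(1 for cand, _ in screened if cand == "A") / len(screened)

# Approach 2: weighting. Keep every respondent, but weight each
# answer by that person's likelihood of actually voting.
total_weight = sum(p for _, p in respondents)
share_a_weighted = sum(p for cand, p in respondents if cand == "A") / total_weight

print(f"Candidate A, screened model: {share_a_screened:.1%}")   # 50.0%
print(f"Candidate A, weighted model: {share_a_weighted:.1%}")   # 54.8%
```

Same five respondents, two defensible models, two different horse-race numbers. That is the whole point: the spread between polls can come from the kitchen, not the electorate.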
What people know about a campaign comes in part from the polls they consume. There's a great definition of public opinion that I'll use again and again: "Public opinion is no more than this, what people think other people think." It comes from a play called Prince Lucifer and qualifies as my obscure reference of the day (we academics get points for this). If you look carefully at the trial heat polls on pollingreport.com, you'll see that some survey registered voters and some survey likely voters. That helps explain, at least in part, some of the odd differences.