The fine folks at Pew pushed out a tweet this morning pointing to their survey question asking people what they know about the so-called fiscal cliff. Only about a quarter of the folks surveyed "say they understand very well what would happen if the automatic spending cuts and tax increases were to go into effect in January." The table is below.
Keep in mind this is self-perception of knowledge, not actual knowledge. The difference is significant because, as we all know, some folks overestimate how much they know about a topic (think of your crazy uncle at Thanksgiving who is an expert in everything). Actual and perceived knowledge do correlate, but not perfectly. That's the fun part, studying the individuals who think they know a lot but actually don't. They tend to be the ones more emotionally involved but who don't consume a lot of news -- or who rely heavily on pseudo-news such as talk radio, etc.
Thursday, November 29, 2012
Wednesday, November 28, 2012
Bill O'Reilly -- Quizmaster
There are a lot of news quizzes out there, the Pew one being the most famous, but did you know Bill O'Reilly has his own news quiz? Me neither. Try out your knowledge here.
Lemme just say I sucked. I got only 5 of 10 correct. I'll blame it on grading, on end-of-semester distractions, on a lack of caffeine. At least I was about average among those who took the quiz.
Tuesday, November 27, 2012
Drinking in the UK -- College Rankings
Yes, I know, today's topic has nothing at all to do with my blog's general theme, but it does include three of my favorite topics: drinking, Oxford, and drinking.
Back in 2006 I taught in the UGA@Oxford program, so the town and its University have special places in my heart. Thus, when I came across this headline today (Oxford Students Outdrunk) I had to pass it on to the tens of people worldwide who read my scribbling.
Here's the bad news -- Oxford was only 41st among U.K. colleges. The good news? Its glass is filling fast. In the last survey, it was only 59th. The full drinking list is here.
Economics and accounting students apparently drink the most, according to the survey of 1,994 students across 74 U.K. universities. Or as one person says: “It seems these turbulent financial times are stressing the accountants and economists out before they’ve even entered the working world.” A student said basically those students were practicing to be social drinkers in the real world.
The most sober majors were nursing, midwifery, and healthcare. I for one find this comforting, but I find the next bit disturbing. "Subjects such as Art and Design, Performing Arts and Music, Journalism and English ranked in the last nine places, consuming less than 17 units. Humanities were ranked at 12th place with 21.1 units per week." Journalism? So low? Pfffft.
Here's what I love about the U.K. They even break the survey down into pints, which makes me thirsty just writing this sentence. In this breakdown, we see a somewhat different ranking with Oxford moving up to 32nd. Go team.
Monday, November 26, 2012
Confounding Science Knowledge with Religion
Here's a fascinating analysis of the accepted scales used to measure scientific knowledge in the public, one that argues that at least a couple of the questions used in these national surveys are actually measuring a dimension of religious belief rather than scientific knowledge (a full pdf may be available to you here).
In other words, asking about evolution and the big bang (the theory, not the very funny television program) may not measure scientific knowledge so much as it measures something called the "Young Earth Worldview," which is the notion the Earth, despite all evidence to the contrary, is only about 8,000 or so years old (for fun, I offer a bit of Inherit the Wind below).
A lot of this paper gets into factor analyses of available survey items to explore whether they measure a single dimension (scientific knowledge) or multiple dimensions. Unless you're a numbers geek like me, it's probably best you avoid the Results section. From a methodological perspective, scholars using these questions "should take special care to account for the religious loading of the “evolved” and “bigbang” items (and to a lesser extent, “condrift”)." In other words, secondary analysts of GSS data, beware. Make sure you're measuring what you think you're measuring, and don't let religious confusion get in the way of a good, solid index.
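If you're curious what that kind of check looks like in practice, here's a minimal sketch (not the paper's analysis): simulate yes/no answers where a few items tap a religious worldview as well as knowledge, then fit a two-factor model and eyeball the loadings. The item names echo the GSS mnemonics quoted above; the responses and loadings are invented purely for illustration.

```python
# A made-up illustration of checking item dimensionality with factor analysis.
# The response matrix is simulated, not real GSS data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
items = ["evolved", "bigbang", "condrift", "electron", "lasers", "radioact"]

n = 500
young_earth = rng.normal(size=n)   # latent religious-worldview dimension
knowledge = rng.normal(size=n)     # latent science-knowledge dimension

# The first three items load on both latents; the last three on knowledge only.
signal = np.column_stack([
    knowledge - young_earth,
    knowledge - young_earth,
    knowledge - 0.5 * young_earth,
    knowledge,
    knowledge,
    knowledge,
])
answers = (signal + rng.normal(size=(n, len(items))) > 0).astype(float)

fa = FactorAnalysis(n_components=2, random_state=0).fit(answers)
for item, loadings in zip(items, fa.components_.T):
    print(f"{item:10s} {loadings[0]:6.2f} {loadings[1]:6.2f}")
```

In a run like this, "evolved" and "bigbang" pick up a second factor the purely knowledge-driven items don't, which is the paper's point about religious loading.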
Friday, November 16, 2012
Pew Surveys vs Exit Polls
The fine folks at Pew have a new survey out, a post-election thing, and I just want to focus on one small sliver of their report. The table is below.
Obviously it has to do with when voters made up their minds. Let's compare this with exit polls of actual voters. The time categories are not exactly comparable. For the exit polls, for example, I collapsed "just today" with "in the last few days" to resemble Pew's "within a week." It works fairly well, as you'll see below.
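For the curious, here's roughly what that recode looks like -- the category labels are my shorthand, and the percentages are the ones in the table below (the exit poll's 3 percent "just today" plus 6 percent "in the last few days" collapse into "Near Election"):

```python
# Collapse the exit poll's finer categories to match Pew's coarser ones.
# Labels are my shorthand; percentages come from the published toplines.
exit_poll = {"just today": 3, "in the last few days": 6,
             "after the debates": 11, "before the debates": 78, "don't know": 2}

collapsed = {
    "Near Election": exit_poll["just today"] + exit_poll["in the last few days"],
    "After Debates": exit_poll["after the debates"],
    "Before Debates": exit_poll["before the debates"],
    "Don't Know": exit_poll["don't know"],
}

pew = {"Near Election": 8, "After Debates": 11, "Before Debates": 76, "Don't Know": 4}
for category in pew:
    print(f"{category:15s} Pew {pew[category]:2d}%   Exit {collapsed[category]:2d}%")
```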
 | Pew Survey | Exit Polls |
---|---|---|
Near Election | 8% | 9% |
After Debates | 11% | 11% |
Before Debates | 76% | 78% |
Don't Know | 4% | 2% |
As you can see, the survey by Pew of 1,206 voters matches up damn well with the exit poll data collected from a much larger sample of voters as they left the ballot box last week.
What can we take away from this, other than Pew knows how to run a good survey? Mainly that not many people made up their mind late in the game and those folks tended to split evenly between Obama and Romney -- thus negating the whole Hurricane Sandy hypothesis. I will point out that among those who decided on Election Day, they cut 51-44 to Obama. But those were only 3 percent of the electorate.
What People Know about ... Antibiotics
Yes, I often write here about political knowledge or what people know about public affairs, or sometimes the role of the media in same. But sometimes I like to wander elsewhere.
Today, it's what people know about ... antibiotics.
My starting point is this story and the existence of an Antibiotics IQ test. Yeah, I'm surprised too. By the way, I nailed an 80 on the test. I'm a solid "B" student, at least when it comes to antibiotics.
Thursday, November 15, 2012
Too Much Porn?
Can there ever be too much porn?
Well, yeah -- when it comes to journalists tacking porn onto any number of other words. Here are some I came up with searching news sites:
- geology porn -- By Scientific American, of all people
- ski porn -- Used by Westword
- involuntary porn -- really want to spend more time on this one
- food porn -- used everywhere. For more, see Food Network
- democracy porn -- I honestly don't get this one
- election porn -- I honestly do get this one after 2012
- storm porn -- ya know, Weather Channel style
Wednesday, November 14, 2012
Gender and UGA Faculty
I posted yesterday data on the racial/ethnic breakdowns of UGA faculty by college. Someone said they'd like to see the gender breakdowns as well. And here they are. Keep in mind these are fulltime faculty only, as of Spring 2012, and are based on UGA data available if you know how and where to root around for it. Any mistakes are my own. The table shows, for all of UGA ("All") and for each college, the percentage of each gender.
 | Male (%) | Female (%) |
---|---|---|
All | 65.7 | 34.3 |
Agri and Env | 84.6 | 15.4 |
Arts and Sci | 66.8 | 33.2 |
Business | 74.5 | 25.5 |
Ecology | 63.2 | 36.8 |
Education | 45.4 | 54.6 |
Env and Design | 71.9 | 28.1 |
Fam Cons Sci | 38.3 | 61.7 |
Forestry | 87.0 | 13.0 |
Journalism | 61.0 | 39.0 |
Law | 54.7 | 45.3 |
Pharmacy | 73.5 | 26.5 |
Public Health | 48.8 | 51.2 |
Pub Intl Affairs | 83.3 | 16.7 |
Social Work | 34.8 | 65.2 |
Vet Med | 58.1 | 41.9 |
So what can we tell from the data above? First, I need a different hobby. Second, there are few surprises on which are the "manly" colleges (Forestry, then Agricultural and Environmental Science). The largest proportion of women is found in Social Work, which by the way is also where we find, by far, the largest proportion of African American faculty. Not a surprise, mind you, but important nonetheless.
About two-thirds of all UGA faculty are male, and a number of colleges are seriously out of whack when it comes to the proportion of males to females. I mentioned two above, but also raising questions are Public and International Affairs, Business, and Pharmacy. Given their lousy statistics on race/ethnicity as well, you'd want to take a closer look. In fairness, you can only hire faculty available to you in searches, and who want to come to UGA. In other words, you can only swim in the pool presented to you.
That said, according to UGA data on fulltime faculty, Business is 2 percent black and 25.5 percent female -- far below the University average. Forestry? Zero percent black and 13 percent female. Pharmacy? Zero percent black and 26.5 percent female.
Yeah, Houston. We've got a problem.
I have no doubt this has come up in each college or school. I know my own college got in trouble with accreditors many years ago on this very issue -- both on faculty and student representation and our efforts to recruit both. I assume other schools face similar problems. Or maybe they don't. Dunno.
Race and Faculty at UGA
What's the whitest college at UGA, at least in terms of its faculty?
Ecology.
Based on Spring 2012 data, the latest available, I looked at the various colleges and programs that make up the University of Georgia. If you know where to dig there are data on racial and ethnic faculty breakdowns. Note: the data below reflect fulltime faculty only.
Ecology was listed as having 94.7 percent of its faculty as white. The University average is 80 percent. Close behind in whiteness were Public and International Affairs, Forestry, and Law. The least white? Assuming that even matters, Social Work comes in at 60.9 percent.
Okay, but how about looking at it from the perspective of the percentage of African American faculty? Viewed that way, Social Work has the highest proportion, at 30.4 percent, with Education a distant second. There are relatively few Hispanic faculty, but the greatest percentage is found in Environment and Design (9.4 percent). The greatest in terms of Asian faculty, percentage wise, is Pharmacy with 17.6 percent, followed closely by Public Health, Family and Consumer Science, and Business.
I created a rough table below. The numbers across rows (colleges, etc.) will not add to 100 because I dropped a couple of categories where few faculty are represented (American Indian, multi-racial, and no reported race).
 | Asian (%) | Black (%) | Hispanic (%) | White (%) |
---|---|---|---|---|
All | 9.0 | 5.8 | 3.4 | 80.0 |
Agri and Env | 8.6 | 4.1 | 3.6 | 82.8 |
Arts and Sci | 8.8 | 5.8 | 3.6 | 79.7 |
Business | 14.3 | 2.0 | 2.0 | 78.6 |
Ecology | 0.0 | 5.3 | 0.0 | 94.7 |
Education | 5.9 | 12.4 | 2.7 | 77.3 |
Env and Design | 6.3 | 0.0 | 9.4 | 81.3 |
Fam Cons Sci | 15.0 | 8.3 | 5.0 | 71.7 |
Forestry | 6.5 | 0.0 | 2.2 | 87.0 |
Journalism | 9.8 | 2.4 | 2.4 | 80.5 |
Law | 3.8 | 7.5 | 1.9 | 84.9 |
Pharmacy | 17.6 | 0.0 | 2.9 | 79.4 |
Pub Intl Affairs | 8.3 | 2.1 | 0.0 | 87.5 |
Social Work | 4.3 | 30.4 | 4.3 | 60.9 |
Vet Med | 8.1 | 4.8 | 6.5 | 80.6 |
Now it's possible I mis-entered a number, but I don't think so, and I think it's important to examine the "0.0" situations above. My own college doesn't do all that well, so I can't easily criticize others. Being the radical moderate, I look with suspicion at any school or college that is far out of whack -- from the University number -- in any racial or ethnic category.
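If you'd rather automate the eyeballing than squint at the table, here's a minimal sketch using a few rows copied from above; the 10-point cutoff is my own arbitrary choice, nothing official.

```python
# Flag college/category combinations that sit far from the university-wide share.
# Percentages are copied from the table above; the threshold is arbitrary.
university = {"Asian": 9.0, "Black": 5.8, "Hispanic": 3.4, "White": 80.0}
colleges = {
    "Ecology":     {"Asian": 0.0,  "Black": 5.3,  "Hispanic": 0.0, "White": 94.7},
    "Pharmacy":    {"Asian": 17.6, "Black": 0.0,  "Hispanic": 2.9, "White": 79.4},
    "Social Work": {"Asian": 4.3,  "Black": 30.4, "Hispanic": 4.3, "White": 60.9},
}

THRESHOLD = 10.0  # percentage points
for name, shares in colleges.items():
    for group, pct in shares.items():
        gap = pct - university[group]
        if abs(gap) >= THRESHOLD:
            print(f"{name}: {group} is {gap:+.1f} points off the university share")
```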
I could break it down even more -- by tenure, for example -- but that's more than I can manage on Blogger, which makes even building the table above no fun, since you have to go into raw HTML to create it. Plus I do have grading to finish.
Anyway, this is my data crunching public service announcement for the day.
Tuesday, November 13, 2012
New Rasmussen Poll is Out. Who Cares?
A new Rasmussen poll is out that says Americans want the feds to extend the variety of tax cuts.
It's a Rasmussen poll. Should we care?
Nate Silver broke down how well the various polling firms did in the 2012 election. Among the firms that did five or more polls, Rasmussen was fourth from the bottom. Or to put it another way, out of 23 firms, Rasmussen finished 20th.
As Silver wrote:
Several polling firms got notably poor results, on the other hand. For the second consecutive election — the same was true in 2010 — Rasmussen Reports polls had a statistical bias toward Republicans, overestimating Mr. Romney’s performance by about four percentage points, on average.

So this is the poll we're supposed to breathlessly quote and consider on an issue of importance? Or as Silver noted:
Rasmussen Reports uses an online panel along with the automated calls that it places. The firm’s poor results this year suggest that the technique will need to be refined. At least they have some game plan to deal with the new realities of polling. In contrast, polls that place random calls to landlines only, or that rely upon likely voter models that were developed decades ago, may be behind the times.

Polling is getting very interesting, given the difficulties of landlines versus mobile phones, given the awful nature of robo-call polls (which, by federal law, cannot call mobile phones), and given the disaster certain polling firms managed in the latest election -- with last place held by that most prestigious of polling names, Gallup.
Monday, November 12, 2012
AAPOR on the Election Polls
For what it's worth, AAPOR's statement about the 2012 election polls:
The following press release is being issued today. It was crafted by AAPOR’s current three presidents and our 2012 Election Rapid Response Team (Diane Colasanto, Mike Traugott, Rob Daves, Cliff Zukin, and Quin Monson). Considerable thanks goes to the Rapid Response Team for all they have done to help Council since last spring regarding 2012 election-related matters. -- Paul J. Lavrakas, AAPOR President
AAPOR's Statement on 2012 Presidential Election Polling
During the past two months, journalists, partisans on many sides, and the public at large have focused a great deal of attention on the accuracy of the presidential pre-election polls. At times considerable criticism was directed toward pollsters and their polling methods.

However, as was seen last Wednesday morning, the vast majority of the major pollsters were highly accurate in their final estimates for the presidential election, both at the national and state levels. The American Association for Public Opinion Research (AAPOR) would like to take this occasion to compliment pollsters who used established, objective scientific methods to conduct their polls, rather than subjective judgments about the electorate to make their forecasts.

“AAPOR is very pleased that the survey research profession has worked to respond to the increasing challenges facing public opinion polling by drawing on the best available scientific evidence, whether it is from scholars, government researchers, or political polling practitioners themselves,” said Paul J. Lavrakas, Ph.D., AAPOR’s current president.

Despite myriad challenges including the growing cell phone population, increasingly high levels of non-response, and even the effects of unanticipated events such as Hurricane Sandy, the final estimates of the 2012 election outcomes demonstrated that when pollsters remain committed to objective scientific methods, their pre-election polls are very likely to be an accurate forecast of the voting public’s behavior.

“As importantly, to the extent that polls also are accurate in characterizing the attitudes, beliefs, and motivations of the electorate, we believe that pollsters, and the news media that use their poll findings, provide a great service to democracy by placing the opinions and preferences of the public in the forefront of the electoral process,” observed Lavrakas.
Saturday, November 10, 2012
Which Polls Were the Best?
Nate Silver has a nice analysis of which polls did well and which polls did not so well in predicting the 2012 presidential election. Well worth the read. You'll see a more comprehensive article in Public Opinion Quarterly some time in the future, if they follow their usual practice of publishing a post-election synopsis of poll accuracy.
Read his article. Basically, live polls far outperform robo-dial polls. No surprise there despite the pathetic efforts of some to defend such polling. Of the bottom five polls in accuracy, three were robo-bullshit-polls (full list, any polls conducted). The problem with the robo polls, other than being annoying as hell, is they cannot legally call cell (mobile) phones, which gives them among other things an age bias. That makes them only slightly more accurate in predicting an election outcome than cutting open an animal and studying its innards.
Of the "big boy" polling firms, the ones who performed poorly (i.e., sucked) were Gallup, InsiderAdvantage, Mason-Dixon, American Research Group, and Rasmussen (this of shops that conduced at least five polls).
Google's poll did well, leading Silver to suggest:
Perhaps it won’t be long before Google, not Gallup, is the most trusted name in polling.
Likely Studies of the 2012 Election
Every major event, and especially every presidential election, produces scores of academic studies. 2012 will be no different. So what studies are we most likely to see eventually appear in the major academic journals? Probably many of the same questions found in news accounts, but lemme take a stab at some likely themes.
- White Men Can't Vote. Put in this category the studies that attempt to explain the Obama coalition and the role of the white versus non-white vote. You'll find these mostly in political science journals, perhaps in sociology, trying to explain the dwindling role of white voters.
- Obama is Still Muslim, etc. Here you'll find studies of why people did not vote for Obama, based mainly on racism as a factor but also belief in the various myths surrounding him (Muslim, born outside the U.S., is a space alien, and so on). These studies will be found everywhere, including masscomm (note, this is an area of interest to me as well).
- Where'd The Christians Go? By this, I obviously mean the conservative or evangelical vote and the small role it seemed to play in this presidential election compared to others. You'll find these studies everywhere from political science to religion to masscomm, such as Journal of Media and Religion.
- The Roles of Polls. In this, I expect to see a study or maybe several studies on the roles the polls play, from individual surveys to Nate Silver and the gang of geek/nerds who so correctly called the 2012 outcome. Expect to see a lot of this stuff in Public Opinion Quarterly.
- Sandy. Yes, there will be studies that attempt to explain much more fully than you see in the press about the role of Hurricane Sandy and the like. Short answer: very little, but I'm sure we'll see analyses of it in the political science journals.
- Twitter and Social Media. Yes, there will be a number of analyses of Twitter, from its role in the debates to whether it is a useful predictor of the election outcome. A lot of this is basic content analysis, called by my major professor the "great intellectual cul-de-sac." But analyzing big data is the future. You'll find this mostly in the more geeky journals out of computer science.
Got any others? Feel free to add some in the comments section below. For the uninitiated out there, understand it can take as long as two years before an academic study sees publication.
Friday, November 9, 2012
Charles Darwin, Write-In Candidate
Charles Darwin, at least where I live, is a popular guy.
There's been a lot of attention about the write-in campaign using his name thanks to my U.S. Rep's rather shaky understanding of science (local stories here and here, but also found nationally).
Being the nerdgeek I am, I dumped the file of write-in ballots into a spreadsheet and tried to get an exact count of the variations of Charles Darwin. Best I can tell, there were 3,908 write-in ballots for the mostly dead naturalist -- who if he had been elected, would have been 203 years old, roughly the same age as U.S. Sen. Robert Byrd when he died.
I may have missed some. I show 6,907 different kinds of write-ins for the U.S. House race (I excluded write-ins for the other races). Some of the spellings of Darwin are creative, so I may have missed a few, but after some effort to transfer the PDF into a spreadsheet, this is what I found (a rough counting sketch follows the list).
- Search for "darw" and I get 3,908 (not case sensitive, by the way).
- Search for "darwin" and I get 3,876.
- Search for "charles darwin" and 3,320 went with the full name.
- "darwinn" had two hits.
I should point out there were a lot of write-ins for "anyone else" or "any one else" or "any one but him" and so on and so on. As many as for Darwin? I don't think so. Any use of the word "anyone," for example, gives only 248 hits, and any use of "anybody" came up with only 35. Other popular write-ins? Best I can tell:
- Big Bird got 23
- Bill Nye got 13
- Pete McCommons, local publisher, in one form or another (McCommunist) nailed a whopping 170 votes. Actually it's more than that, given all the various spellings or just "Pete Mc" and such.
- And finally, Darth Vader got two votes for U.S. House -- one of them from my son.
And yes, I voted for Chuck Darwin as well.
Update (3:47 p.m. Friday)
Honey Boo Boo got 1 vote. Just thought I'd throw that in.
It's widely reported already how many votes Jarvis Jones received (not nearly enough, but over 10). John Doe did well with eight. Jesus (4 votes) outscored Mohammed (1 vote). Satan got 2 votes.
And finally, Jim Thompson got one vote. Probably from himself. Just saying...
Thursday, November 8, 2012
Gays Elected Obama?
Gay people elected Obama. How's that for a provocative lede? You doubt me? Check out these fun stats, based on exit poll data. Respondents were asked: Are you gay, lesbian or bisexual?
Yes 5%
No 95%
No surprise above in how many gay or non-gay voters were found in the electorate this week. But check out the numbers below.
 | Yes, Gay | No, Not |
---|---|---|
Voted Obama | 76% | 49% |
Voted Romney | 22% | 49% |
In other words, among voters who said they weren't gay, it's a tie. Among voters who said they were, Obama overwhelmed Romney.
Oh my, if Rush Limbaugh sees this, he'll have a fit. Forget I mentioned it.
And I should point out that it's likely many of these votes were in states that were already solid Obama in the first place.
Wednesday, November 7, 2012
Obama and the Storm
It's a popular narrative among Republican types to blame Hurricane Sandy (and N.J. Gov. Chris Christie) for Obama's win this week.
Is it true?
First off, most models had Romney's momentum ending about Oct. 15, long before the storm hit. Given the stunning accuracy of these models by Nate Silver, Sam Wang, and the rest of the nerdgeek squad, it's hard to argue against them.
And yet, and yet.
In exit polls of actual living breathing voters, 3 percent said they decided on Election Day who to vote for. Another 6 percent said they decided "in the last few days." This has the potential of supporting the Sandy Hypothesis. After all, these late deciders broke for Obama, about 50-45 percent. Not huge, but perhaps meaningful. Voters were also asked how they would rate Obama's response to the storm in the role it played in their own vote. Below, a breakdown:
Most Important Factor 15%
An Important Factor 27%
A Minor Factor 22%
Not a Factor 31%
These numbers suggest even more support for the Sandy Hypothesis. After all, 42 percent gave Obama's response some kind of importance in their vote. Among those who called it important, they cut for Obama over Romney by an almost 2-to-1 margin.
Yes, but here's what we don't know -- are the "important factor" folks also the late deciders? And were they in key swing states? After all, if a bunch of Illinois or California folks are in there, it really doesn't matter. Those states were decided long ago. I won't know more until I get my grubby little number-crunching hands on the raw data.
My conclusion (so far): The Sandy Hypothesis remains untested. The compelling forecast models (who were, after all, mostly right) argue the storm made no real difference despite what the conservative pundits (who were, after all, mostly wrong) say. When I have the raw data I can crosstab it to death and examine whether late deciders were in key states and also voters who called the storm important in their decision.
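To be concrete, here's the sort of crosstab I have in mind, sketched with pandas; the file name, column names, and swing-state list are all hypothetical stand-ins for whatever the raw exit poll file actually uses.

```python
# A sketch only: the file name, column names, and swing-state list are assumptions.
import pandas as pd

df = pd.read_csv("exit_poll_2012.csv")  # hypothetical respondent-level file

late = df["decided_when"].isin(["election day", "last few days"])
swing = df["state"].isin(["OH", "FL", "VA", "CO", "IA", "NH", "WI", "NV"])

# How did the storm-response rating relate to the vote among late deciders in swing states?
print(pd.crosstab(df.loc[late & swing, "sandy_importance"],
                  df.loc[late & swing, "vote"], normalize="index"))
```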
My hunch? Some did base their decision on the storm. My other hunch? Not enough did to explain the results seen Tuesday. Still, a hypothesis deserves a fair test with some pretense of methodological rigor beyond the "feel it in their gut" bullshit you get from many pundits.
And ultimately, what's wrong with basing your vote on how a president handles a recent crisis? Nothing at all, best I can tell. If Obama had blown his handling of Hurricane Sandy, you can be damn sure certain pundits of an ideological persuasion would've argued in favor of it playing a role in the vote.
That's the problem with partisan pundits. They're partisan.
Obama and the Catholics
In what will probably be a series of posts breaking down the exit poll data from the presidential election, I start today with religion. Specifically, with Catholics. Why? Mainly because I'm Catholic and this is my blog, but also because Catholics are viewed as a vital swing vote and they made up, in 2012, 25 percent of the electorate. The numbers are kinda interesting.
- Obama won the Catholic vote overall, 50 percent to 48 percent. That's a bit surprising given he was supposedly at war with the church. Bishops, take note.
- Okay, but let's dig deeper. Among white Catholics, Romney won 59-40 percent. That's about the same racial breakdown for all voters, so not much to see here folks. Let's move on.
- Among Catholics who attend church regularly, Romney won 57-42 percent. Republicans usually win this category.
- Among Catholics who do not attend regularly, Obama won 56-42 percent. These are mirror images of one another. Fascinating stuff. By the way, in the electorate there are more non-regular Catholics than regularly attending Catholics by a couple of percentage points. In other words, Obama wins.
Stay tuned for more data crunching.
Nate Silver Was Right (and it worries me)
I'm a number cruncher. I'm a fan of Nate Silver and Sam Wang and all the other nerdgeeks who correctly called the presidential election. I applaud their systematic approach to data, their methodological rigor, and the opportunity they've had to put in their place a bunch of pundits who basically make shit up and call it analysis.
That said, this worries me.
This worries me because of the reductive nature of these aggregators. This worries me because public opinion is not best defined as what public opinion polls measure. This worries me because, as the Obama camp demonstrated, politics has truly drilled down to micro marketing.
I'll have lots of time to elaborate on these concerns, once I (yes, you guessed it) crunch some numbers.
Let me throw one out there in a hurry, though, and that's the nature of public opinion. I've written about this in the past (here and here, about Twitter, for example) and how the circular definition above fails to capture what is truly meant by the fluid nature of public opinion. This gets PhDweeby. I'll write more later, but before I sign off for lunch lemme say my main fear isn't about the accuracy of aggregators. They demonstrate the predictive power of big data. My fear is more a case of how the aggregators influence how we perceive public opinion.
Our opinions are more than numbers. And this comes from a guy who basically takes opinions and translates them into numbers for analysis.
I leave you with my favorite definition of public opinion:
Public Opinion is no more than this,
what people think other people think
That's another post for another day, but the words were uttered by none other than Prince Lucifer in his self-titled play. They say a lot about what public opinion really is -- not just a snapshot of an attitude crystallized into an opinion by a standardized questionnaire, but something more fascinating and fluid and in many ways, all about communication.
As I said, more later.
Tuesday, November 6, 2012
Aggregating the Aggregators
As we all know, Nate Silver at 538, Sam Wang at Princeton, and Drew Linzer at Votamatic have made quite the names for themselves this election cycle with their number-crunching predictions. All three call today's election for Obama with the following electoral votes (at the moment I write this):
Wang: 303
Silver: 313
Linzer: 332
So it's only fair, since they all suck data from polls paid for and conducted by others, that I suck data off their work and aggregate their aggregation, thus inserting even more random error into a messy political process rife with false precision. If we do some simple calculations on the back of a stained cocktail napkin, Obama wins with 316 electoral votes.
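The cocktail-napkin math, spelled out -- it's nothing fancier than an average of the three calls:

```python
# Average the three aggregators' electoral vote calls for Obama.
calls = {"Wang": 303, "Silver": 313, "Linzer": 332}
print(round(sum(calls.values()) / len(calls)))  # 316
```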
Where do they disagree? Florida, mostly. Silver has it leaning Obama, Wang (map to left, weighted by each state's respective electoral vote size) has it leaning Romney, and Linzer has it leaning, er, nowhere. It's his only swing state bathed in white. All give Obama such key states as Ohio and Pennsylvania.
Obviously we won't know if the nerds were right until late tonight or perhaps Wednesday.
There's an Election Today
I'm sure you've noticed there's an election today. I voted early. Strolled downtown on a warm, sunny morning, voted in five minutes, then had lunch somewhere other than the usual places near my building.
Today it's chilly, rainy. Grey. (notice I used the "e" and not the "a" because I think it looks cooler)
But I'm wearing my "I'm a Georgia Voter" sticker today so people think I suffered the long lines and the weather and the grey (with an "e") skies.
So, what's my prediction? Hell, even college kids are writing prediction columns in the Red & Black and I can tell from reading them they don't really know what they're talking about. Why should I provide more hot (though more informed) air? If you want predictions, my favorites are the nerds and geeks: 538 or Princeton Election Consortium. Nate Silver and Sam Wang are terrific number crunchers using somewhat different methods and models to come to the same conclusion -- Obama wins, probably with as many as 310 electoral votes. There are a couple of other really good ones out there, but you can find them on your own.
Monday, November 5, 2012
IQ and Political Knowledge
If we're getting smarter, at least as measured by IQ scores, then why aren't we getting smarter, at least as measured by political knowledge scores?
The Flynn effect is the steady improvement in our IQ scores over time. Why? Lotsa theories out there and you can read the article yourself and choose your favorite. But while this suggests we're getting smarter, at least at fooling IQ testers, a host of studies find that the public's political knowledge has remained relatively steady since the 1950s.
How can this be? Well, in part we're talking about two very different kinds of tests. Plus IQ is non-motivational. Huh? Simply put, people are either motivated or not motivated to keep up with public affairs, and that can play a huge role in how well they answer the traditional survey items used to measure political knowledge.
So, we're getting smarter, but not more knowledgeable. Take comfort in that as best you can.
Oh, and for fun, you can check out IQ scores by state. Yes, Mississippi is last.
Friday, November 2, 2012
Welcome to My (research) World
In breathless prose, an article in today's New York Times points to research about trying to predict elections not by asking whom people favor but instead by asking them who is going to win (pdf of study here).
Oh jeez.
I'm happy to see someone doing this. Why? Because, dammit, I've written on this topic quite a few times. Most recently here. Also here, and here, and here, and here, and, oh hell, you get the idea. And those are just my blog posts. I'm also in a newspaper column on this topic and, dammit, I've published research on the topic -- including a big study this summer presented at a conference that looked at elections since 1952 and how well people predicted the winner. Punch line: people suck in close elections, are better when there's a bigger margin.
Oh jeez. And does the paper cite me in any way? Sigh. Of course not. Bitter much? Ah well, what would ya expect from Microsoft and economics guys? Basically sophisticated number crunching, and not a bit of theory.
Who Wins? A Coffee-Based Analysis
Who’s gonna win the election?
Sure, you can turn to the pundits and politicians, or you can turn to the mathematicians and statisticians. Or like me you can seek out absolutely meaningless correlations to fill your time.
Yep, I’m going with the last one.
For those of you who bemoan the effete, cappuccino-sipping liberal types, here's an explanation you're going to love. I grabbed data for how many Starbucks are located in each state and compared it to the state-by-state electoral predictions found on Nate Silver's excellent 538 blog.
Below are the top 10 Starbucks states, per 10,000 population, followed by the candidate predicted by Silver to win that state. Sorry about the lousy formatting.
D.C. 1.181 Obama
Washington 0.889 Obama
Nevada 0.799 Obama
Colorado 0.690 Obama
Oregon 0.667 Obama
California 0.556 Obama
Hawaii 0.463 Obama
Arizona 0.418 Romney
Alaska 0.337 Romney
Illinois 0.323 Obama
As you can see, Obama’s coffee cup runneth over. The incumbent is predicted to win eight of the 10 top Starbucks states. For Romney, you’ve got to drain your cup down to #8 (Arizona) before you find one of his states. A decaffeinated campaign, perhaps? Well yeah. He is, after all, Mormon.
Want more? Of the bottom 10 Starbucks states, seven are predicted for Romney. Wow.
Okay, let’s get a few issues out of the way. D.C. isn’t a state, plus I’m relying heavily on Silver’s numbers. Then again, he called 49 of 50 states correctly in 2008, and his numbers more or less flow nicely with another great predictive site.
More mind-numbing numbers? The Obama states average .34 stores per 10,000 people, while the Romney states manage a meager .17 per 10,000. The national average is .26 per 10,000 (.30 if you weight the data, but let’s skip the math).
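For the terminally curious, the per-10,000 arithmetic looks like this; the store counts and populations below are placeholders, not the figures I actually used:

```python
# Stores per 10,000 residents, then group averages by predicted winner.
# Counts and populations here are placeholders for illustration only.
starbucks = {"WA": 610, "GA": 250, "CO": 350, "AL": 60}
population = {"WA": 6_897_000, "GA": 9_815_000, "CO": 5_187_000, "AL": 4_822_000}
obama_states = {"WA", "CO"}  # per Silver's state-by-state calls

per_10k = {s: 10_000 * starbucks[s] / population[s] for s in starbucks}

obama_avg = sum(per_10k[s] for s in per_10k if s in obama_states) / len(obama_states)
romney_avg = sum(per_10k[s] for s in per_10k if s not in obama_states) / (len(per_10k) - len(obama_states))
print(f"Obama states: {obama_avg:.2f}   Romney states: {romney_avg:.2f} (per 10,000)")
```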
What’s this all mean? Nothing much, but for my next trick I’m going to add Wal-Marts to the data and do some correlations and multiple regressions and really dazzle you with complete bullshit. Stay tuned, because on top of everything else there’s gonna be maps better than the one below (which is a bit flakey sometimes, sorry). Oh, click below on a state in the map to see its Starbucks rank and predicted electoral outcome. Darker colors mean more Starbucks per 10,000 people. You can grab and move it around some, too. Have fun. Suggestions welcome, the sillier the better.
Thursday, November 1, 2012
What Teens Know ... About the 2012 Election
Yes, I know, the headline above reads like the setup to a punch line. What do teens know? Not a helluva lot. Still, some enterprising journalists set out to discover what teens know about the 2012 election.
The most likely response?
“I support whatever my parents do because they understand more about politics than I do.”
Their "survey" was of students enrolled in four advanced English high school classes in the Council Rock School District of Pennsylvania (sad note, school is out due to damage from Sandy). While it's hardly a generalizable sample, they asked students 13 questions that ran the "gamut of knowledge" from the basic to the advanced.
Students did well on throwaway questions on identifying the president, the vice president, the Republican challenger, and (less well) his VP running mate. Only half knew which party controlled the House, which is about right -- even if you guess, there are only two parties, so half is unsurprising.
My favorite -- they presented students with a picture of John Boehner, the Speaker of the House. Only 14 percent gave his name, party and title. Only 15 percent could name one of their state's two U.S. senators.
Okay, are we picking on the brats? A little. To be honest, they didn't do so poorly compared to the levels of knowledge demonstrated by the general U.S. public, especially given that most if not all of the students aren't even voters yet and so have little motivation to follow politics at a high level.
In other words, the kids did okay.