Wednesday, October 30, 2013
For me, the hardest part of a piece of scholarly writing is the conclusion or discussion section, the "so what?" that comes at the end. For the uninitiated among you, most papers in my field look like this:
- Title (must include a colon, otherwise it's not scholarly)
- Abstract (say what you did in 75 words)
- Lit Review (all the previous studies on the topic, plus theory; may include hypotheses or research questions)
- Method (how you did your study, how you measured stuff)
- Results (what you found, statistical tests, etc.)
- Discussion (sigh, yeah, that "so what?" part)
I tend to write the Results first because, for me, it's more fun, and I am a God of SPSS -- a skill mastered in grad school on a mainframe with a 300-baud modem and now perfected on a top-end system with multiple screens. While doing the Results I'll mess with the Literature Review, but in my head I have the whole paper from start to finish; I just tend to write it in a hodge-podge fashion.
Except for that Discussion thing. That I put off till the end.
Why? In part because I know what the study is about, but I'm so close to it that I have a hard time backing off and writing the big picture, the "so what?" of the study, especially without repeating myself. We all know the routine: say what you're gonna say, say it, and then say what you said. That leads to academic suckiness.
Is there a trick to not sucking in that last section? Maybe I should write it first? Maybe I should finish the study, let it sit for a week, then go back and read it, write the grand "what it all means." Also in that section you need to list the weaknesses or limitations of your research and, let's face it, no one enjoys that either. Limitations? Me?
Funny thing is, this paper is full of big concept ramifications. It's all about democracy, about elections, about winners and the consent of the losers. And Fox News. And Karl Rove. And tables full of multiple regression analyses. And more Fox News. So it should write itself, except that never in my life has something ever written itself.
An editor told me long ago -- if writing comes easy, you're not doing it right.
Tuesday, October 29, 2013
Is Neutral Knowledge Dead?
It's become one of those battles:
- Mac vs PC
- Dogs vs. Cats
- Wikipedia vs Encyclopedia Britannica
Content analyses of the length, tonality, and topics of 3,985 sentences showed that Wikipedia entries were significantly longer, were more positively and negatively framed, and focused more on corporate social responsibilities and legal and ethical issues than the online entries of the traditional encyclopedia, which were predominantly neutral.
Let that sink in for a moment.
Not the entry length, but the affective orientation of the Wikipedia entries versus the traditional encyclopedia: Wiki entries were more positive or negative, Britannica entries more neutral.
Does anyone see a parallel here with what we see in the news, in how cable networks with a partisan bent (Fox, MSNBC) are more successful than those that generally try to keep to the middle (CNN)? For news nerds, doesn't this parallel the recent, raging debate between, say, Bill Keller and Glenn Greenwald over the future of journalism?
Doesn't this parallel our more partisan times? Is the Internet just plain evil? Or, perhaps, the Internet is just plain anti-neutral?
To me, this is more than just a silly play with words. What people know, their knowledge, is mixing with opinion to a degree not seen before. Some argue this is a good thing. Others disagree. My point is our very idea of knowledge may be shifting from fact-based to opinion-based, that we can't even complete an encyclopedia entry without lacing it with positive and negative info.
The authors posit social media are creating a fundamental shift in how we frame knowledge, and that strikes me as true, inevitable, and not necessarily a good thing. Certainly this fits what we know about how people really process information, such as the theory of motivated reasoning, in which people typically believe what they want to believe, the evidence be damned.
(Oh, ironically, that link above is to a Wikipedia entry.)
Some of the info above won't surprise those of you who have followed Wikipedia entries and the battles that sometimes happen as people edit, edit again, and edit yet again in squabbles over which facts get included. As an aside, it's fun to look at, say, UGA's entry and look at who edits the pages and, if you're clever enough, track back the IP numbers to administrative folks. That's the nerd journalist in me, but it's a good example of how "facts" come and go in such entries.
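If you're curious how anyone codes the "tonality" of 3,985 sentences, here's a toy Python sketch of the general idea. To be clear, this is not the authors' method -- content analyses like theirs typically rely on trained human coders -- and the word lists are invented for illustration:

import re

# Toy sentence-level tonality coder. NOT the study's actual method; the
# word lists below are invented for this example.
POSITIVE = {"praised", "responsible", "success", "good"}
NEGATIVE = {"criticized", "scandal", "failure", "bad"}

def tone(sentence):
    words = set(re.findall(r"[a-z]+", sentence.lower()))
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(tone("The company was criticized after the scandal."))  # negative
print(tone("The entry runs twelve paragraphs."))               # neutral

The serious version involves a coding protocol and intercoder reliability checks, but the logic -- classify each sentence, then compare the distributions across Wikipedia and Britannica -- is the same.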
Tuesday, October 22, 2013
Race x Gender among Georgia Voters
Playing with data. I was curious about registered voters. Lemme toss a couple of factoids at you.
Clarke County (Athens, where I live, location of UGA) is 27 percent black. Looking at different data, I see that among all those registered to vote here, 15.8 percent are black females and 10.2 percent are black males. So registration comes pretty close to tracking the population, with black women holding a 3:2 registration advantage over black men while being only slightly more populous. Keep that 3:2 in mind, because the ratio among whites in the same county is different.
So, rounding we get in Clarke:
- Black Female 16% / Black Male 10%
- White Female 31% / White Male 28%
- Black Female 2% / Black Male 2%
- White Female 46% / White Male 41%
- Black Female 18% / Black Male 12%
- White Female 31% / White Male 28%
This is what I do instead of writing. Sigh ... back to work.
Monday, October 21, 2013
Accepted for Publication
Hollander, Barry A. (forthcoming). The role of media use in the recall versus recognition of political knowledge. Journal of Broadcasting & Electronic Media.
The only thing missing in the title above is a colon. Without one, you kinda doubt it's actual scholarly research.
What's it about (you ask breathlessly)?
Glad you asked.
I used a question-wording experiment in a national survey to see whether recognition-type political knowledge questions (multiple choice) get different results than recall-type questions (open-ended). First, of course they do. Everyone knows multiple guess is easier than short answer questions. I hypothesize something more -- that people who rely on TV news will do better on recognition questions but people who rely on print will do better on recall.
It worked. Mostly. TV news exposure predicts recognition (but not recall) knowledge. Print newspapers and radio do nothing. But using the Internet for news predicts recall (but not recognition) knowledge. And yes, this was a multivariate analysis, meaning I controlled statistically for all the usual suspects (age, income, education, political interest, etc.).
And to top it all off, among the less educated, TV plays a bigger role in recognition knowledge than it does for those with greater education.
There's a bunch of theory stuff in the paper, but basically TV news is all about recognition -- all about being an information leveler for the less educated (or less interested). Recall requires deeper attention. What's interesting is that exposure to paper newspapers does nothing; it seems to have been supplanted by Internet news as a predictor of knowledge. That alone is kinda fascinating, in a PhDweeby kind of way.
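For the stats-curious, here's a minimal sketch of what "controlled statistically for all the usual suspects" looks like in practice. The real analysis was run in SPSS on national survey data; the Python below uses synthetic stand-in data and hypothetical variable names, purely to show the parallel-models idea:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data -- the real analysis used a national survey (and SPSS).
rng = np.random.default_rng(0)
n = 1000
survey = pd.DataFrame({
    "tv_news": rng.integers(0, 8, n),        # days per week, 0-7
    "internet_news": rng.integers(0, 8, n),
    "education": rng.integers(1, 6, n),
    "age": rng.integers(18, 90, n),
})
# Toy outcomes built to echo the pattern reported above.
survey["recognition"] = 0.3 * survey["tv_news"] + rng.normal(size=n)
survey["recall"] = 0.3 * survey["internet_news"] + rng.normal(size=n)

# One model per knowledge measure, same controls in both.
for dv in ("recognition", "recall"):
    model = smf.ols(f"{dv} ~ tv_news + internet_news + education + age",
                    data=survey).fit()
    print(dv, model.params.round(2).to_dict())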
This will all appear in a journal near you, sometime in 2014.
Sunday, October 20, 2013
Random Georgia Stuff
I saw a bunch of motorcyclists the other day (gaggle? gang? what's the collective noun?) and it got me wondering about where in Georgia they tend to live.
The answer? Using 2012 data, I looked at the percentage of motorcycles among all registered vehicles in each Georgia county. For example, the top county is Camden, where 4.1 percent of all registered vehicles are cycles. Three of the top five are, no surprise, in the north Georgia mountains, where it's fun to ride. Of the other two in the top five, one is at the coast, the other near Fort Benning. Again, no surprise.
Here's a quick Google map. I removed the legend because it annoyed me, so darker colors mean a higher percentage of motorcycles.
You can see how they kind of lump together in north Georgia, down at the coast, and a scattering of other locations that deserve a closer look, like the Fort Benning effect. There's probably a story in here, somewhere, if you were a freelancer and looked to score a piece in a motorcycle mag. I'd also be curious to see what demographics may set these counties apart from other counties. And, of course, to really do a story you'd need to interview real live human beings. The data only take you so far.
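If you want to replicate the calculation, it's about as simple as data work gets. A minimal sketch with made-up counts (the real figures came from the 2012 registration data):

import pandas as pd

# Made-up counts for illustration; the real numbers came from 2012 Georgia
# vehicle registration data.
df = pd.DataFrame({
    "county": ["Camden", "Clarke", "Union"],
    "motorcycles": [2050, 1100, 960],
    "all_vehicles": [50000, 70000, 24000],
})
df["pct_cycles"] = 100 * df["motorcycles"] / df["all_vehicles"]
print(df.sort_values("pct_cycles", ascending=False).head())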
Friday, October 18, 2013
Public Broadcasting and Current Affairs Knowledge
Saving me loads of time, The Monkey Cage blog has a nice piece today about new research on public broadcasting and what people know.
Thursday, October 17, 2013
Visual Knowledge
How we measure political knowledge can play a huge role in our results. The kinds of questions we ask, the manner in which we ask them. Which political actors or politicians do we ask about, their gender, their obscurity. All of this and more can affect the results -- and our estimation of public knowledge, or public ignorance.
Okay, fine. But is there a visual knowledge?
This paper says yes, and suggests we're missing a lot with our traditional measures of what people know.
The less educated, for example, do better on tests of visual political knowledge. What does visual mean here? We can test this by randomly assigning some folks to a verbal-only question, some to a visual-only one, and some to a mix. So in the verbal-only condition, you'd get a question asking what office John Kerry holds. In the visual-only condition, you'd be asked to identify his office from a photograph. And the third group would get words and a pic. All would have identical multiple choices, four of them, with one having the correct answer of U.S. Secretary of State.
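Here's a toy sketch of that random-assignment setup -- my illustration, not Prior's actual materials:

import random
from collections import Counter

# Toy version of the three-condition design. Each respondent is randomly
# assigned one question format; the four answer options (one correct) are
# identical across conditions.
random.seed(42)
conditions = ["verbal", "visual", "verbal+visual"]
assignment = {r: random.choice(conditions) for r in range(1200)}
print(Counter(assignment.values()))   # roughly 400 per condition

# The analysis then compares the share of correct answers by condition,
# e.g., with a chi-square test on the condition-by-correctness crosstab.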
In summary, Markus Prior notes:
Visual political knowledge is different from verbal political knowledge and represents a previously unmeasured element of political involvement. This study has shown that adding visuals to otherwise identical all verbal knowledge questions significantly increases correct responses. This finding strongly suggests that some people with substantive knowledge of political figures respond incorrectly to knowledge questions about them just because they lack a phonological representation of the person (the politician’s name). Allowed to draw on a visual representation (the politician’s face), they are able to report accurate conceptual knowledge about the politician.
So there are "visual people" out there, especially when it comes to how we measure political knowledge. I'd argue that the more you rely on TV for news, the more "visual" you are and the better you'd do on tests like this.
Monday, October 14, 2013
The Funniest States?
I've been playing with mapping the use of bit.ly, the link-shortening service. You can look at each state and see real-time data on what news sources are most often used. So today let's look for that ultimate of all news sources -- The Onion. The satirical news site shows up on the top 10 lists of some states but not others.
For some states (when I checked, data change constantly), The Onion is the #1 "online only" site. There's New Mexico, Oregon, Montana, Minnesota, Wisconsin, etc. Here in Georgia where I live, it's only #5 (HuffPo is #1). What's fascinating is how, in the conservative South, HuffPo dominates in the use of bit.ly links. In northern states, The Onion is often #1. Weird.
For "newspapers" you tend to see USA Today in the South, and in the north you're more likely to see The New York Times. Interesting. At the moment, The Guardian is #1 in New York (and, at the moment, Texas). The "winners by state" option is also telling.
What can you do with this? Not much in an ever-changing map. But it'd be fun to analyze a database from these links.
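For instance, if you somehow had a dump of expanded bit.ly links tagged by state (the map doesn't hand you such a file; the links below are invented), the tallying would look something like this:

from collections import Counter
from urllib.parse import urlparse

# Invented example links tagged by state; a real database would have millions.
links = [
    ("GA", "http://www.huffingtonpost.com/some-story"),
    ("GA", "http://www.theonion.com/articles/whatever"),
    ("WI", "http://www.theonion.com/articles/another"),
]
by_state = {}
for state, url in links:
    domain = urlparse(url).netloc
    by_state.setdefault(state, Counter())[domain] += 1

for state, counts in sorted(by_state.items()):
    print(state, counts.most_common(3))   # top news domains per state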
Thursday, October 10, 2013
Blogs Are So Done
Yes, this is a blog, and yes (sigh) blogs seem to be toast, at least when it comes to elections. How dare I say this? I'm messing with some 2012 data and came across these two questions.
1. In a typical week, how many days do you use blogs to learn about the election for President?
2. In a typical week, how many days do you use social media such as Twitter or Facebook to learn about the election for President?
To be fair, the second question has the advantage of specificity (Twitter and Facebook) and some halo effect, as people use both sites for stuff other than politics, like sharing those all-important cat videos. Let's set that aside and see how they stack up in a huge national survey. While responses could range from 0 to 7 days a week of use, I use the bottom and top to present a simple yet telling comparison:
Blogs
0 Days - 88.2 percent
7 Days - 1.2 percent
Social Media
0 Days - 64.7 percent
7 Days - 12.6 percent
Not even close. To put this in raw numbers, out of a sample of over 5,000 adults, only 91 freaking people said they read blogs 7 days a week. The results suggest only 1-out-of-10 ever read a blog, and very likely these are members of the chattering class, not real humans.
Again, this is very nearly an apple and orange comparison, especially as a tweet or Facebook news feed may link back to a blog and users may credit the social media more so than the blog itself. And what qualifies as "a blog" is fuzzier today than it's ever been, especially the ones now integrated into mainstream news sites such as, most recently, The Monkey Cage at washingtonpost.com. So while I look at the numbers above with a significant dose of skepticism, they suggest how social media have supplanted blogs when it comes to how people learn about elections.
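For the curious, the bottom-and-top comparison above is trivial once you have the 0-7 responses. A sketch with synthetic numbers built to echo the blog shares:

import pandas as pd

# Synthetic responses built to echo the blog-use shares in the post
# (88.2 percent at 0 days, 1.2 percent at 7 days).
blogs = pd.Series([0] * 882 + [3] * 106 + [7] * 12)
shares = blogs.value_counts(normalize=True).sort_index() * 100
print(shares.round(1))                        # percent at each reported level
print(round((blogs == 7).mean() * 100, 1))    # the 7-days-a-week share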
Is This a Fair Question?
A Twitter post by Greg Bluestein of the AJC caught my eye.
A @PPPPolls survey by @BetterGeorgia finds a tight race between @GovernorDeal and @SenatorCarter: http://t.co/5ntkKCAcNY #gapol
— Greg Bluestein (@bluestein) October 10, 2013
I'm a poll nerd, so I checked it out. Here's the column by the AJC's Jim Galloway that gives the basic lede and some analysis:
Atlanta Mayor Kasim Reed isn't the only one who thinks state Sen. Jason Carter's flirtations with a gubernatorial bid should be taken seriously. Better Georgia, a left-leaning guerilla (sic) group, commissioned a Public Policy Polling survey of 602 registered Georgia voters this week that found Carter and Gov. Nathan Deal are running nearly neck-and-neck. Executive director Bryan Long, who tweeted #RunJasonRun earlier this week, said he ordered the poll after reading the news that Democrats in Washington and in Georgia commissioned a survey to test Carter's popularity.
In the full poll results, there's this question:
Q3 If the candidates for Governor in 2014 were Nathan Deal, the incumbent Republican, or Jason Carter, a Democratic state senator from Atlanta and grandson of Jimmy Carter, how would you vote?
The emphasis above (grandson of Jimmy Carter) is my own, because it got me wondering about the effect of identifying Carter as the grandson of former Georgia Gov. (and oh yeah, President) Jimmy Carter. For some, it might hurt. For others, it might help. Do the two cancel each other out? Among older respondents who may very well remember Carter's time in office, Deal wins. Among younger respondents, Carter wins. How much of this is a matter of remembering Carter's presidency and how much is party identification or the age of the candidates themselves is impossible to say.
Among men Deal does better, 52-36, which is not unusual for a Republican. Among women Carter, the Democrat, leads 43-36. Again, not unusual. And it hardly bears mentioning that Deal cleans Carter's clock among conservatives while Carter does well among liberals. But among moderates, Carter wins 58-21 percent. That's a wow and must be heartening to those who support the guy.
But back to the question itself. Is it fair to identify him? Would you, if conducting a poll?
It makes sense to do so in that he enjoys little name recognition, but then again we'll see the same Nunn effect in the Georgia U.S. Senate race, so it raises an interesting question -- aren't we giving him name recognition by explaining the family connection, thus influencing the poll results? My gut says yes, but I'd love to see someone conduct a split-ballot poll in which half of the folks are randomly given the Carter family identification and half are not to see if it gets him a bump.
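If anyone ever runs that split-ballot, the analysis itself is nearly a one-liner. A sketch with invented counts:

import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Invented counts: half the sample (at random) hears the "grandson of
# Jimmy Carter" wording, half gets the bare name; compare Carter support.
carter_support = np.array([140, 110])   # respondents choosing Carter, per ballot
ballot_sizes = np.array([300, 300])
z, p = proportions_ztest(carter_support, ballot_sizes)
print(f"z = {z:.2f}, p = {p:.3f}")      # a small p would suggest a wording bump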
Big Bang Theory and Conservative Christians
Sheldon: You know, in difficult times like this, I often turn to a force stronger than myself.
Amy: Religion?
Sheldon: Star Trek.
Maybe you heard the news this week. Sheldon Cooper – yet again – did not win the Nobel Prize in Physics.
Cooper is, of course, fictional. But if you turn on a TV you’ll see him a lot more often than those two European guys who won.
It’s tough to avoid The Big Bang Theory.
The program centers on two California roommates, both physics professors, as well as two equally geeky friends and a neighbor, Penny, who, as Wikipedia puts it, "is contrasted for comic effect" with the nerdy guys thanks to her "social skills and common sense." Several other semi-regulars come and go (the mothers are my favorite).
The show, like NCIS, is almost impossible to avoid thanks to syndication.
Science and nerdom lie at the heart of the program. The writers go out of their way to not only get the science right, but to make it funny. The best known character, Cooper, lacks any social skills and was raised in a religiously conservative Texas family. As such, the humor often pokes fun at religion. As in:
(Talking about his religiously conservative mother's upcoming Christian cruise)
Sheldon: Frankly, Mom, I'm encouraged to see how advanced your group has become -- willing to sail into the ocean without fear of falling off the edge.
Given that the very title of the program identifies a scientific theory many conservative Christians find unnerving, not unlike evolution or, say, electricity, you'd expect the program to be unpopular with them. Look hard enough online and you'll find plenty of religious-based criticism, and when it comes to actually watching, conservative Christians are certainly less likely to do so than others.
And yet they don’t shun it entirely.
Take, for example, people who believe in a literal interpretation of the Bible. That's as conservative and fundamentalist as it gets, and among these folks, according to my analysis of recent survey data, 18 percent say they watch. That's nearly 1-in-5 from the word-of-God set. As you'd expect, among those who say the Bible is from God but not to be understood literally, viewership is even higher (28 percent). Among the godless heathens who think the Bible is merely a book, it's 32 percent.
How’s this for a promo – 1-in-3 of the everlasting damned watch our TV show! You should too!
For you stats nerds out there, the relationship is statistically significant: χ² = 77.0, p < .001. Don't worry if you don't follow the statistical mumbo-jumbo; Sheldon Cooper would, though he'd sneer at the social science.
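For the truly curious, here's roughly how such a test runs. The cell counts below are hypothetical -- the post only reports percentages -- but the mechanics are the same:

from scipy.stats import chi2_contingency

# Hypothetical crosstab: rows are Bible-belief groups, columns are
# doesn't-watch / watches. Counts invented to match the reported percentages.
table = [[1640, 360],   # literal word of God: ~18 percent watch
         [1440, 560],   # word of God, not literal: ~28 percent watch
         [680, 320]]    # a book written by men: ~32 percent watch
chi2, p, df, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, df = {df}, p = {p:.4f}")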
The Big Bang Theory appeals not just to science geeks. We know this because (1) it is very popular and (2) this is America, and just look at our lousy test scores.
You’d hope some of the program’s love of science would rub off on its audience. For example, in the same data there’s a question asking people whether the U.S. government (you know, that shutdown thing) should rely on scientific methods in crafting policy. Most reasonable people who are not named U.S. Rep. Paul Broun would say yes, of course it should. Still, 4 percent of all people say “never” (I’d love to meet those folks) and another 34 percent say only “some of the time.” If The Big Bang Theory is so popular and the power of TV so pervasive, shouldn’t watching it nudge people toward the evils of science?
Turns out, not so much.
My cranking of the data found no real difference between people who watch it and people who don't, at least when it comes to supporting a scientific approach to policy. And among our friends who believe in a literal interpretation of the Bible, watching the show makes no difference in their beliefs about science and policy.
You can take that last finding as good news or bad news. It’s bad in that you’d hope anything might help them see the role science plays in government policy, but it’s good in that people believe what they choose to believe and a single entertaining TV program isn’t going to really change their minds.
So whatever your beliefs just sit back, watch it, and laugh. It’s okay. And maybe next year, Cooper will win that Nobel.
Monday, October 7, 2013
Finally Figuring It Out
After devoting a hundred or so hours to reading, writing, and extensive data analysis, I finally sat down with a legal pad and drew out my research project. You know, little boxes and arrows, that sort of thing. So simple, yet so very helpful. I now know, finally, what the hell I'm doing. Warning, this is a bit PhDweeby.
Very simply:
- The preference for a presidential candidate will make one more likely to predict that candidate will win.
- In 2012 among Romney supporters, watching Fox News will increase this wishful thinking effect.
- We end up with expected losers (Romney supporters who expected Obama to win) and surprised losers (those Romney folks who expected their guy to pull it out).
- Surprised losers will be more negative about government, democracy and the election than expected losers and, of course, winners.
- Watching Fox News will, for surprised losers, result in even more negative attitudes about democracy and the electoral process as compared to expected losers.
It takes me a while to finally figure out what I'm doing. Some would suggest I still haven't gotten there yet. And they're probably right.
Oh as to results, so far the data are holding up nicely. Surprised losers are a bit different than expected losers, suggesting more work deserves to be done there, and there is most definitely a Fox News exposure effect at work making surprised losers more negative about democracy and the electoral process as compared to those who expected to lose.
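For the PhDweeby, here's a rough sketch of how those groups get coded -- hypothetical column names, since the actual coding lives in my SPSS files:

import pandas as pd

# Hypothetical column names; three example respondents.
df = pd.DataFrame({
    "vote": ["romney", "romney", "obama"],
    "predicted_winner": ["obama", "romney", "obama"],
    "fox_news_days": [1, 6, 2],
})
romney = df["vote"] == "romney"
df.loc[romney & (df["predicted_winner"] == "obama"), "group"] = "expected loser"
df.loc[romney & (df["predicted_winner"] == "romney"), "group"] = "surprised loser"
df.loc[df["vote"] == "obama", "group"] = "winner"
print(df)
# The key test is then a group x fox_news_days interaction predicting
# attitudes about democracy and the electoral process.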
This is the 2013 portion of my study, a very specific manuscript. I'm also doing one that expands this across several elections, but I want to get the 2013 version out the door first before turning to that bigger piece.
Friday, October 4, 2013
Citations
For tenure-track academics, publishing in peer-reviewed journals is the coin of the realm, but another indirect measure is how often your stuff gets cited. It's one thing to publish in an academic journal to an audience of tens of people worldwide, but it's quite another for fellow scholars to cite your work. It demonstrates your influence. Not a lot of influence, mind you, but of the scholarly kind that's easily quantifiable and has some meaning, at least among other academics.
Okay, that crud aside, I was just vanity-checking my research today to see how the cites are going. For example, my Journal of Broadcasting & Electronic Media piece from a few years ago on whether people learn from late-night comedy programs is doing pretty well. Here's a Google Scholar search of stuff from 2013 that cited that one particular paper. So far it has 89 total cites via Google Scholar, 18 of them in 2013, though several of those are not peer-reviewed journals, just as I suspect a number of the 89 are not.
Still, it's nice to be recognized, even if so modestly. There are some good journals in the 2013 mix, as well as books both national and international.
I'd be doing a hell of a lot better if I were the other Hollander, BA ... the hard scientist. He/She rocks.
Bad Poll? Blame and the Shutdown
There's a story on my local newspaper's site today about that age-old question -- assigning blame -- this time about the federal government shutdown. But this is a Georgia question. The hed:
Poll: Most Georgians blame Democrats for government shutdown
I'm not writing to blame one side or another; others do that better than I. No, I'm here to raise (yet again) questions about how a poll is reported. And don't blame just me. My colleague Lee Becker also noted the odd methodological quirk I'll focus on below, and he knows a lot more about running surveys than I do.
Let's look at the first two grafs:
ATLANTA — A plurality of Georgians blames Democrats for the shutdown of the federal government and favors repeal of Obamacare, according to a new poll of registered voters.
The survey of 1,000 active voters was conducted Tuesday and has a 3 percent margin of error.
Okay, it is damn near impossible to run a quality survey of a thousand people in one day unless it's a robo-poll. You know these polls: they're those annoying computer-driven calls that, thanks to the vagaries of who is willing to participate, tend to skew older and Republican. Important methodological note -- they can't call cell phones, making their samples really skewed away from younger people.
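That 3 percent margin of error, by the way, is just the standard formula at work. A quick back-of-the-envelope check in Python (my arithmetic, not the story's):

import math

# 95 percent confidence, worst-case (p = .5) margin of error for n = 1,000.
n = 1000
moe = 1.96 * math.sqrt(0.5 * 0.5 / n)
print(round(100 * moe, 1))   # about 3.1 points, so "3 percent" checks out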
Hell, one of the two orgs that did this poll specializes in robo-polling. Says so at the bottom of their site's front page. Details about their robo-calling service are here. So you'd expect the poll to skew to the right, meaning it should be more likely to blame Democrats, especially in a red state like Georgia.
But let's look at something else: the response alternatives, the technical term for the choices a survey respondent is given on the phone by that annoying computer voice.
Here's the telling graf:
It showed that 33 percent of those questioned blame President Barack Obama for the shutdown, 13 percent blame the Democrats who control the Senate – a combined 46 percent. On the other hand, 39 percent fault the Republicans who control the House of Representatives, and 14 percent consider both sides equally culpable.
Notice something? Of the choices offered for blame, two of the four point at Democrats, one points at Republicans, and the last is equal blame.
As we say on the journalism side of the academy: what the fuck?
Then, when you collapse those two Democratic categories back into one, you're surprised Democrats "win" at being blamed? Maybe they are to blame, but that's not my point here. My point is we're better served by good polling and, dammit, good journalism about the polling. Report polls with skepticism, especially if the polling org used a method that tends to skew one way or maybe has a partisan axe to grind. Landmark Communications, the other sponsor? See below:
Landmark has worked for over 100 Republican state legislators since our firm's inception in 1991. We are proud of the part we played in helping build the new Georgia Republican majority, working to help elect many good Republicans to districts where no Republican had won before.
C'mon. That doesn't mean they don't do good polling. It does mean they come from a specific partisan tilt, something that deserves mention in an honest news story. And the guy who runs Landmark does lots of work for news orgs too, so give him his due.
Note there are few methodological details in the story about how the poll was conducted. Nor are there any damn links to the sites where you can read the actual poll questions and see the methodology. No, you gotta go look for them yourself. If you find them, let me know.
- UPDATE -
Dr. Becker, being both smarter than I and not buzzing on pain meds, found some methodological details of the survey I missed. Key points:
- The survey used ACA or Obamacare, not Affordable Care Act. Yes, this can make a subtle yet significant difference in the results.
- No sign the poll randomly rotated the possible answers. This is standard practice in professional polls: some respondents would get Obama as the first option to blame, some would get him second, or third, and so on.
- He noted the federal rules against autodialers for robo-type calls and the AAPOR standards for this.
- I'd do more, but time for another pain med. Had work done on my vocal cords.
Wednesday, October 2, 2013
Find Those Affected
It's Journalism 101. When shit happens, find the people affected and tell their stories.
Or as this Washington Post blog today points out (last graf):
The cure for all of this is reporting: Go out and find people who are affected or not affected by the health care law and the shutdown. And shut up the pundits and the cable hosts. Why should anyone trust them to evaluate what’s going on in this country?
I'm sympathetic. It is Journalism 101 to go out and find folks hurt by a decision, or government stupidity, or a natural disaster, and tell their stories in a compassionate manner.
It also doesn't always work the way you think it works.
Research suggests the poor-little-person story can boomerang. The best example of this is a book-length treatment titled News That Matters. A set of experiments found that those stories can often lead people to blame the victim for being out of work, out of luck, having a hard time of it. There's basic psychology at play here. When things go poorly for ourselves, we blame external forces. When things go poorly for others, we blame some failing on their part. It's the fundamental attribution error.
We still do the stories. They're necessary. They need to be told. But what we need is research on how to write or broadcast them in a way that does not lead to a blame-the-victim mentality on the part of our audience. I'm certain in our storytelling we can find ways to offset such a psychological tendency, probably by asserting our "victim" doesn't want to be thought of as such, just wants to get back to work, or have access to some governmental service, etc. Like a lot of good journalism, the trick is in how you craft the story.
Just doing a poor-little-person story is not enough, and indeed it may make things worse.
Tuesday, October 1, 2013
Big Bang Theory
You can hardly turn on the TV and not find The Big Bang Theory playing somewhere. It's quite popular. And as everyone knows, the program often pokes fun at religion, so I've always wondered: what do religious people think of the show?
Yes, I have data for that. (I have data for everything)
I may write a longer piece, either academic or for real people, but lemme share a couple of quick hits when it comes to who watches the program.
Let's go with a favorite, people's belief in the Bible:
- Among those who believe the Bible is the actual, literal word of God, only 19 percent report watching the show.
- Among those who believe the Bible is the word of God but not to be taken literally, 28 percent say they watch the program.
- Among those who believe the Bible is a book written by men and not the word of God, 31 percent say they watch the program.
The results above are fairly obvious. Literalists/fundamentalists are, not surprisingly, less likely to watch the program. Still, nearly 1-in-5 do watch it, and that's very interesting. Then again, it's a damn funny show. Oh, and for the statistically minded of you out there, the relationship above is χ² = 62.1, df = 2, p < .001. In simple terms, the relationship is statistically significant.
Okay, one more. How important is religion in your daily life? This is a yes-no question and, as you'd expect, those who see religion as important were somewhat less likely to watch the program, but not by a lot. Yes, it's statistically significant (χ² = 31.9, p < .001), but not strongly so. Among those who see religion as important, 23 percent watch the program. Among those who do not, 31 percent watch it. So the difference, while statistically significant, isn't all that big.
I've got other data and maybe when I finish on some other projects I'll crank out a good pop culture academic piece or perhaps a data-based essay.