Tuesday, July 30, 2013

Measuring Media Exposure

A standard measure in our field is news media exposure -- how much people watch TV news or read a paper, that sort of thing. It's been around since the 1950s and remains in most Pew surveys, despite research suggesting respondents exaggerate their exposure and that the question is riddled with measurement error.  One guy called news media exposure "one of the most notable embarrassments of modern social science."  Ouch.

And yet, here I am, studying news media exposure because, dammit, I'm a notable embarrassment.

I'm comparing specific sources of TV news (Fox, CNN, ABC, etc.) with the traditional 0-to-7 days a week of generic TV news watching.  Is it better to be specific?  Well, yeah ... except being specific costs you additional survey questions, and anyone in the trade will tell you that extra time in a survey costs extra money.

We obviously gain something by asking about specific news sources.  If nothing else, it's well established by now in the literature that folks who watch Fox News come away with a very different version of the world than those who watch other news sources -- more so than is true of those who watch MSNBC.  If you doubt this, see the latest Journalism & Mass Communication Quarterly article about "death panels" and sources of news.  Fox News matters.  I'll blog on that article more another day.

Anyway, back to my problem because it's all about me.  I'm comparing how exposure to generic television news stacks up against exposure to the three broadcast and three cable news networks on five different criterion variables (political knowledge, likelihood to vote, etc.).  To put it in less PhDweeb terms, I want to know whether you get a better result from the generic single question than from the six specific network questions.  Oh, and even better, I've tossed in exposure to specific programs on the three cable news networks (The O'Reilly Factor, etc.).  Fun stuff.

But ... I'm also mulling over a different measure.  Take TV news exposure on a 0-to-7 days scale and combine it with a count of how many different TV news networks you say you watch (yes-no questions).  It's a weighted measure.  People who watch three networks but not very much (1 day a week) are different from those who watch three networks 7 days a week.  To flip it around, people who report watching TV news 7 days a week but from only one network should in some way be different from those who watch 7 days a week across three different networks.  The math is the tough part: how to come up with a valid weighted measure.  Suggestions welcome.  Right now I'm working with a simple interactive term (# of sources x days of generic exposure).  I haven't run the analyses yet to see how it fares against the traditional generic measure or the specific network measures, but I'm hoping to see something useful, something helpful for future scholars.
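If it helps to see the idea in code, here's a minimal sketch of that interactive term. The column names (days_tv_news, watch_fox, and so on) are made up for illustration, not my actual survey variables:

```python
# Minimal sketch of the interactive exposure term described above.
# Column names are made up; they are not my actual survey variables.
import pandas as pd

df = pd.DataFrame({
    "days_tv_news": [1, 7, 7, 3],     # generic 0-to-7 days measure
    "watch_fox":    [1, 1, 0, 1],     # yes/no item for each network
    "watch_cnn":    [1, 0, 0, 1],
    "watch_msnbc":  [1, 0, 0, 0],
})

network_items = ["watch_fox", "watch_cnn", "watch_msnbc"]
df["n_sources"] = df[network_items].sum(axis=1)           # breadth: how many networks
df["exposure_x"] = df["n_sources"] * df["days_tv_news"]   # simple interactive term

# One wrinkle: anyone who checks zero networks scores zero no matter how many
# days of generic viewing they report, which may or may not be what I want.
print(df[["days_tv_news", "n_sources", "exposure_x"]])
```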

Or at least another pub.  Yes, this is what we academics do when not teaching and other stuff.

Again, methodological suggestions welcome.


Sunday, July 28, 2013

MSCNE13

I have all these T-shirts with MSCNE on 'em, followed by a year, and now it's time for a new shirt because MSCNE13 is upon us.

Huh?

It's a mouthful of a name -- Management Seminar for College Newspaper Editors.  Some 60 or so college editors from 28 states arrive today (Sunday) for a week of bonding and frivolity and training and workshops and more frivolity and a desperate need to figure out what the hell they're gonna do when they go back home and find themselves in charge of their college newspapers.

To all of you, welcome and best of luck.

I teach some years but not this one (I'm having vocal cord issues, tied to thyroid issues, tied to cancer issues ... a long and very boring story that involves an ugly scar across my neck that's not the result of some cool dangerous reporting story).  I do hope to sit in on a couple of sessions, including Monday's "The State of College Media, the Future of College Media" at 1:15 p.m. in MLC, Room 250.  With luck someone will put this session online, because it'll be helpful for those who can't make it.  If you're near Athens, I recommend dropping in.

Participants, you'll really enjoy the guest speakers, the hands-on training, and especially the simulated news event later in the week.

It's cool to see someone representing my alma mater, the University of North Alabama, known to many as TUNA, and its paper The Flor-Ala.  I think someone has come before, but I'm not sure.  Some years I've taught MSCNE, some years I've been out of the state or country.

And while I've enjoyed teaching sessions or sitting in on sessions, one thing I don't like is standing in line at the campus Jittery Joe's waiting for coffee.  Important safety tip, visiting editors -- regular customers go to the front of the line.  It's true.  Just ask Caleb, the guy who runs the MLC shop.  Hell, he makes my coffee before I order it, I'm such a regular.  

When it comes to coffee, fear me.  Just ask Bentley.



Friday, July 26, 2013

JOBEM: The Political Satire Issue

The latest issue of the Journal of Broadcasting & Electronic Media arrived in my mailbox the other day, and there are two articles that focus on political satire -- specifically, faux news programs like The Daily Show with Jon Stewart.
  • An article by Dannagal G. Young takes a uses and grats approach to explore why young people are drawn to, or away from, satirical programming.  In mass comm we often dare to research the obvious (I've done it myself), and this one finds those who like such programming watch it for the humor, to learn what's going on, because they see it as unbiased (really?), and because it makes the news fun.  When research says it's about "young people," you can read that as code for "I surveyed kids in college classes."  That's the case here, and as many of us know, college students are linked to, but not necessarily the same as, humans.  A big deal here is that young people watch because such programs make news fun.  The journalist in me cringes, but it's clear that the motivations for why people watch are very different from what Stewart and others hope to accomplish in their programs.
  • An article by R. Lance Holbert and colleagues looks at perceptions of satire as persuasion.  It's a different take, an experimental approach (yes, yet again the intensive study of that creature, the college student) that finds young voters perceive persuasive intent in satire.  It dips into the differences between Horatian and Juvenalian forms of humor (I covered this last summer in a grad seminar, very interesting stuff).  The authors find "neither type of satire functions as effective narrative persuasion" from the standpoint of the dominant model in the field, the elaboration likelihood model (ELM).
There's also a piece about Twitter I haven't gotten to yet in the journal.  Perhaps more on that another time.

Thursday, July 25, 2013

What People Know (or, mostly, don't)

It's time for my occasional knowledge roundup, a look at breaking studies and stories about what people know (or don't know) about stuff. Some of these, well, you'll see ...
  • More than a third of Australians don't know how long it takes the Earth to orbit the Sun, says the Science Literacy of Australia report.  A third of respondents thought the Earth took a day to orbit the Sun.  Damn, that's fast.  Plenty of other goodies in this story if you like to make fun of our friends down under (and who doesn't?).  A different version, focusing on young respondents, here.
  • Women don't know nuthin about politics. I blogged this one recently (see that post for more details than I give here), but the 10-nation study still has legs, as can be seen by a new story focused on Norway, based on the same survey.  Let me point out that this study generated a lot of heat over the pond, and rightly so, as it oversimplifies some key differences in how political knowledge surveys are done.
  • Okay, a weird one, meaning a Brit one. I can't do this justice, so let me just quote the first graph: One in 10 people have tasted “chicken trotters”, while 8% believe they have tried “pigs’ wings”, while others think pigs fly and chickens trot, a survey from the RSPCA has indicated.  Pigs' wings. Priceless. Here's a different take on Brits and food knowledge, also kinda funny.  I did not know cheesecake originated in Greece.
  • The most likely avenue to attack computer users is through their lack of knowledge about computer security, says this survey of IT professionals.  It's true.  Even among my own colleagues, there are many who are clueless when it comes to safe computing or even how to turn on their printer.  And I work in a j-school.
And that's my roundup of the week.

Wednesday, July 24, 2013

Measuring Political Knowledge via the ANES

Warning -- this is one of my more PhDweeby posts. I'll skip the math, but in this post I discuss a paper soon to be published in Political Analysis, a journal full of scary math.  It's about the usefulness of questions from the American National Election Studies -- the dominant source of data in most political science -- for measuring political knowledge.

Before we get to the meat of the thing, there's a nice section on why political knowledge matters, how it "shapes the behavior of citizens in a democracy," and a nice discussion of the different ways we measure the concept.  I may come back to that in a moment.

The authors took data from the 1992, 1996, 2000, and 2004 ANES.  In these data are various measures:
  • Correct placement of the candidates and parties ideologically from one another.
  • Identification of candidate and party positions on major policies.
  • Identification of the positions held by major political actors.
The authors took all these questions (there are a lot of them), summed them, divided by the number of items, and came up with a score.  Fairly common procedure.  After cranking the data through analyses to test for measurement invariance and the like, they arrive at the key graph below.  It's a little long, but worth the time:
Our results help explain why researchers have been frustrated in their attempt to measure and explain apparent knowledge gaps between various grouping variables including participation, media use, educational attainment, income, and age. Models seeking to explain observed differences have been unsuccessful because the construct of political knowledge is apparently qualitatively different between subgroups based on these grouping variables—excluding media use. Attempts to explain these differences will thus be unsuccessful, if political knowledge is measured using the full battery, because the measures are not sufficiently invariant to permit valid comparisons. The inconsistent results may also stem from a lack of conceptual clarity regarding the measurement model underpinning political knowledge. Our results suggest that the use of indexes assumed to be measured without error cannot be supported.
Note the "excluding media use."  That's nice to know for you budding political communication scholars out there, and indeed, if you look especially at Figure 3, the media use items (in particular, newspaper use) really pop compared to the others.
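For the non-PhDweebs, the scoring procedure the authors describe boils down to a proportion-correct index. A minimal sketch, with invented item names rather than the real ANES variable labels:

```python
# Proportion-correct knowledge index: sum the 0/1 items, divide by the number
# of items. Item names are invented, not actual ANES variable labels.
import pandas as pd

items = ["place_candidates", "party_positions", "actor_positions"]
df = pd.DataFrame({
    "place_candidates": [1, 0, 1],
    "party_positions":  [1, 1, 0],
    "actor_positions":  [0, 1, 1],
})

df["knowledge"] = df[items].sum(axis=1) / len(items)
# The paper's complaint: treating this index as error-free and as meaning the
# same thing across subgroups is exactly what the invariance tests reject.
print(df["knowledge"])
```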

The authors argue that political knowledge scales must not be "established by fiat," a shot across the academic bow to most of us who use them.

Tuesday, July 23, 2013

Weiner

Anthony Weiner has apparently sexted again.  A press conference is scheduled soon.

Yeah, so who sexts?

According to this 2010 Pew study, only 6 percent of U.S. adults have sent a sexually explicit text to someone else (3 percent have forwarded one).  Fifteen percent have received one.  I have not.  Damn.

By age, the high in Pew's breakdown is 10 percent of those 25 to 34 admitting they sexted.

Unfortunately the numbers are too small to explore what factors predict the likelihood to sext -- other than being a disgraced congressman (and now mayoral candidate) from New York, that is.  As Pew notes:
Men and women are equally likely to send sexts, although male cell owners are a bit more likely than women to say that they have received these types of images on their phones (18% of male cell owners and 12% of female cell owners have done so). Men are also a bit more likely to forward these messages to others compared with women (5% of male cell owners and 2% of female cell owners have forwarded a sext to someone else).



Screwed by an Academic Journal

An interesting survey from Pew just hit online, this about how Latinos get their news (English or Spanish) and their attitudes about this news.
When it comes to the accuracy of news reporting, Hispanics are just as likely to say Spanish-language news organizations and English-language news organizations “get the facts straight” in their news stories and reports.
Why is my title above "Screwed by an Academic Journal"?  Because I did similar research, sent it to a journal years ago, and the editor sat on the study until, last week, I finally pulled the thing in frustration.  Did the editor apologize for fucking up?  No.  Instead, the editor asked if I'd read for the journal.  I did not respond.  More on my frustration here, if you care to read it.

Notice how I nicely did not identify the journal, not even the gender of the editor?  If you care, I'll tell you privately.

I did at least get a conference paper out of the study, and I have fresh data that would really make a new version good -- but I have two other studies in my queue to finish.  Not sure I'll get to it.




Friday, July 19, 2013

Is There Enough Caffeine in the World?

I am in codebook hell.

I'm working on a rather complicated research project that I'll explain briefly below, but basically it requires me knitting together national survey data from the 2000, 2004, 2008, and 2012 presidential elections.  Not just that, but a pre-election survey and a post-election survey for each year (same people before and after) -- essentially, eight surveys.  National surveys.  With hundreds of questions each time.

How big are these?  The PDF of the 2004 survey alone is 754 pages.  I just happened to have it up right now.  Go ahead, look at it.  Feel my pain.  Or at least my need for caffeine.

My plan is simple.  Each pre- and post-election survey has a number of questions similar, if not identical, to those asked in other years.  I'll slowly but ever so surely create SPSS files for each year and then, with luck, merge them into a single file for analysis.  This is harder than it sounds.  Take political interest, for example.  A simple yet important concept.  I have to find the same-worded question in all the surveys.  In 2004, for example, it's VARIABLE 045057.  I gotta label each one appropriately (POLINT04 for the 2004 version, POLINT08 for 2008, you get it) and make double sure they are identical not only in wording but also in what we in the biz call response alternatives (4=high, 3, 2, 1, etc.).  And then you do the merge and, honest to God, hope for the best. 
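For the curious, here's a rough sketch of that relabel-and-merge logic in Python/pandas (my actual work happens in SPSS). V045057 is the 2004 political interest item mentioned above; every other file name and variable name here is a placeholder, not a real ANES label:

```python
# Sketch of the relabel-and-merge step. V045057 is the 2004 political interest
# item mentioned in the post; all other names here are placeholders.
import pandas as pd

rename_maps = {
    2004: {"V045057": "POLINT04"},
    2008: {"V08XXXX": "POLINT08"},   # placeholder, not a real ANES name
}

frames = []
for year, mapping in rename_maps.items():
    df = pd.read_spss(f"anes_{year}.sav")   # placeholder file names
    df = df.rename(columns=mapping)
    df["year"] = year
    # Before merging, double-check that the response alternatives line up
    # across years (4 = high interest ... 1 = low), recoding where they don't.
    frames.append(df)

merged = pd.concat(frames, ignore_index=True)   # the "hope for the best" step
```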

Today I've mostly worked on 2004 data.  I'll pull from the bejillion available questions about a hundred or so I may or may not use to build multivariate models.  For each year.  Label 'em, merge 'em, analyze the hell out of 'em.  Oh, and write stuff too. 

It's all part of my research sabbatical.  I'm not teaching this Fall, making it perhaps my best teaching performance semester ever, a "do no harm" kinda thing.  The study?  Simply put, it has to do with the notion that a democracy relies on the consent of the losers -- and it takes a closer look at those who expected to win, but didn't, and just how consenting they are after an election outcome.  There's gobs of theory to appease the academic gods, not to mention it'll play well in the press.

If it all works, that is.




So There's This Study ...

There's this study I'd love to read but, for the life of me, even with my UGA super password and access, I cannot get it. Why am I telling you this?  Because I'll never get back the half-hour I just wasted trying every trick to read the paper.  It's in a journal called Political Analysis.  The article title is:

An Analysis of ANES Items and Their Use
in the Construction of Political Knowledge Scales 


Which is stuff I do. Exactly what I do, with the data (ANES) I do it with.  The abstract is kinda in English, though not really in English unless you're a numbers nerd and PhDweeb.  The point seems to be that the ANES (American National Election Studies) items are crap, but a subset can be less crappy, and really we should think of the measure as more of a "latent variable" rather than as "cause or formative indicators."  Simply put, the measures suck.  Or so they say.  It's hard to tell from the abstract.

Thursday, July 18, 2013

Finally ... a Real Poll on the Zimmerman Verdict (sorta)

There were numerous sloppy media pseudo-polls after the Zimmerman verdict (see examples and my comments here and here), but finally we have what appears to be a real poll, even if it is by Rasmussen.  The results?
  • 48 percent of U.S. adults agreed with the verdict
  • 34 percent disagreed
  • 18 percent somehow remained undecided
I was close.  I guessed, once a real poll emerged, it'd be between 45 and 55 percent in favor of the verdict. More men than women agreed with the verdict.  Republicans strongly agreed with the not guilty verdict, Democrats disagreed.  No surprise there.  The poll also breaks it down by race but I caution you to not pay much attention to that number given the small sample size of minorities likely included here.

The poll was of 1,000 adults, but I can't tell at a casual glance how the survey was conducted (robo-call, Internet-based, what?).  Out of 1,000 adults, figure only a hundred or so blacks.  Hence, beware of race breakdowns unless the survey includes an oversample.  This one does not.




Wednesday, July 17, 2013

Academic Publishing Hell

A few years ago I submitted a manuscript to an academic journal.  No news there.  I do this foolishness a lot -- conduct research, write up results, find a sexy title that includes a colon (it's a rule), and send it off into the academic publishing world to find a home.  Routine stuff.

This one turned out not to be so routine.

No names.  Let's protect the guilty.  But follow this process:

2009: Submit manuscript to journal
2011: Finally get comments from one reviewer (I like it, publish that baby). Revise based on this, resubmit.
2012: Finally get comments from another reviewer (great piece, publish it, do a few small things). Revise again. Resubmit.
2013: I think it's done and will eventually appear -- until an email arrives this week from the editor with comments from yet another reviewer.

Enough already.  I haven't even bothered to open the reviewer comments file.  Instead, today I sent an email to the editor pulling the article.

I have the benefit of being a full professor, of course.  I don't need the pub, though it's always nice to have, and I have five other things I'm working on now, and I'm not about to spend time revising, yet again, a paper I submitted years ago.

Life is too friggin short.

-- Update --

I got an email back from the editor acknowledging my email withdrawing the manuscript but with no, I repeat no, apology for the years-long process.  Instead, I get asked if I'd like to review for the journal.

Um, no.

And anyway, I'm already on the editorial board of four journals, plus I read regularly for four more.



More Bad Poll Journalism

The other day I wrote about some of the awful polls being used by news media to gauge -- in no real way whatsoever -- public reaction to the Zimmerman verdict.

Don't think we're done yet with the bad poll stories.

My favorite of the day is this one, in which 74 percent agreed with the verdict.  This poll has the advantage of a big N (3,557 respondents) but the disadvantage of being complete bullshit, as it does not use a random sample but rather relies on people who happened to visit this web site and happened to participate.  I discussed this briefly in my post the other day -- the weaknesses of a SLOP (self-selected opinion poll).  And don't be fooled by the large number of respondents.  As we know from our history (see the Literary Digest debacle), when it comes to polls, size doesn't matter.
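If you want to see why the big N doesn't rescue a SLOP, here's a toy simulation (every number invented): a population split 50-50 on a question, a modest random sample, and a much larger self-selected "sample" in which one side is twice as likely to bother participating.

```python
# Toy simulation, all numbers invented: a 50-50 population, a random sample of
# 1,000, and a self-selected "sample" in which one side participates at twice
# the rate. The big N does not save the SLOP.
import numpy as np

rng = np.random.default_rng(42)
population = rng.integers(0, 2, size=1_000_000)   # 0/1 opinions, true mean ~0.50

random_sample = rng.choice(population, size=1_000, replace=False)
print("random sample of 1,000:", random_sample.mean())   # lands near 0.50

participate = np.where(population == 1, 0.0047, 0.00235)  # the 1s are twice as eager
slop = population[rng.random(population.size) < participate]
print("SLOP, n =", slop.size, ":", slop.mean())            # ~3,500 people, lands near 0.67
```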

Simply put, when a news org uses one of these polls, it needs to slap a big NON-SCIENTIFIC FOR ENTERTAINMENT USE ONLY label on the poll itself.

Tuesday, July 16, 2013

Little Interest in Zimmerman?

Pew released some survey numbers that suggest public interest in the Zimmerman trial was not all that high.  People are reading these numbers in the wrong way.

Only 26 percent report following the trial "very closely."  Yes, there's a racial divide -- no surprise -- with 56 percent of blacks saying they were following it "very closely" compared to 20 percent of whites.  Keep in mind the survey included only 104 blacks, giving that subgroup an 11.5-point margin of error.  In other words, reader beware.  So let's set aside the race difference and focus on that 26 percent, a lower number, Pew tells us, than for the Trayvon Martin shooting itself (36 percent) and certainly lower than that benchmark of all trials, O.J. Simpson (48 percent "very closely").
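For the record, the textbook margin of error for a subsample of 104 (at p = .5, 95 percent confidence) works out to roughly 9.6 points; the 11.5 figure above is larger, presumably because it also folds in a design effect for the survey's weighting. A quick sketch:

```python
# Back-of-the-envelope margin of error at p = .5 and 95 percent confidence.
# The 11.5-point figure cited above is larger than this textbook number,
# presumably because it also folds in a design effect for weighting.
import math

def moe(n: int, p: float = 0.5, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1 - p) / n)

print(round(100 * moe(104), 1))     # ~9.6 points for the 104 black respondents
print(round(100 * moe(1000), 1))    # ~3.1 points for a 1,000-person sample, for comparison
```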

Only 26 percent! people sputter.  So few.  Why all the coverage?

Lemme try to explain it this way.  Given so many media choices out there, the loyal news audience is relatively small -- smaller than you might imagine, counted typically in the hundreds of thousands, not millions, of viewers.  A trial like this bumps the numbers up, as does any major event, especially one hyped as much as this trial was.  And yet, and yet -- there is a core news audience, and really CNN and Fox and HLN and all the rest are fighting for the scraps, for the few million who care about the news.

Thus, 26 percent matters.  Go ahead, multiply that by all the adults in the U.S.  That's about the number of people likely to follow a major news event that does not directly affect them or those they know.
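To actually do that multiplication (the roughly 240 million U.S. adults is my own ballpark for 2013, not Pew's number):

```python
# Rough arithmetic for "multiply that by all the adults in the U.S."
# The ~240 million adult figure is my ballpark for 2013, not Pew's.
adults = 240_000_000
following_very_closely = 0.26
print(round(adults * following_very_closely / 1_000_000))   # roughly 62 million people
```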

That benchmark case, O.J., was in 1994.  That's pre-Internet for most folks -- nowhere near the media choices available today, plus it had the advantage of a football and film celebrity charged with murder.  In other words, it's a lousy benchmark.  Rodney King goes back further, to 1991.  Neither really fits our media landscape today, with so many cable networks, radio talk shows, Internet sites, and a million other media ways to spend your time.  The audience for the news is big, if you are generous in your definition of news, but the core audience is relatively small.  That's why all the coverage on the cable news networks -- they're scrambling to win that core audience, not the entire U.S.






Monday, July 15, 2013

Polling on Zimmerman

I don't want to spend a lot of time on the Zimmerman trial.  Actually, I don't want to spend any time on the Zimmerman trial, but let's take a quick look at the polls after the not guilty verdict.

Frankly, most of the polls suck.  They're not even polls, at least in the real sense.

They're SLOPs -- self-selected opinion polls -- which is a scientific way of saying they're complete bullshit.  Why?  Because the only people participating are those who happen to go to that site and care enough, or are pissed enough, to participate.  A quality survey uses a random sample, meaning everyone theoretically has a chance of being included.

This is an example from, of all things, a public radio station out of California.  I took it, and about 52 percent went with Zimmerman and self-defense.  This TV station poll is no better, with over 70 percent favoring Zimmerman.  And then there are the really bad polls, like this awful HuffPo effort (the results, interestingly, are about one-third for each of the response alternatives).  Hey, and lawyers get into it too, with this poll from a lawyer web site (most thought Zimmerman not guilty).  Finally, the Orlando Sentinel, the nearest big paper, got into the act and also slopped its way to a 70 percent finding for Zimmerman.

There's nothing wrong with these polls -- as long as you label them non-scientific, completely useless, and less a measure of public opinion than a way to engage an audience.  Simply put, for news sites, they're misleading.  They make people think they measure public opinion when, instead, they capture a handful of opinions from those who happen to visit the site, happen to participate, and are probably skewed one way or the other in the first place.  In simpler, more academic terms, they're bullshit.

Okay, Hollander, what about the real polls on the verdict?  What do they say?

I'm waiting to see one, and expect a few out later today.  If I had to guess, I'd say between 55 and 65 percent favor the verdict, but I'm lousy at predicting public opinion.



Friday, July 12, 2013

Remember Talk Radio?

Talk radio used to be the new kid on the block, at least in political communication research. That was the 1990s, when academic dinosaurs roamed the world and I published a small mountain of stuff about the effects of political talk radio, certainly enough to get me tenure. There was the infamous 1989 "tea bag rebellion" and the talk radio-fueled 1994 Republican takeover of Congress.

It's been a sleepy field ever since.  Why? The whole Internet thing, of course, and especially the emergence of social media.  Twitter and Facebook pushed talk radio off the sexy research map.  So very old news, talk radio.  So uninteresting.

So ... wrong.

Research on this medium is still out there and, believe me, talk radio still matters.  Just ask the 17 million or so folks who listen to Rush Limbaugh or the millions of others who listen to Sean Hannity, et al., not to mention influential local talk radio hosts.   Here's a recent study that found talk radio "played a fundamental role in voicing the protests against the Obama administration." It presents a typology of the talk radio biosphere -- the cultural and the fiscal conservatives. Useful stuff.

And there's this study, which comes at it from a completely different direction and examines the types of callers to political talk radio programs.  In many ways this resembles the early work on talk radio, all the way back to the 1960s, when it was a medium dominated by liberals who used it as a counter to what they considered corporate, conservative news media.

And then there's this paper, which focuses on one of the true powers of talk radio -- its niche potential to reach and engage certain audiences, in this case black listeners.  I'd like to see more of this, particularly about Latinos.

My point? Don't ignore talk radio, either as a political factor if you're a journalist or consultant, or as a research area if you're a PhDweeb like me. Yes, it's no longer "new media," and yes, it's not as sexy as the latest Twitter study, but in terms of real effects I suspect you'll find more from talk radio than from social media.  Whether you'll find a place to publish that research is a tougher sell.  Editors, after all, want sexy too.  That's another post for another day.


Thursday, July 11, 2013

10-Nation Study: Women Less Politically Knowledgeable

So I'm having a sleepless night and this tweet comes across my screen at 3:14 a.m.:
Let's look closer at the research they're tweeting about. Basically, the article about the research says what's in the tweet above, but thankfully includes some caveats, such as:
Academic studies have previously found that women have higher levels of risk aversion and so are afraid of being wrong. When faced with multiple-choice questions, women are more likely to give a 'don't know' response than men. Others argue that the questions used to gauge political knowledge tend not to be gender-neutral and that women's political knowledge is more concerned with a personal experience of local politics and government programmes relating to daily life.
Simply put, men are more likely to guess on political knowledge tests (but not to ask for directions). Lemme explain how this skews results. We generally score such tests as "1" if you answer correctly, "0" otherwise. That means being wrong and saying "I don't know" are scored the same. Men guess more, so they're gonna get some items right (and some wrong). They pick up extra points this way, just by guessing. Studies that control for this tend to find the "guessing" can shift results in a positive direction for men, or a negative one for women.
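Here's a toy simulation of that scoring problem (all numbers invented): two groups know exactly the same share of the answers, except one group guesses on the items it doesn't know while the other says "don't know."

```python
# Toy simulation, numbers invented: two groups know the same 50 percent of
# items, but one guesses on the rest (right 1 time in 4 on a four-option
# question) while the other says "don't know." Both wrong answers and
# "don't know" are scored 0, so the guessers come out looking smarter.
import numpy as np

rng = np.random.default_rng(7)
n_people, n_items, n_choices = 10_000, 20, 4
knows = rng.random((n_people, n_items)) < 0.50           # true knowledge, both groups

score_dont_know = knows.mean(axis=1)                      # credit only for known items
lucky = rng.random((n_people, n_items)) < (1 / n_choices)
score_guessers = (knows | (~knows & lucky)).mean(axis=1)  # known items plus lucky guesses

print(score_dont_know.mean())   # ~0.50, the true knowledge level
print(score_guessers.mean())    # ~0.625, inflated purely by guessing
```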

Yes, I've written about this extensively. If you're interested, you can see some of my previous scribblings about research on women and political knowledge here, here, and here. It's an area ripe for more study, if you're fishing around for a conceptual area to work in.  What else skews tests toward men? The topics of questions asked, politics as "game" or "contest" versus problem solving, and using only male leaders in questions versus female leaders (this one makes a huge difference, research says).

In other words, it's a lot more nuanced than presented here.

Okay, this is an interesting study as it includes 10 nations: Australia, Canada, Colombia, Greece, Italy, Japan, Korea, Norway, the UK, and the US. Despite this diversity, "women answered fewer questions correctly than men in every country."  Setting aside the methodological concerns I raised above, there's a powerful media effect in play.  As they write:
"It seems that gaps in exposure to media are related to the gaps of knowledge between men and women," says Professor Kaori Hayashi, co-researcher on the report. He found that the gender-bias of hard news content in all countries plays an important role in gender gaps and underlines the serious lack of visibility of women in TV and newspaper coverage.
So as a media guy, I'm happy to see a media angle pop out.  I'd love to know the questions they asked, but I'm having difficulty locating them.  This page shows me only the grant info -- not all that helpful -- yet it's the "report" link from the Guardian piece.

As faithful readers of the literature know, so much depends on the kinds of political knowledge questions you ask, how you ask them, and the kinds of response alternatives available (multiple choice versus free recall, for example). I'm guessing this study has not gone through peer review yet, so we're not seeing the details. Until then, remain skeptical.  It's hard to tell whether they used statistical controls and accounted for other factors before presenting the gender differences as real.




Tuesday, July 2, 2013

Polisci Muscles in on Journalism's Turf?

Everyone knows journalism is a wounded profession. And when there's blood in the water, it won't be long before the sharks come calling.

Call this shark political science.

Or, as this study asks:
As the criticism of our current state of journalism and the current state of journalism education mounts, we ask a simple question: Could political science graduates do a better job of providing political reporting than graduates with journalism degrees?
Good question. Unfortunately, the authors "do not test this question empirically," which I'd argue is kinda important. Instead, they scan political science curricula to argue grads have the skills necessary to "wade through political spin, manipulation, and misdirection."

Maybe. Maybe not.

All I have access to is the abstract, same as you if you followed the link above. And I have absolutely nothing against polisci grads, though I'm obviously biased as a journalism professor.  So allow me to make a few points before returning to what little we can learn from the journal article.
  •  Many, if not most, political journalists were not journalism majors in the first place. Anyone familiar with the field knows this, and it undermines the premise of the study. 
  • Journalists get news one of four ways: observation, interviewing, documents, and data. Little in the traditional political science curriculum prepares you for the first two, not a lot preps you for the third, and it's unlikely most political science faculty are qualified to teach those skills. And let's not even get into multimedia skills.
  • But, political science majors are better equipped to understand the government and political systems, how they work (or don't work) and, importantly, data.
So should journalism schools be quaking in their collective boots at the idea of a flood of polisci majors taking their students' jobs?  No. I argue this despite the study's survey, which found "an openness on the part of media executives to hire political science graduates to do their political reporting, even if such graduates do not possess a degree in journalism." Because, duh, they already do this.  Walk into any newsroom and you'll find lots of folks without a j-school pedigree. Plus, if you ask "media executives," of course they'll say they're open to this. I'd be more surprised if they said they weren't.

Back to the study.  It's published in what can best be described as a minor outlet, the Journal of Political Science Education. We have a similar Tier 3 education journal in my field. It's okay, just not top-of-the-line rigorous stuff, and the authors are not from major universities. Again, that's okay, I'm just stating information about the source so you can put the study in its proper perspective.

Plus I can't tell much about the survey, and the "free supplemental" material promised at the end of the abstract does not work for me.  Who was surveyed? How were the questions posed? Too little info. 

My conclusion? While there may be blood in the water and a few academic sharks circling, most of those sharks are ill equipped to do much more than swim in circles.