Wednesday, January 22, 2014

Internet Broadband and Political Behavior

In the category of variables I never thought to correlate, here's a study from a couple of months ago that examined the diffusion of Internet broadband and its association with voter turnout and campaign contributions. As you can tell from the abstract, the author finds broadband expansion is related to greater turnout and an increase in donated dollars, with more of the money going to Democrats, who are hypothesized to have a stronger online presence.

Interesting stuff.

More to the point of this blog, he finds broadband availability is associated with greater political knowledge and "the promotion of liberal values." Let's take a closer look at his evidence. Like a lot of economics papers, this one is full of formulas and statistical wizardry. Ignore it. The paper does a nice job of measuring political knowledge across several survey waves, with no fewer than six items in one wave and up to 17 in another. I could never find a measure of how well the items hung together, though, either in terms of reliability/internal consistency (Cronbach's alpha) or factor analysis. Perhaps that's not the norm in economics, while in political research it would be expected. Again, quibbles.

There's some interesting stuff late in the paper for folks doing political communication research, especially Tables 13 and 14. If I read Table 14 correctly, and I'm not sure I do, the number of ISPs (broadband) is negatively related to watching Fox News and the news on CBS. The other sources of TV news are not statistically significant. Broadband seems to negatively influence TV news viewing via both traditional broadcast networks and cable channels (Table 13) but is unrelated to the other news media exposure items.
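Since knowledge items like these are usually just right/wrong scores, the internal-consistency check I'm wishing for is easy to run. Here's a minimal sketch of Cronbach's alpha in plain Python; the three-item response matrix is made up for illustration, not taken from the paper:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a battery of items.
    items: one list per item, each the same length (one entry
    per respondent)."""
    k = len(items)
    item_vars = sum(pvariance(item) for item in items)
    total_scores = [sum(vals) for vals in zip(*items)]  # each respondent's summed score
    total_var = pvariance(total_scores)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical responses to three knowledge items (1 = correct, 0 = wrong)
items = [
    [1, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 0, 0],
    [1, 0, 0, 1, 0, 1],
]
print(round(cronbach_alpha(items), 2))  # -> 0.8
```

Values above roughly .7 are conventionally taken as acceptable internal consistency, though like any rule of thumb that's debated.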

Again, it's a complicated paper and I've just started getting into it, but there's probably more here of use than you see at first blush -- if you can work through the economics and dismal-science language.

Thursday, January 16, 2014

Picking Nits

There's a very useful page for journalists that explains various terms found in academic studies, especially statistical and research methodology terms. That said, it's time for some nitpicking. Lemme be clear: I point my students to this page, so I'm just being persnickety. Most of what's there is written with clarity and explains nicely, if simply, what these terms mean so a journalist can translate murky, dense academic research. My criticisms below run long, and I'd tighten them up if I were rewriting the site. So, to pick a few nits in no particular order:
  • "There are two primary types of population samples: random and stratified." Well, yes, to a degree. There are also convenience samples and snowball samples, the kinds a journalist really wants to pay attention to because they suggest a weak study. Really, sampling comes down to its random nature: does everyone theoretically have a chance to be included in the study or survey? If not, that raises serious questions.
  • Margin of error is not well explained, but perhaps it's well enough explained for most journalists. It gets mixed up with the confidence level, and, this is important, the confidence level does not change as sample size increases. You set it in advance, typically at 95 percent (or p<.05 if you think in terms of inferential tests). A survey result of 44 percent with a 3 percent margin of error means the real number, if we could question everyone, would fall between 41 and 47 percent. The confidence level tells us that 95 times out of 100, a sample drawn this way would produce an interval that captures the true value. If you choose, you can bump the confidence up to 99 percent (p<.01), but that widens your margin of error.
  • The cause-and-effect section needs to include the three things that must be present to truly infer cause: time order (one precedes the other), covariation (as one changes, the other changes), and no third variables, meaning nothing else could better explain the relationship. In fairness, this latter point comes up nicely just after, as confounding variables.
  • The difference between mean and median and their journalistic usefulness is touched on well. I'd point out that the mean is sensitive to outliers and use a better example. If you're looking at the salaries of a department and there's one really high salary while the rest are middling, the mean will be pulled upward by that single high salary (or home value in a neighborhood, etc.). The median corrects for that, which is why we tend to use the median for salaries, home values, and similarly skewed distributions. I preach this to my j-students again and again: it's the median, stupid.
  • Most research is NOT about the relationship between two variables. Often it includes a set of independent variables thought to predict a single dependent variable. Nerdy, I know. Regression is tricky to explain, but basically it estimates the power of a number of variables to predict a single variable while statistically controlling for one another. Take income as the thing you're trying to predict. Education is a good independent variable, and so is age. But if you put both in a regression model, it's likely that education "explains all of the variance" and age is no longer a significant predictor. Yes, age and income correlate, but in a regression that controls for education, age may no longer be a factor. I'd tie correlation and regression together with a single example and show how a correlation can disappear in a regression, which also ties into cause and effect.
  • As some of this has to do with covering polls and surveys, I'd add more or link to a couple of well-known sites designed to guide journalists on how to evaluate a good poll. A good one is here, 20 questions journalists should ask about polls. Highly recommended.
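Two of the bullets above, margin of error and mean versus median, are easy to demonstrate in a few lines of Python. This is a sketch assuming a simple random sample and the usual normal-approximation formula for a proportion; the 44 percent result and the salary figures are invented for illustration:

```python
import math
from statistics import mean, median

def margin_of_error(p, n, z=1.96):
    """Margin of error for a proportion p from a simple random sample
    of size n. z = 1.96 for 95 percent confidence, 2.576 for 99 percent."""
    return z * math.sqrt(p * (1 - p) / n)

# A 44 percent result from a (hypothetical) sample of 1,000:
p, n = 0.44, 1000
moe95 = margin_of_error(p, n)           # about 0.031, the familiar +/- 3 points
moe99 = margin_of_error(p, n, z=2.576)  # wider: more confidence costs precision
print(f"{p - moe95:.0%} to {p + moe95:.0%} at 95 percent confidence")

# Mean vs. median with one outlier salary:
salaries = [48_000, 52_000, 55_000, 57_000, 250_000]
print(mean(salaries))    # 92400 -- pulled upward by the one big salary
print(median(salaries))  # 55000 -- the more honest "typical" figure
```

Note that quadrupling the sample size only halves the margin of error, which is why big gains in precision get expensive fast.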

Friday, January 10, 2014

Cheaters

You know those web-based knowledge surveys, such as those done by Pew?

People cheat.

I never gave this much thought until I read a paper examining how often people "Google" the correct answers on such tests before filling in their responses. The paper itself is behind a paywall, but I can see it from my university IP address. Keep in mind this is self-reported cheating, which is important. Still, some interesting results pop up, and there is a strong likelihood that cheating leads us to underestimate the positive effect of education on what people know, at least as measured by such popular surveys. One way to figure out if people cheated? How long they took to answer the questions. Such data are available, but probably rarely analyzed.
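The paper's exact method isn't spelled out here, but the response-time idea is simple enough to sketch. Here's a hypothetical flagging rule in Python: treat any answer that takes several times longer than the pool's median time as a possible search-engine lookup. The times and the cutoff factor are made up:

```python
from statistics import median

def flag_possible_lookups(times_seconds, factor=3.0):
    """Return the indexes of respondents whose answer time is far above
    the median -- a rough proxy for stopping to search for the answer.
    The factor is an arbitrary cutoff, not from the paper."""
    cutoff = factor * median(times_seconds)
    return [i for i, t in enumerate(times_seconds) if t > cutoff]

times = [8, 11, 9, 62, 10, 95, 12]   # made-up per-question answer times
print(flag_possible_lookups(times))  # -> [3, 5]
```

In practice you'd want something less crude -- per-question baselines, say -- but even this illustrates why timing data is worth keeping around.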

Monday, January 6, 2014

First Person in Academic Writing

Here's a somewhat nerdy topic -- is it appropriate to write an academic article in the first person?

I just received a "revise and resubmit" from a top journal with an acceptance rate in the single digits -- always a good thing -- but one point the editor made was that I'd written in first person, and the journal's preference is otherwise. That's an easy fix and, to be honest, as a journalism guy I've never been comfortable writing in the first person, even on a blog. But I'd noticed a lot of the journals I read, mostly in political science, seem to be edging toward first person and active voice, so I thought I'd give it a try.

I know certain kinds of research, especially critical/cultural, are often written in first person, but the stuff I do, the number-crunching quantitative stuff known fondly as the dominant paradigm, tends to be written in the third person and passive voice. Bad examples below.

Passive Voice (traditional)

Respondents were asked to identify the top issue facing them personally and the top issue facing the country.

First Person

I asked respondents to identify the top issue facing them personally and the top issue facing the country.

Yes, the second is tighter, but the first is traditional. These aren't from my study, by the way; I made them up on the fly, and they're not particularly good examples, but you get the idea.

As I said, I'd noticed more and more journals allowing first person. Maybe it works better with "we" and multiple authors, which gives some separation, than with the dreaded "I" that, frankly, I loathe. So I have no issue with an editor asking that I shift away from the first person. That's the easiest item on the list of issues to address with the study. Still, it raises the question, a nerdy one I admit, of whether we should allow more first person in our academic writing.