Thursday, July 30, 2015

Best College Towns

We love lists. We loved them long before Buzzfeed ruined them. Today let's look at one of my favorite sets of lists: rankings of the best college towns. These lists use wildly varying methodologies -- assuming there's any methodology at all -- hence the wildly varying results.

I grabbed four lists:

Rank  BestCollegeReviews   livability.com     Niche                  USAToday
1     Boulder, CO          Ames, IA           Santa Barbara, CA      Ithaca, NY
2     Ann Arbor, MI        Logan, UT          State College, PA      State College, PA
3     Madison, WI          Oxford, OH         San Luis Obispo, CA    Iowa City, IA
4     Ithaca, NY           Fayetteville, AR   Durango, CO            Ames, IA
5     Ames, IA             Tempe, AZ          Chapel Hill, NC        Champaign-Urbana, IL

As you can see, not a lot of agreement. Ithaca, NY, does well, and the lists love Iowa. Athens, Ga., where I teach, doesn't score well on these lists (39th on one, 41st on another, unmentioned on the rest). Oddly, other lists often put it at #1 or #2 among college towns in America, but as you can see, not everyone agrees with that assessment. In general, southern schools don't do well, the exceptions being Chapel Hill and, weirdly, Fayetteville, Arkansas. Really?
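
If you want to put a number on that lack of agreement, here's a quick sketch (the list contents are pulled straight from the table above) that counts how many towns each pair of top-5 lists shares:

```python
from itertools import combinations

# Top-5 lists from the table above
lists = {
    "BestCollegeReviews": {"Boulder, CO", "Ann Arbor, MI", "Madison, WI",
                           "Ithaca, NY", "Ames, IA"},
    "livability.com":     {"Ames, IA", "Logan, UT", "Oxford, OH",
                           "Fayetteville, AR", "Tempe, AZ"},
    "Niche":              {"Santa Barbara, CA", "State College, PA",
                           "San Luis Obispo, CA", "Durango, CO", "Chapel Hill, NC"},
    "USAToday":           {"Ithaca, NY", "State College, PA", "Iowa City, IA",
                           "Ames, IA", "Champaign-Urbana, IL"},
}

# Count the overlap for every pair of lists
for a, b in combinations(lists, 2):
    shared = lists[a] & lists[b]
    print(f"{a} vs. {b}: {len(shared)} shared ({', '.join(sorted(shared)) or 'none'})")
```

No pair shares more than two towns out of five (Ames and Ithaca do the heavy lifting), which is about as close to "no agreement" as you can get.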

Wednesday, July 29, 2015

Daring to Research the Obvious

In grad school, we often joked after reading a pile of studies for class:

"In mass comm,
we dare research the obvious."

So I'm looking at the latest issue of Newspaper Research Journal (yes, that's an actual academic journal and, yes, I'm even on the editorial board) and I see a couple of studies that support our grad school hypothesis. Here are the titles:

Younger Journalists More Likely to Use Social Media
and
Traditional Reporting More Credible Than Citizen News

I'm sure these are fine pieces of research, and I agree it's important to establish these kinds of findings before moving on to more interesting extensions of the work. That said, younger journalists more likely to use social media? Ya think?

There is some interesting stuff in this issue. One study finds editors most likely to use social media just to post story links (fail), and another looks at preferences for videos versus slideshows. So don't think I'm knocking NRJ, especially as my name is one of 6 billion listed on the Editorial Board.

Tuesday, July 28, 2015

Engaging News and Knowledge

There's a story out today about the Engaging News Project and its study of "traditional" versus "contemporary" news presentations. Here's the lede:
There is a significant increase in page views when people browse a news website with a contemporary design compared to a website with a classic design, according to a new report from the Engaging News Project. People also learn more from the articles when they view a contemporary site.
Which is quite interesting if you're into news online. The part I was curious about, at least for this blog, was the "people also learn more" from contemporary sites. We're talking knowledge. We're talking what people know. So I went to the full report to understand how they measured this and whether it stacks up as rigorous research.

First off, how do the designs differ? I'm not a design guy, but here's a screen grab from the report:

[Screen grab from the report: the classic vs. contemporary homepage designs]

To repeat, I'm setting aside the findings on which design people liked more, spent more time with, and so on, and sticking to the differences in learning. Read the report yourself if you're into design. Below is a key graf on knowledge:
The Engaging News Project team also found that study participants’ retention of details from the articles, though low across the board, nonetheless increased by at least 50 percent when participants viewed the contemporary homepage compared to the classic one.
Wow, a 50 percent increase. That's a lot, right? Check out the graphic from the report:

[Graphic from the report: average number of article details recalled, classic vs. contemporary, across the three studies]

So the differences are statistically significant in two of the three studies, though you can also argue they're not substantively different. 0.4 versus 0.6 details recalled, that ain't much of a difference. Why is recall so slight? It's free recall. Subjects were asked to write in a box (all online, mind you) what they could remember from the stories. Recall, versus, say, recognition (think multiple-choice questions), taps a very different type of knowledge -- something I've published research on, thank you very much. Recall favors certain kinds of people. I would have included some closed-ended questions. In my own study of a real, actual, random national sample, I found Internet news favored recall knowledge.
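
That "50 percent" framing is worth a second look. Here's a quick sketch of the arithmetic; the 0.4 and 0.6 are my eyeballed readings of the report's graphic, so treat them as illustrative:

```python
# Relative vs. absolute change in average details recalled
# (0.4 and 0.6 are eyeballed from the report's graphic -- illustrative only)
classic = 0.4        # avg. details recalled, classic design
contemporary = 0.6   # avg. details recalled, contemporary design

relative = (contemporary - classic) / classic  # the headline number
absolute = contemporary - classic              # the substantive number

print(f"Relative increase: {relative:.0%}")          # 50%
print(f"Absolute increase: {absolute:.1f} details")  # 0.2 of one detail
```

Same data, two framings: a 50 percent relative jump is also one-fifth of one extra detail recalled per story.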

Interesting sidenote: they also measured generic political knowledge but found it had no relationship with recall in two of the three studies in which it was used. That's odd, because prior knowledge is usually a significant predictor of gaining new knowledge. Curious.

Endnote #38 has some details on how they constructed the measure, which helps explain the low "average details" recalled.

Okay, this is all well and good, but let's assume for the moment that a "contemporary" design leads to better recall than a "classic" design. Why? In part it's how they measured recall -- free recall -- and in part it's the kinds of folks who participated in the study (not a random sample, and it's unclear from my reading whether people were randomly assigned to conditions; I assume so, I just didn't see it).

All that aside, why would one design work better than the other? I have no friggin idea. Then again, we can give subjects the exact same story on paper and on a screen and they'll remember more from the paper version. Same story, different medium. So another factor is how you approach a medium, though I can't see how that plays a role here in design. My guess is the classic design turns people off compared to the contemporary design, and that (mildly) influences recall. All that text on the splash page is too much, perhaps. I don't really know without giving it more thought.

Tuesday, July 21, 2015

Citizen Misinformation

Good WaPo post here about citizen misinformation. My hit-and-run post for the day/week/month.

Wednesday, July 15, 2015

Trump and Hispanics

So an ABC News/Washington Post poll came out today that received wide attention. Here's the lede:
Positive views of Donald Trump have doubled since his controversial comments on immigration, although many more Americans still dislike rather than like him, now by a nearly 2-to-1 margin -- and his negatives have soared among Hispanics, a sought-after group in national politics.

Let's think about this for a moment. It's a poll of 1,011 adults with a margin of error of 3.5 percent. Lemme be clear -- that margin of error does not attach itself to Hispanics or any other subgroup. No, it's for the entire sample of adults. Not Hispanics. Not Republicans. Adults. You have to do the math again for any subgroup, based on how many of its members you surveyed.

So how many Hispanics are in the survey? Hell if I know. The report doesn't tell us. Let's assume 100 Hispanics, and that's a big assumption. The margin of error for that subset is about 10 percentage points. In other words, Trump's 81 percent negative could be as low as 71 or as high as 91, but we don't really know. Subsamples are tricky, especially Hispanic subsamples. To be safe, I'd put the margin of error at more like 20 points, which makes the subsample all but useless for analysis purposes. Instead, you should oversample Hispanics (or any other group of interest) if you really want to look at the data.
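
For the curious, here's the back-of-the-envelope arithmetic, a minimal sketch assuming simple random sampling and the standard worst-case proportion of .5 (real polls add design effects from weighting, which is partly why ABC/WaPo report 3.5 rather than the textbook 3.1 for 1,011 adults):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion under simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"Full sample, n = 1011: +/- {margin_of_error(1011):.1%}")               # ~3.1%
print(f"Hypothetical Hispanic subsample, n = 100: +/- {margin_of_error(100):.1%}")  # ~9.8%
```

The n of 100 is my assumption, not a number from the report; the point is just that the headline 3.5 percent tells you nothing about the precision of any subgroup estimate.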