Sunday, July 31, 2011

Wait, Wait, Lemme Tell You ...

On the road but thought I'd pass on this link from the New York Times about the show Wait Wait ... Don't Tell Me from NPR, which is of course a current events quiz show.  Otherwise, I don't expect to be posting much this week.

Thursday, July 28, 2011

Female Journalists and HPV Coverage

We all know journalists help set the public's agenda.  What reporters cover, people find more important.  And when people see something as more important they act in very different ways -- from how they attend to the news to how they behave or support political action.

But does the gender of journalists make a difference?

Yes, at least in a recent Journalism and Mass Communication Quarterly study that found "the presence of females as reporters and news executives in the newsroom affected news content related to women, such as coverage of the HPV vaccine."  Examining data at the organizational and individual level, the authors found the more "gender-balanced" the newspaper, the more prominent the coverage of the HPV vaccine. Male-dominated newspapers relied more on official sources and less on people (teens, parents, teachers) as compared to more gender-diverse newspapers.

There are limitations to the study.  There always are.  But it's interesting that the gender makeup of a newsroom could so clearly influence the quantity of stories and how they were reported.

What we don't know, at least from this study, is whether the differences in coverage make a difference in the readers of these newspapers.  It would be interesting to know whether, in different communities, there were fewer or more teenage girls getting vaccinated depending on such differences in coverage.  When you extend data that far, things tend to unravel, but I'm guessing you might be able to tease out some significant behavioral effects.

Full cite:  Teresa Correa and Dustin Harp (2011).  Women matter in newsrooms: How power and critical mass relate to coverage of the HPV vaccine.  Journalism and Mass Communication Quarterly, 88 (2), 301-319.

Trick Questions

I caught a bit of video this morning of Newt Gingrich holding up a Newt 2012 t-shirt.  He was set up, and nicely.    The ABC reporter got him to hold up the shirt, which he gladly did for the camera, probably figuring it for a harmless puff pic.  Gingrich is all about American jobs.  Turns out, though, the shirt was made not in the U.S. -- but in El Salvador.

Newt, you've been punk'd.
"I'll have to ask the folks who ordered this," Gingrich responded. "I don't order it and I don't do it." Campaign spokesperson Michelle Selesky said "That was a rush order made by some of the volunteers." Selesky noted the print work on the shirts was done in Atlanta. 
Okay, so Atlanta is kinda like America.  So that's something.

This is nothing new.  The traditional journalistic punking involves milk.

You may not know the routine, but reporters used to follow presidential candidates around as they tried to portray themselves as caring about the common man.  And then someone would ask: "So, what's a gallon of milk cost?"

They struggle for an answer.  They've been punk'd.

There's more here than just catching a high-rolling politico looking like a fool.  They can do that quite well by themselves (remember Michael Dukakis doing the bobble-head doll routine in a tank, or John Kerry in hunting gear?).  And then there was President George H. W. Bush being "amazed" by a grocery store scanner, something Americans see every day.  Turns out that story is false, but it fit the narrative of a well-heeled Bush who didn't really understand Americans in a time of economic downturn.

My point?

In what people know, perception is powerful.  We tend to see what we want to see and believe what we want to believe, but a powerful narrative that paints a candidate in a certain way -- as hypocritical, as in the Gingrich t-shirt, or as historically challenged, as in Michele Bachmann or Sarah Palin -- has a lasting impact on the public mind.  And such perceptions are hard to reverse.  Chevy Chase's biting imitation of Gerald Ford on Saturday Night Live may not have lost Ford the 1976 presidential race, but it certainly didn't help.

Research shows we tend to vote for presidential candidates based on two major themes: competence and character.  Other factors in how a race is framed, such as war or the economy, also play a huge role.  But being perceived as a hypocrite or a fool can also influence how people think about a candidate, making every subsequent slip-up, even the most minor, seem huge by comparison.

Wednesday, July 27, 2011

Social Media as Public Opinion

I wrote Monday and Tuesday about the promise and problems of using Twitter as a measure of public opinion.  See those posts for a semi-detailed discussion of why it works and why it doesn't work, especially when it comes to the dangers for journalists tempted to use social media as a gauge of what people think.

But the politicians, they're already there.

According to a new study (full pdf here), nearly 64 percent of congressional offices consider Facebook and 42 percent consider Twitter as "somewhat important" or "very important" measures of public opinion. That's the soundbite, but like all soundbites it's misleading.  Let's dig a bit deeper.
  • While 64 percent think Facebook is "somewhat" or "very" important, only 8 percent call it "very important" as a tool for "understanding constituents' views and opinions."  Only 8 percent.  The "somewhat" is a catch-all category, one people fall into almost by default.  Also, I cannot easily find what the other response alternatives were, though I suspect they were "somewhat unimportant" and "not at all important."  That would be standard.
  • And while 42 percent think Twitter is "somewhat" or "very" important, only 4 percent think it is "very important" as a gauge.
  • By comparison, 13 percent of respondents thought "paper surveys/polls" were very important and another 55 percent thought traditional polls were "somewhat important."
  • Even so questionable a measure of public opinion as online polls received higher marks than social media.  Seven percent said online polls (slops) were very important, and 47 percent said they were somewhat important.  
The takeaway?   As my earlier posts discuss in more detail, Twitter has a lot of promise as a measure of public opinion, especially as we classically define it.  But as a comparison to traditional, scientific polling and modern definitions of public opinion, Twitter and Facebook and other social media fall far short.  At the moment, social media are just another tool, not unlike phone calls and letters to congressional offices, a way to take the pulse of a highly selective public.

So congressional staffers, and journalists, need to take care in how they interpret such data, even as we become more sophisticated in our methods of analyzing thousands and even millions of tweets. 

Tuesday, July 26, 2011

Twitter II -- Is it Public Opinion?

I wrote yesterday on whether Twitter qualifies as public opinion.  I opened with an attack on the notion, one based on the methodological challenges of using the micro-blog as a measure of opinion.  I then ended by suggesting that the earlier me was wrong, that Twitter may better resemble our classic understanding of the concept of public opinion.

Let me take a stab at following up on my defense of Twitter.  At the end, I'll try to extend this from a philosophical concern to a more practical concern for journalists.

Before the existence of sophisticated polling, early thinkers had a very different understanding of public opinion.  Its roots can be found in the coffee houses of England and the salons of France.  In an 1820 letter, Sir Robert Peel complained about "that great compound of folly, weakness, prejudice, wrong feeling, right feeling, obstinacy and newspaper paragraphs, which is called public opinion."

Sounds like Twitter to me.

By this, I mean that early concepts of public opinion were based on the idea of communication.  While I'm a poll guy and use survey data extensively, public opinion is more than merely what public opinion polls measure.   Public opinion is an 18th Century invention, and at its heart is public discussion.  It is, to borrow a phrase, an "organic sociological process" that takes part in the "public sphere." 

As John Durham Peters writes in an excellent chapter that sums up much of this early thought, "in reading the newspapers, the public reads about itself, and thus finds ways to come into existence."

Take out newspapers, put in Twitter.  It's a nice fit.  After all, on Twitter people can talk back and forth, at "influentials" and others who fill the twittersphere.  From this, theoretically at least, public opinion is changing, ephemeral, difficult to grasp.  And hard to measure.  Polls are much easier.  And thus public opinion becomes not what it was intended to be, but what became easier to measure.

Polls matter.  Surveys are useful.  They are uncannily accurate in predicting elections, excellent as snapshots of what people think about a question asked of them at a specific time, and their generalizability makes them enormously useful.  I love 'em.  Can't live without 'em.  But as measures of the classical sense of public opinion, they fall desperately short.

Can Twitter be any better?

Snapshot counting of positive versus negative tweets falls into some of the same traps that polls do.  All the million-tweet analyses cranked out by computer scientists do little to address what we really mean by public opinion.  It's a difficult concept.  As V. O. Key wrote: "to speak with precision about public opinion is a task not unlike coming to grips with the Holy Ghost."
To work, this will require significantly more sophisticated analysis techniques to tease out not the snapshot, but the tendrils of communication and change that occur as a topic or topics bounce across the twittersphere. 

It's a methodological nightmare.  But it's probably a better sense of what we truly mean by public opinion.

So, what's this mean for those in journalism?

Very soon we (the royal journalistic we) will face folks trying to sell us Twitter counting as an alternative to polls.  Surveys are expensive; counting tweets, less so.  I suspect by 2012 we'll see some of the major players -- the CNNs and New York Times of the world -- messing around with tweet counting, either done in-house or, more likely, through consultants.  As I discussed in detail in the previous day's post, doing so is full of problems.  Yes, cheaper.  And yes, above I argue it more closely resembles our classical understanding of what public opinion means.  But from a practical standpoint, we simply are not there yet.  My warning is this:  journalists, be very, very careful in using such data, if you use it at all, except in tandem with traditional polling data.  Do NOT rely merely on tweets.  We aren't there.  Yet.

Monday, July 25, 2011

Can Twitter Measure Public Opinion?

There's been a lot of interest lately in whether Twitter can be used successfully as a measure of public opinion.  Certainly it's cheaper than a real survey, assuming you have the expertise to gather and analyze thousands, or hundreds of thousands, and yes even millions of tweets, as some scholars have done.

This matters not only for scholars, but journalists as well.

So far the results on Twitter are mixed.  A German study found an analysis of tweets to be as good as polls in predicting a multi-party election.  A million-tweet analysis found Twitter to be both good and not so good, depending on what you were analyzing, in a comparison with traditional U.S. polling data.

There are enormous difficulties in arriving at a suitable, valid, and reliable automated measure of sentiment from tweets.  So often what we post is tongue-in-cheek, negative when we mean positive (or vice versa).
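To see why automated sentiment counting stumbles on sarcasm, consider a toy lexicon-based scorer.  This is a hypothetical sketch of the kind of positive-versus-negative word counting described above, not any actual research tool; the word lists and tweets are my own invented examples.

```python
# Toy lexicon-based sentiment scorer: count positive vs. negative words.
POSITIVE = {"great", "love", "win", "good"}
NEGATIVE = {"bad", "lose", "hate", "awful"}

def score(tweet):
    """Return +1 (positive), -1 (negative), or 0 (neutral) by word counting."""
    words = tweet.lower().replace(",", " ").replace(".", " ").split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos > neg) - (neg > pos)

# A sincere tweet scores as expected...
print(score("Love the new jobs plan, great news"))                 # 1
# ...but a sarcastic one fools the counter: it also reads as positive.
print(score("Oh great, another debt ceiling deadline. Love it."))  # 1
```

Both tweets score +1, even though any human reader hears the second one as a complaint.  That's the measurement-error problem in miniature: the words say positive while the writer means negative.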

In other words, a million tweets may seem to have enormous predictive power.  But size isn't everything.

Setting aside the problems with measurement error in any program that analyzes tweets as either positive or negative, let's not forget that while a sample of 100,000 or even a million seems impressive compared to traditional surveys of 1,000 respondents, we've seen big mistakes arise when we confuse the size of our sample with the quality of our sample.  In other words, size doesn't always matter.

And for this, I remind you of the infamous 1936 Literary Digest poll. 

You can read about it here, or here or a host of other places.  The magazine was the Time of its period.  The short version: this survey of over a million people predicted Alf Landon, a Republican, would win the 1936 presidential election.  None of us remember studying the Landon administration in U.S. history class because, as you may have guessed, there never was a President Landon.  Roosevelt won.

The magazine, using a method that had worked for it in the past, sent out over 10 million ballots, got over a million back, and called the election so very wrong.  Why?  Because a big sample is not the same as a good sample.  This was 1936.  It was the Depression.  The magazine relied on its own subscription rolls, on lists of those who owned cars or had telephones, and a host of other sources that all skewed, in those terrible economic times, toward folks who could afford such things -- people who tended to vote Republican.  Thus, the mag predicted a Landon victory. 
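The Literary Digest lesson is easy to demonstrate with a quick simulation.  The numbers below are my own invented assumptions (a 57/43 electorate, made-up mailing-list inclusion rates), not the Digest's actual figures; the point is only that a small random sample beats a huge skewed one.

```python
import random

random.seed(1936)

# Hypothetical electorate: 57 percent support candidate A.
electorate = [1] * 570_000 + [0] * 430_000  # 1 = votes for A

# A small simple random sample of 1,000 gets close to the truth.
srs = random.sample(electorate, 1000)
print(sum(srs) / len(srs))  # close to 0.57

# A huge but biased "sample": suppose A's supporters are far less likely
# to appear on the mailing lists (car owners, phone subscribers, etc.).
biased = [v for v in electorate if random.random() < (0.2 if v else 0.6)]
print(len(biased))                 # hundreds of thousands of "ballots"
print(sum(biased) / len(biased))   # well below 0.5: calls the wrong winner
```

The biased list returns several hundred thousand "ballots" and still picks the loser, while the 1,000-person random sample lands within a couple of points of the true split.  Size doesn't fix bias.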

Size, then, doesn't always matter.

Twitter is like this.  While I like Twitter and use it often, relatively few Americans make use of it, fewer still actively post to the micro-blog, and even if you can glean a million tweets on an election or public issue, the resulting sample is deeply skewed toward the geeky and those who like technology or just enjoy sharing with the world their daily wisdom.  That's a lot of sampling error.  And we're not even getting into the difficulty of having a computer program decide what's positive or negative in a 140-character-or-less posting.

There are a lot of interesting uses for Twitter -- how people respond in real-time to a television program or sporting event or even breaking news.  As a measure, by itself, of public opinion?  No.  Unless, of course, you learned nothing from the Literary Digest.  Journalists and scholars alike need to keep this in mind when using Twitter as the source of all knowledge, at least when it comes to evaluating what people think.

And now for my defense of Twitter -- as a measure of public opinion.

Yes, you read that right.  While this deserves a more in-depth analysis than I can do here, Twitter in many ways resembles what we classically think of as public opinion.  Since the 1940s or so, as polling technology grew more sophisticated, we've tended to define public opinion as that which public opinion polls measure.  Circular, to be sure, a definition driven by polling methodology and not by sound theoretical reasoning. 

The classical understanding of public opinion is more nuanced.  It includes aspects of communication, something missing in our modern snapshot definition.  In other words, people in the coffee houses of the 1700s discussing the issues of the day, a fluid understanding of opinion as it moved and changed due to not only what people thought but what they said, and how it moved and shifted.  Thus -- Twitter.  It fits this classical, versus the modern, understanding of public opinion.

As I said, my argument above deserves more time and all the usual academic citations we love to layer on our work, but the thesis is a simple one -- Twitter is an imperfect measure of public opinion, as we define it in modern times, but it may very well be the perfect measure of public opinion as we classically understand the concept: messy, fluid, and full of communication.

As journalists become more sophisticated in evaluating the Twitterverse for more than mere anecdotal evidence, they need to keep in mind the limitations of even a million tweets.

Friday, July 22, 2011

God's Approval Ratings

Only 52 percent in a recent poll approve of the job God is doing. I feel a heckuva lot better about myself today, for some strange reason.

I heard about the PPP poll via The Atlantic Wire.  If you download the actual report, you'll find that while 52 percent approve of God's job, only 9 percent disapprove and 40 percent -- well, those folks aren't sure.  What's fun is not only the results of questions immediately below this but the results later in my post which break down the crosstabs, such as who gives God the highest marks -- men versus women, by age, by political partisanship.  Read all the way through to get it.  First, some of the other questions:
  • If God exists, do you approve or disapprove of its handling of natural disasters?  50 percent approve, 13 percent disapprove, 37 percent unsure.
  • If God exists, do you approve or disapprove of its handling of animals?  56 percent approve, 11 percent disapprove, 33 percent unsure.
  • If God exists, do you approve or disapprove of its handling of creating the universe?  71 percent approve, 5 percent disapprove, 24 percent unsure.
Yeah, the use of "its" strikes me as either politically correct or gender safe.  On the last one above, I'm not exactly sure how you can disapprove or be unsure about the whole creation thing, otherwise we wouldn't be here.  I suppose you might quibble about certain aspects of creation, such as the color of the sky or the existence of light beer.

Further down in the report you can have a little fun with the crosstabs.  Looking only at God's approval ratings, we find:
  • Among the "very liberal," 54 percent give God good job ratings.  Among the "very conservative," 61 percent do.  Not a huge difference.
  • Women gave God higher marks than men, 55 percent to 48 percent.  Again, kinda close.
  • Republicans gave higher marks than Democrats, 55 percent to 50 percent.  Ditto on close.
  • Blacks gave higher approval (72 percent) than Hispanics (53) and whites (47).   That's a big difference, but the number of blacks in the survey may be so small as to raise methodological issues.  Still, I'm not surprised by the difference.
  • And finally, to bury a lede, the older you were, the lower God's approval ratings.  Among those 18-29, God got 67 percent approval.  The other age categories and approval ratings in parentheses were: 30-45 (61), 46-65 (50), older than 65 (40).  I suppose as you get older, you have a little more to bitch about when it comes to God and the job, um, "its" doing.

Wednesday, July 20, 2011

Estimating Political Expertise and Fearing the "Passionate Fool"

When we think someone knows what they're talking about, we quite logically pay more attention to what they say. 

Guess someone's expertise correctly and we benefit from good advice.  Guess wrong, we're screwed.  This goes for health information, this goes for car repairs, and according to a new Political Behavior article, this goes for guessing the political expertise of people we speak to most often about politics.

Turns out, we're only so-so at guessing the expertise of others.  According to the author:
This study presents a mixed picture of the public’s ability to identify credible information sources among those with whom they discuss politics. The good news is individuals are able to recognize expertise, but people do make mistakes and systematically overestimate the knowledge of some types of individuals and underestimate it in others.
When someone is passionate about politics we tend to overestimate their actual expertise.  That's an understandable bias on our part.  If someone cares deeply, they must be knowledgeable, right?  Not necessarily. 

We can also be overwhelmed by someone's emotion.  The downside, writes study author John Barry Ryan, is "individuals do run the risk of believing those who are constantly talking, but without any real understanding of the topics about which they speak. It is the passionate fool who may disrupt effective political discussion."

So when it comes to who to trust about politics and public affairs, the message here seems to be to fear the passionate fool.

Full Study Cite:  Ryan, J. B.  (2011).  Accuracy and bias in perceptions of political knowledge.  Political Behavior, 33, 335-356.

Tuesday, July 19, 2011

U.S. Students and Geography

A Chicago Trib story today reveals U.S. students have a "tenuous command" of basic geography, "including knowledge of the natural environment, how it shapes society and other cultures and countries."
Fewer than a quarter of high school seniors scored proficiently on the geography test, down from 25 percent in 2001 and 29 percent in 1994, when the national geography exam first was administered. The decline seen in the twelfth-grade scores was the most dramatic of any grade tested.


The funny thing is that my daughter, an Honors student and scary smart, sucks at geography.  Even she, though, would pass the exam above with flying colors.

Earlier this year we had civics and history scores and across all of them, high school seniors did the worst.  Perhaps it's because a lot of this isn't part of the grand No Child Left Untested.  Regardless of the cause, it's bad news.

Monday, July 18, 2011

Want a Job Researching Political Knowledge?

There's an opening at Princeton for a post-doc to work with Markus Prior, a political scientist who wrote an excellent book a few years ago entitled Post-Broadcast Democracy that sparked quite a few studies, including one of my own.  Read it (I blogged about it here, with some lousy formatting for some reason; I think I was sitting at a Mac).  The research program will be on "political motivations and the abilities of ordinary citizens."  It continues:
The individual will work on a variety of projects which may include experimental research on political knowledge, analysis of panel survey data, and development of an original survey. 
It's only for a year but has the potential for renewal, and it's a helluva opportunity for those out there wanting to get into political knowledge research and work with a really good scholar.  You must have a PhD in hand.  And live in New Jersey.

What Fish Know

It's biology day here at whatpeopleknow, so I'm shifting from people to fish.  And not just any fish, but Poecilia mexicana, which sounds tasty and is mentioned in this study with a cool title:

Male fish use prior knowledge about rivals to adjust their mate choice

So let's applaud this fish for its use of knowledge, and for giving me something to write about today that involves knowledge. 

Friday, July 15, 2011

The Internet Affects Memory?

This NYTimes story reports on research that finds people, when they expect a computer to save information, are less likely to remember it.  I wrote about this in 2009 and also here about Google making us stupid, but not in detail.

As the story says:
The subjects were significantly more likely to remember information if they thought they would not be able to find it later. “Participants did not make the effort to remember when they thought they could later look up the trivia statement they had read,” the authors write. 
This probably falls in the Google is Killing Our Brain category of studies.  But this and other results from the study (read the article, it's quite short) suggest to me not a cause-and-effect from computers and the Internet, but rather the power of motivation in memory.  I'm not motivated to remember if I think a computer -- or my wife -- is gonna do it for me.  Less motivation = less deep processing = less remembering of stuff.  And that can have real consequences.  More on that in a moment.

Here's where it gets kinda interesting:
The experiment explores an aspect of what is known as transactive memory — the notion that we rely on our family, friends and co-workers as well as reference material to store information for us.  
Which is basically what we've been suggesting all along, and makes personal referrals and the wisdom of the crowd -- via Facebook or Twitter or Google+ -- an important change in the way we use memory and make decisions.  There is strong research that suggests the referrals of friends carry more weight than other, even more authoritative, sources.

The downside?

Like a muscle, memory needs to be exercised.  Sure, with mobile media we can always look something up.  Google is but a peck of a smartphone away.  But there's a hell of a lot to be said for having a base of knowledge to draw on.  That base influences how we process new information, how we pick up on important changes, how we spot trends or subtle differences, indeed how we make sense of our world.  The ability to look something up is neat and cool and convenient, but a heavy reliance on that ability may have dramatic negative effects as well, for social knowledge as well as political knowledge.

UPDATE: The Atlantic has a nice piece on this, just available. Strongly recommended.

Thursday, July 14, 2011

A Summer of Case Studies

Two major stories -- the Casey Anthony trial/verdict and the phone hacking scandal out of the U.K. -- make this seem a target rich summer when it comes to ethical case studies in journalism.

But are they of any real use for those of us who teach journalism in the U.S.?

I don't think so.

Let's take the phone hacking story first.  It's more of a Brit tabloid thing, this hacking into mobile phone mailboxes.  Yes, we're nearly as celebrity crazed in the U.S. as they are in Great Britain, but the phone hack is harder to pull off here, and despite TMZ and the National Enquirer we don't quite have the tabloid environment found across the Atlantic.  Maybe some of News Corp's U.S. properties, such as Fox News or the New York Post, will get caught up in the storm, but I doubt it.  Short of that, the phone hacking story is an interesting one when teaching a basic or advanced news reporting class, but one hard to connect to the day-to-day activities of most working journalists. 

In fairness, there are aspects to invasion of privacy that may make for good material.  And I suspect I could take the phone hacking and extend it to social media in some way, looking for parallels for the students to grab at and understand, but even that may be a reach.

Extreme cases are fun to discuss in class, but in the end their utility is meager.

Speaking of extreme, we also have the Casey Anthony trial, the verdict, and this Sunday her release from jail.  Now this is the kind of case we can use when discussing how not to go overboard with a story (hear that, Nancy Grace?).  About the only real lesson here, for basic or even advanced students, is to avoid taking a side in a major trial and using sources on air or in a story that perpetuate your point of view.  It also raises questions about feeding public anger, which is what cable TV now seems to be all about.

This one has more utility.  Only a little, but more.

The case allows me, the prof, to talk about covering controversial trials and how to handle sensitive information, but most of all the Anthony story is a case study in how a media frenzy can begin, how it can be fueled by TV talking heads, and how hard it is to keep the news proportional while also being comprehensive.  The power of social media, particularly Twitter, fits well here.

All in all, though, the trial/verdict is not a terribly useful case study for reporting classes.  Most reasonable people know the coverage, especially on cable television and most especially on Headline News, went off into journalistic Neverland.  At best, other than some social media aspects, a brief mention in class as a cautionary tale is about all the Anthony case deserves.

Unless of course my students want to be the next Nancy Grace.  Then I'll ask them to leave the room.

Wednesday, July 13, 2011

Casey Anthony and the News

Just when you thought the nightmare might be over, the fine folks at Pew put out a really useful analysis of news coverage and interest in the trial of She Who Must Not Be Named (except in the title above, thus attracting web traffic and building my brand and all that other social networking crud).  Check the Pew study yourself rather than have me shamelessly lift their content here, Huffpost-like.  But there are a couple of key points I'd like to make note of and then comment on.
  • Nearly half of those surveyed said news organizations had been fair in the coverage of She Who Must Not Be Named.  Only 20 percent thought coverage had been unfair.  Thirty-one percent of respondents (apparently dead) had no opinion.  I'd love to see a breakdown of this question by the news network or social media consumed.
  • Lots of folks heard/read about you-know-who via social media -- 40 percent said "a lot" and, frankly, that's an awful lot.  I admit it, I heard about the verdict via Twitter and immediately called my wife.
  • Actual coverage of the you-know-who trial was high, so high it tied coverage of the national debt.  It may have felt like it was all her trial, all the time, but both stories tied at 17 percent of coverage.  I'm betting the numbers on HLN were, ahem, somewhat different.
  • But actual coverage is different than actual interest among real folks.  In interest, She Who Must Not Be Named dominated -- far dominated, at 37 percent compared to 17 percent for the economy.  Nothing like a big trial to take our collective minds off a lousy economic situation.

Tuesday, July 12, 2011

Running for Office? You Better Look the Part

Got plans on being the next big thing in politics?  You gotta look the part.

An article in the latest Time, elaborating on research published in the American Journal of Political Science, discusses how people infer vitally important personality traits about a candidate from the face alone.  What traits?  The ones that tend to matter in elections, such as competence, honesty, trustworthiness, intelligence, etc.

And yes, there is a knowledge angle.  Here's a graph from the Time piece:
They combined data about voters in the 2006 elections—including their vote choice, political knowledge and TV exposure—with data about the candidates' faces, specifically ratings people gave about how "competent" the candidates were based on looks alone. All told, they analyzed 35 gubernatorial races and 29 Senate races, and they found that "low-knowledge individuals" who watched above-average amounts of TV were about six times more likely to vote for the more competent-looking person than those who watched little TV. They were also much more susceptible than those who had "high-knowledge" of politics. (The Onion headline for this rather unsurprising find would likely read "Ignorant Couch Potatoes Less Likely To Make Thoughtful Decisions.")
Those of you steeped in persuasion or processing theories such as the Elaboration Likelihood Model will of course not be surprised by the findings.  "Low-knowledge individuals" tend to be those who are less motivated and less able to deal with information, so they're more likely to use shortcuts to make sense of politics. In social science we call these heuristics, or cues.  Basically, they make life easier for those who either don't care or aren't able to deeply consider a situation or issue (or here, a candidate).
As the authors say in their abstract: "we find that appealing-looking politicians benefit disproportionately from television exposure, primarily among less knowledgeable individuals."
All I can say is it's a good thing I didn't go into politics.  Got a face for radio, and a voice for newspapers.

Monday, July 11, 2011

Numbers in a Story Increase Credibility

Writing a news story with numbers is a balancing act.  I do a whole lecture on writing with numbers, so important is the topic.  Do you go for precision?  Or do you instead rely on broader terms, like "most" or "half" or some similar shorthand? 

For most pros the answer is somewhere in the middle.  Lead with the summary words for readability and then follow with the precise numbers.  An experiment reported in the latest Newspaper Research Journal looks at words versus numbers and what it means for perceived credibility of the news stories.  The result?  Hardly surprising.  Using numbers instead of broad terms does indeed increase the perceived credibility of the various articles.

For most pros this is not an either-or choice.  You might lede by saying the majority of Americans think this way, or four out of five Americans think that way, and then come back, fairly early on, with the specific numbers.  But in these Twitter times, when news can be condensed to 140 characters, numbers can improve credibility.  And probably cost fewer characters.

Study cited:  Koetsenruijter, A. W. M. (2011).  Using numbers in news increases story credibility.  Newspaper Research Journal, 32, 74-82.

Thursday, July 7, 2011

Casey Anthony -- What (kinda) People Know

Merely mentioning Casey Anthony's name will earn me additional hits, but I wanted to briefly discuss not the verdict itself, or even the media saturation coverage of the trial, but rather the remarkable interest the public has shown in this case and its response to the final verdict.  Much of that interest was fed by Nancy Grace and Headline News, and with the public response came more and more coverage.

But what's the public response to the verdict?  We don't have traditional poll data to rely on (yet), but on Twitter the results seem all one way.  Here's an excellent analysis of Twitter traffic that found 64 percent of Twitter users disagreed with the verdict, 35 percent were undecided, and only 1 percent agreed with the not-guilty verdict.  Doubt Twitter as a measure of public sentiment?  You shouldn't.  It's been found in some academic research to be a reasonably good barometer of public opinion.  Not as good as a real poll, but good.

While we don't have any real poll data yet, at least that I can find, there is this stupid Huffington Post "informal poll" that found only 50 percent thought Anthony was guilty.  I'm rather surprised by such a SLOP (self-selected opinion poll, also known in the survey business as complete bullshit).  Usually the angry folks dominate such faux polls, but apparently not in this case.  By the way, 28 percent had her guilty of a lesser charge and 17 percent thought she was not guilty.  Some might argue hey, that's over 13,000 votes.  It's gotta be more accurate than a scientific poll with a carefully drawn sample of 1,000 people.  And you'd be wrong.  Size doesn't matter, at least when it comes to surveys.  If you doubt this, just look up the infamous Literary Digest poll debacle of 1936.  Over a million folks surveyed.  Results -- way off.  Unless, that is, Alf Landon really was elected president in 1936.  And I'm pretty sure he wasn't.
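The Literary Digest lesson, that a representative sample beats a big one, is easy to demonstrate with a toy simulation. All the numbers below are invented for illustration; the "biased frame" plays the role of the Digest's phone-and-car-owner mailing lists:

```python
import random

random.seed(1936)

# Suppose 55% of the full electorate favors candidate A.
population = [1] * 55_000 + [0] * 45_000

def share_for_a(sample):
    """Share of a sample favoring candidate A."""
    return sum(sample) / len(sample)

# A modest but genuinely random sample lands near the truth.
random_sample = random.sample(population, 1_000)

# A huge sample drawn from an unrepresentative frame misses badly:
# among this group, only 40% favor A.
biased_frame = [1] * 40_000 + [0] * 60_000
biased_sample = random.sample(biased_frame, 50_000)

print(share_for_a(random_sample))  # near 0.55 despite n = 1,000
print(share_for_a(biased_sample))  # near 0.40 despite n = 50,000
```

The big sample nails down the wrong number with great precision: more respondents shrink the random error but do nothing about the selection bias, which is exactly why 13,000 self-selected clicks tell you less than 1,000 randomly sampled respondents.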

Of course the various networks, including HLN, saw remarkable increases in their TV or online traffic during the trial and with the announcement of the verdict.  Much of the response can best be described as anger (Grace, especially, has added fuel to this fire).  Negative emotions are more powerful than positive ones.  That's why talk radio and the TV talking heads who sell partisan indignation and disgust do so well.  Taking a position, it sells.

AEJMC and Political Knowledge

A few papers in the upcoming AEJMC conference include some aspect of knowledge.  Below are some abstracts I've come across that address, in some way, this topic.  Unfortunately I won't be attending AEJMC this year.

Social Media Consumption, Interpersonal Relationship and Issue Awareness • Sungsoo Bang, University of Texas, Austin • This study examines the relationship between social media consumption and issue awareness using South Korea’s 2007 national survey dataset. This study finds that there is a significant and positive relationship between consuming social media, such as Internet community sites, and issue awareness. The findings indicate that frequency of using social media significantly and positively increases issue awareness such as public policy.  The finding also indicates using social media for sociability is positively related to issue awareness, which is essential for democracy in terms of political knowledge. Furthermore, the finding shows social media uses mediate the relationship between issue awareness and interpersonal relationship such as political discussion, which demonstrates consuming social media decreases the information gap caused by interpersonal relationship.

Exploring News Media Literacy: Developing New Measures of Literacy and Knowledge • Seth Ashley, University of Missouri; Adam Maksl, University of Missouri; Stephanie Craft, University of Missouri • Using a framework previously applied to other areas of media literacy, we developed an attitudinal scale focused specifically on news media literacy and compared that to a knowledge-based index including items about the structure of the U.S. news media system. Among our college student sample, the knowledge-based index was a significant predictor of knowledge about topics in the news, while the attitudinal scale was not. Implications for future work in assessing news literacy are discussed.

Understanding News Preferences in a “Post-Broadcast Democracy”: A Content-by-Style Typology for the Contemporary News Environment • Stephanie Edgerly, University of Wisconsin-Madison; Kjerstin Thorson, University of Southern California; Emily Vraga, University of Wisconsin-Madison; Dhavan Shah • This study develops a 2×2 news typology accounting for an individual’s orientation toward content (news vs. entertainment) and style (factual reports vs. pundit opinions). Findings from cross-sectional and panel data reveal that our typology predicts distinct patterns of news consumption during the 2008 election. Specifically, we predict selection of cable news outlets, soft news programs, and late-night talk shows. Our results also shed light on knowledge change during the 2008 election season.

Knowledge Gaps, Belief Gaps, and Public Opinion about Health Care Reform • Doug Hindman, Washington State University • Partisanship and political polarization have become the norm in national, and increasingly, local politics. The passage of the health care overhaul legislation, the Patient Protection and Affordable Care Act, signed into law in March 2010, was no exception to the trend towards greater levels of partisanship; the legislation passed without a single Republican vote. This study raises an additional issue thought to be associated with polarization and partisanship: the distribution among the public of beliefs regarding heavily covered political controversies. Specifically, this study tests hypotheses regarding the distribution of beliefs and knowledge about health care reform. Hypotheses are formulated that seek to extend the knowledge gap to account for the partisan environment.  The belief gap hypothesis suggests that in an era of political polarization, self-identification along ideological or political party dimensions would be the better predictor of knowledge and beliefs about politically contested issues than would one’s educational level.  Findings showed that gaps in beliefs and knowledge regarding health care reform between Republicans and Democrats grew, and traditional knowledge gaps, based on educational level, disappeared. Attention to cable TV news narrowed gaps in knowledge among party identifiers. Findings are discussed in terms of improving news coverage of partisan debates.

The Rise of Specialists, the Fall of Generalists • S. Mo Jang • The present study revisits the question as to whether U.S. citizens are information specialists or information generalists.  Although the literature has presented mixed views, the study provides evidence that the changing information environment facilitates the growth of specialists.  Using a national survey (n=1208), the study found that individuals seek issue-specific knowledge driven by their perceived issue importance rather than by general education, and that this trend was saliently observed among those who relied on the Internet.

Understanding the Internet’s Impact on International Knowledge and Engagement: News Attention, Social Media Use, and the 2010 Haitian Earthquake • Jason A. Martin, Indiana University School of Journalism • Relatively little is known about how Internet media use and other motivational factors are associated with outcomes such as knowledge of international news and involvement. Recent research suggests that attention and interaction with foreign affairs news is one path to closing the knowledge gap in this context. The acquisition of foreign affairs knowledge also has implications for individuals’ abilities to have a broader worldview, to hold accurate public opinions about foreign nations, to facilitate a greater sense of global belonging, and to get involved with international events.  This paper examines the relationship of media use, foreign affairs political knowledge, and international involvement. A nationally representative survey conducted shortly after the 2010 Haitian earthquake produced measures of demographics, news media use, social media use, international engagement, general political knowledge, and foreign affairs knowledge.  Statistical analysis found that news exposure, news attention and various types of social media use produced significant independent positive associations with international news knowledge and international involvement after demographic controls. Hierarchical regression also found that domestic political knowledge, cable TV exposure, Internet news exposure, and radio exposure were the most important predictors of international knowledge. Another regression found that news attention, e-mail use, social media use, and texting about the Haitian earthquake were the strongest predictors of international involvement.
These findings support related research that has found a positive association among Internet news use, international knowledge, and international engagement while also making new contributions regarding the importance of mediated interpersonal discussion for predicting international involvement.

Wikipedia vs. Encyclopedia Britannica: A Longitudinal Analysis to Identify the Impact of Social Media on the Standards of Knowledge • Marcus Messner, Virginia Commonwealth University; Marcia DiStaso, Pennsylvania State University • The collaboratively edited online encyclopedia Wikipedia is among the most popular Web sites in the world. Subsequently, it poses a great challenge to traditional encyclopedias, which for centuries have set the standards of society’s knowledge. It is, therefore, important to study the impact of social media on the standards of our knowledge. This longitudinal panel study analyzed the framing of content in entries of Fortune 500 companies in Wikipedia and Encyclopedia Britannica between 2006 and 2010. Content analyses of the length, tonality and topics of 3,985 sentences showed that Wikipedia entries are significantly longer, more positively and negatively framed, and focus more on corporate social responsibilities and legal and ethical issues than in Britannica, which is predominantly neutral. The findings stress that the knowledge-generation processes in society appear to be shifting because of social media. These changes significantly impact which information becomes available to society and how it is framed.

The Influence of Knowledge Gap on Personal and Attributed HIV/AIDS Stigma in Korea • Byoungkwan Lee; Hyun Jung Oh; Seyeon Keum; Younjae Lee, Hanyang University • This study tests a comprehensive model that explicates the influence of AIDS knowledge gap on personal and attributed stigma. Fear of contagion serves as a mediator between AIDS knowledge gap and AIDS stigma. An analysis of the survey data collected to evaluate the impact of 2008 AIDS campaign in Korea reveals that AIDS knowledge was significantly associated with personal stigma both directly and indirectly but only indirectly associated with attributed stigma through fear of contagion.

Perceived Threat, Immigration Policy Support, and Media Coverage: Hostile Media and Presumed Effects in North Carolina • Brendan Watson, University of North Carolina at Chapel Hill, School of Journalism & Mass Communication; Daniel Riffe, University of North Carolina, Chapel Hill • This study, using survey data (N=529), examined perceived "threat," subjective knowledge about immigration, support for punitive and assimilative policies, and opinions about media coverage effects. Perceived threat was related to support for punitive policies, and "hostile media perception" was confirmed.  However, perceived threat was not related to presumed influence of coverage. Internet use, age, race, and education predicted threat perception; perceived threat, perceived favorableness of coverage, and daily newspaper reading predicted presumed influence of coverage.

Evolutionary Psychology, Social Emotions and Social Networking Sites — An Integrative Model • Sandra Suran; Gary Pettey; Cheryl Bracken; Robert Whitbred • This exploratory research employed an Evolutionary Psychology (EP) perspective whereby the human mind is viewed through the lens of the physiological and psychological mechanisms that created the developmental programs we use today (Cosmides & Tooby, 1992). This theoretical framework was used to study the relationship between human behavior, the state of alienation, and Social Networking Sites  (SNS). Based on survey data from college students, there seemed to be a relationship between alienation and SNS. Alienation dimensions were highest among those who had the lowest amount of contacts on SNS.  The findings from this study will add to the body of knowledge on Computer Mediated Communication (CMC) as well as afford an opportunity for further research in understanding human behavior engaged in SNS through the viewpoint of Evolutionary Psychology.

Friday, July 1, 2011

Social Networking Sites and Likeminded Others

Selective exposure, the idea that we seek out information that agrees with our point of view and avoid sources that disagree with it, is an appealing notion.  Unfortunately, it's one with mixed findings.

One aspect of selective exposure is the idea that social networking sites like Facebook will make it worse.  We'll hang out, digitally that is, with folks a lot like ourselves.  There are ways Facebook organizes our news feed ("top news" versus "most recent") that should make this even worse.  That's a topic for another time, but look at both and you'll see.  Based on this and what we know about source preference, social networking sites might be expected to decrease our exposure to opinions unlike our own.

But this study finds just the opposite, that social networking sites may increase our exposure to dissimilar views.  Using Pew data, the authors found use of SNS predicted exposure to cross-cutting political viewpoints even after statistically controlling for a number of socio-demographic factors (age, education, etc.).  This holds across partisan predispositions.

Oddly, traditional news was unrelated in the model to exposure to cross-cutting views (see Table 2).  This is where the study, for me, starts to raise serious questions.  Nowhere but in a mainstream, traditional approach to news will you find all sides presented more or less fairly, or at least given roughly equal time/space.  This excludes, of course, certain talking head shows on Fox News, MSNBC, etc.  So either social networking sites like Facebook are the salvation when it comes to exposing people to dissimilar views, or there's something wrong methodologically in the study.  Let's break it down.
  • SNS Use:  this is a 5-question index devoted specifically to using Facebook, etc., for news and political information. This makes for a very different measure than mere use of SNS, so already the results apply only for those who make use of FB and other SNS to find out news and political info.  I'm an avid FB user, but I never think to use it for that, so we're talking about a variable constrained by the particular use of the medium. 
  • Cross-Cutting Exposure: The dependent variable here is a single item described as "respondents were asked to indicate whether most of the sites they visit to get political or campaign information online challenge their own point of view, share their point of view, or do not have a particular point of view."  This dependent variable is loaded with social desirability.  I don't know how I'd do it differently, given the reliance on the Pew data, but this question only measures people's perception of their news choices, not their actual news choices.  A Fox News viewer who is politically conservative may answer this question in such a way that yes, I seek out views that challenge my own -- but we all know that in this case, it just ain't so. 

Yes, you can quibble with any methodology.  That does not take away the study's main finding, that social networking sites lead to greater exposure to opinions you disagree with.  It's rather surprising, theoretically.  Indeed, the study hypothesizes such a result but I'd argue the original hypothesis, based on all we know, should have been the other way around.  It should have hypothesized that FB use would lead to less, not greater, exposure to dissimilar others.