I work in the Grady College of Journalism and Mass Communication. There are a lot of programs out there that include some form of mass communication in their title, either singular or, oddly, plural. But how popular is the phrase? And is it changing over time? Glad you asked. Here I present a graph, via the magic of Google, of how often mass communication has been mentioned in books over time.
As you can see, the phrase peaked in the 1950s, dipped but clawed its way back in the 1970s, then started a slow and seemingly inevitable decline.
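For anyone who wants to poke at these numbers themselves, here's a minimal sketch. Fair warning: it leans on the unofficial JSON endpoint behind the Google Ngram Viewer, so the URL, parameters, and corpus name are assumptions that could change without notice -- Google doesn't document this as a public API.

```python
# A minimal sketch of pulling the numbers behind a chart like this one.
# It hits the unofficial JSON endpoint used by the Ngram Viewer; the endpoint,
# its parameters, and the corpus name are assumptions and may change.
import requests

def ngram_series(phrase, year_start=1900, year_end=2008,
                 corpus="en-2019", smoothing=3):
    """Return yearly relative frequencies for `phrase` in Google Books."""
    resp = requests.get(
        "https://books.google.com/ngrams/json",
        params={
            "content": phrase,
            "year_start": year_start,
            "year_end": year_end,
            "corpus": corpus,      # copy the corpus id from a Viewer URL if this one fails
            "smoothing": smoothing,
        },
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    return data[0]["timeseries"] if data else []

if __name__ == "__main__":
    series = ngram_series("mass communication")
    if series:
        peak_offset = max(range(len(series)), key=series.__getitem__)
        print("Peak year:", 1900 + peak_offset)
```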
As some of you know, I wrote a testy post some weeks back about the possibility of changing the name of our college. That was mostly about losing journalism (bad idea) or adding a long list of stuff after journalism (worse, given how clunky the result would read -- Journalism and Media and so on). Here I present an argument for losing mass communication, undercutting my own position, because I'm a data guy and the graphic above suggests our name is, well, kinda stuck in the 1950s. The mass communication part, mind you. Journalism stays until you pry the name out of my cold dead fingers.
Okay Hollander, but how about journalism? Isn't it dying? Below, the graphic is squeezed to fit, given how far back in time the word first appears.
What does this really mean? Hard to say. This is, after all, merely mentions in books scanned by Google, but the data suggest quite strongly that mass communication as a phrase may have less meaning today and tomorrow. Journalism means something specific and, after all, Henry W. Grady was a journalism guy, but mass communication may be overdue for a change. The problem is, we end up with all these awful combinations of words to try and capture what it is we do in our college (essentially we teach students to inform, persuade, and entertain -- but that makes for a lousy college title).
Random blog posts about research in political communication, how people learn or don't learn from the media, why it all matters -- plus other stuff that interests me. It's my blog, after all. I can do what I want.
Friday, May 31, 2013
Political Knowledge -- Over Time
In playing with Google, we can see how the phrase "political knowledge" has been used in books over time. See below.
So the phrase "political knowledge" really took off in the 1950s, hitting its high point in the early 1960s, and then dropped. There were burps up in the 1970s (Watergate?) and the 1980s (Reagan?). It's odd the term never really appeared until the 1940s.
Interesting, while not being terribly informative.
Thursday, May 30, 2013
Polls of Journalists
There's a story about an Australian poll of journalists that deserves a brief mention. A lot of it is caught up in that country's politics, which won't mean a hell of a lot to U.S. readers, but there is one small methodological point I want to explore. See below:
On the topic of voting intentions, only 61 per cent of the 605 respondents were willing to reveal their voting intention, but of those who did, 43 per cent said they would vote Labor, 30 per cent Coalition and 19 per cent Green.
For a lot of us those party affiliations mean little. The real point here is that only 61 percent gave their voting intentions, and from that smaller subsample we have a list of partisan preferences. So basically we're operating with a non-random sample of 369 journalists, because there's no reason to suspect people randomly choose whether to give their political preference. With a number that small, the margin of error balloons. Plus the subsample has become so non-random that it's questionable whether the margin of error (about 5 percent here) even matters.
In other words, beware.
Now you could do a quick-and-dirty analysis to see if those who agreed to reveal their party preferences differed from those who did not on such factors as age, education, etc. If you don't see significant differences there, you can have more confidence in the meaningfulness of your results. The story here doesn't go that deep, so it's hard to say. The real point is, beware subsamples in which people assign themselves through a willingness to reveal something about themselves. My hunch is the journalists who didn't reveal their partisan leanings differ wildly from those who did -- and that throws the whole analysis out the window. A rough sketch of such a check appears below.
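Here's roughly what that quick-and-dirty check could look like. The poll's raw data isn't public, so every column name below is a hypothetical stand-in, not anything from the actual survey.

```python
# A quick-and-dirty version of the check described above: do journalists who
# revealed a voting intention differ from those who declined? All column names
# are hypothetical -- the Australian poll's raw data isn't available.
import pandas as pd
from scipy.stats import chi2_contingency

def compare_revealers(df, demographic_col, revealed_col="revealed_vote"):
    """Chi-square test of independence between revealing a vote intention
    and a demographic category (age group, education level, etc.)."""
    table = pd.crosstab(df[revealed_col], df[demographic_col])
    chi2, p, dof, _ = chi2_contingency(table)
    return chi2, p, dof

# Usage, assuming a DataFrame `journos` with those hypothetical columns:
#   chi2, p, dof = compare_revealers(journos, "age_group")
# A small p-value would suggest the 61 percent who answered are not a random
# slice of the full 605 -- which is exactly the worry.
```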
Tuesday, May 28, 2013
Brilliant Scholar Update
I like to sometimes see who is citing my research -- thus making this another Brilliant Scholar Update.
- If I spoke German, or maybe Dutch, I'd have more to say about this study, which cites me (shouldn't everyone?). Ah well, on to the next in our brilliant scholar update ...
- This study looks at political satire, a favorite topic of mine. Basically it's a summary of the available literature. In the same journal issue there's another that cites me.
- Keeping with our The Daily Show and satire theme, this article in MC&S notes my early work (two cites, from 1995 and 2005).
Wednesday, May 22, 2013
By National TV News, I Mean ...
When we ask survey respondents how much they watch national television news, what goes through their mind? Do they calculate some generic TV watching total? Do they weight their answer depending on how much they watch, say, Fox News versus CNN versus other networks?
I was curious, so of course I turned to the data.
This is part of a bigger paper I'm writing, but I thought I'd share a few results. The idea is simple -- in a survey we may ask the generic question (national television news) and we may or may not get into specifics (ABC, MSNBC, etc.). Which is better? Is there any real difference? This matters both methodologically (we can save money with a single question) and theoretically (generic TV viewing may predict different outcomes than network-specific TV viewing). Watching Fox News, we can all agree, is somewhat different than watching MSNBC.
First, lemme get at the overlaps. A national survey asked respondents whether they watched the three broadcast networks and the three cable networks, and also included a generic "national television news" question. As you'd expect, answering "yes" to one meant you were likely to answer "yes" to the others. The lowest overlap? No surprise -- Fox News and MSNBC (5.4 percent said "yes" to both). The greatest overlap? NBC and CBS (19.5 percent "yes" to both). And how do these overlap with the generic TV question? Pretty well, most around the 20 percent overlap mark except MSNBC (9.9 percent), perhaps because it's not available on as many cable systems.
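If you want the flavor of how those overlap figures get computed, here's a minimal sketch assuming a pandas DataFrame of 0/1 "do you watch X" indicators. The column names are hypothetical, not the survey's actual variable names.

```python
# A minimal sketch of the pairwise overlap calculation above, assuming a
# pandas DataFrame of 0/1 viewing indicators. Column names are hypothetical.
from itertools import combinations

def pairwise_overlap(df, cols):
    """Percent of respondents who answered 'yes' (1) to both items in each pair."""
    out = {}
    for a, b in combinations(cols, 2):
        out[(a, b)] = round(((df[a] == 1) & (df[b] == 1)).mean() * 100, 1)
    return out

# Usage, with hypothetical columns:
#   pairwise_overlap(survey, ["abc", "cbs", "nbc", "fox", "cnn", "msnbc", "natl_tv_news"])
```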
The real question here, from a PhDweeb standpoint, is whether the generic question is as good a predictor of various outcomes as the specific network measures. I'm in the middle of this, but I can tell you a few small differences do emerge -- not as many as you might expect, and few are surprising. Below, I sketch out some regression model results that control, statistically, for a bunch of other factors (education, political interest, etc.) to predict stuff like political knowledge.
- Political Knowledge -- Three are negative (national TV news and watching ABC and CBS). The others, no relationship. In other words, the generic question predicts less knowledge (only barely, by the way), as do two of the three broadcast networks. MSNBC comes close to significance and is positive. That's interesting.
- Rate Obama -- No surprises here. Generic national TV viewing and all of the cable networks, save one, predict positive feelings toward President Obama. The exception, of course, is Fox News, which is negative. Duh. If you ever wanted quantitative evidence of Fox News exceptionalism, here it is.
- Talk to Friends -- This is a fun one. Does watching the news predict your likelihood to talk politics with friends? The generic TV news viewing measure is a strong predictor even after statistical controls (beta = .14, p<.001 for the nerds among you), but not a single specific network is related to talking except Fox News, which is also positive. That's fascinating. I'm guessing watching Fox gives those viewers something to talk about, no doubt about Obama and not in a good way.
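For the methods-minded, here's roughly the setup behind the bullets above. This is a hedged sketch with made-up variable names, not the actual code or data from the paper.

```python
# Not the actual models from the paper -- just a sketch of the setup behind
# the bullets above: regress an outcome (political knowledge, Obama rating,
# talking politics) on news exposure plus statistical controls. All variable
# names are hypothetical stand-ins for the survey's items.
import statsmodels.formula.api as smf

def exposure_model(df, outcome, exposure_terms):
    """OLS of `outcome` on exposure measures plus standard controls."""
    formula = (f"{outcome} ~ " + " + ".join(exposure_terms)
               + " + education + pol_interest + age")
    return smf.ols(formula, data=df).fit()

# Generic question only:
#   exposure_model(survey, "knowledge", ["natl_tv_news"]).summary()
# Network-specific measures:
#   exposure_model(survey, "knowledge", ["abc", "cbs", "nbc", "fox", "cnn", "msnbc"]).summary()
```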
Friday, May 10, 2013
The Most Trusted in America?
Who is the most trusted person in America?
Hint: it's not Walter Cronkite.
A Reader's Digest (yes, it's apparently still around) poll found Tom Hanks to be the most trusted person. Below, a brief Top Ten List.
1. Tom Hanks
2. Sandra Bullock
3. Denzel Washington
4. Meryl Streep
5. Maya Angelou
6. Steven Spielberg
7. Bill Gates
8. Alex Trebek
9. Melinda Gates
10. Julia Roberts
Notice how no journalists make the list? Not until #12, Robin Roberts of Good Morning America.
Other interesting folks include chemists at #11 and #14 (weird), an economist at #15 (weirder, but then again they all have won Nobels), Dr. Oz at #16 (kill me now), Michelle Obama at #19 (first real politico to make it, if you count the First Lady as a politico), and Noam Chomsky at #20 (no doubt voted on by people who haven't read his stuff). There are a few TV talking head types, journalist-like folks, sprinkled throughout the list, as are more Hollywood types and even Supreme Court justices. That's heartening. And let's not forget Judge Judy, somehow getting #28, and Adam Sandler at #64 (what the hell?).
As a side note, two people from the NYTimes and one from the WSJournal made the top 101. CNN had a few. Fox had Shep Smith.
So basically this is a popularity contest, or more to the point -- a familiarity contest. It's hard to say because nowhere on the site can I easily find anything about the poll methodology other than "over 1,000" people were surveyed. Did they generate names on their own, these respondents? Were they given a giant list?
Finally, let us praise the journalism professor who finished at #83 -- Michael Pollan, who is there more for his books and stuff, not for being a professor.
Tuesday, May 7, 2013
How Laryngitis Became Something Worse
Excuse me as I hijack my blog, normally about media, to write something personal, about how my laryngitis became something much much worse.
The story goes like this. I was grading one Sunday afternoon. My wife or daughter asked me a question and when I answered with a cracked voice I joked that I'd lost it. It came back, then went away again a few minutes later.
Well hell.
As I was teaching a three-hour graduate seminar, I figured it for a case of laryngitis. That was March 3. After a few days I call my doctor. You know the routine -- it'll pass on its own, they say, just give it some time, so I give it some time. A week later I call again, suspecting my occasional acid reflux may be the cause. They shrug, tell me it'll pass, give me some meds for reflux just in case.
Yup, I call again a week later. Still no voice. They prescribe some prednisone, thinking quite reasonably some kind of inflammation is the cause.
Still no voice.
Where's Dr. House when you need him?
Finally, I convince my GP to refer me to an ear-nose-throat guy. He deadens my nose and throat, snakes an itty bitty camera down to peek at my vocal cords, and says my left vocal cord is paralyzed. It happens, he says, and often we don't know the cause and it just takes care of itself with time. Still, let's do a CT scan (now known as Scan #1) just to be sure. Smart move. Turns out, Scan #1 reveals a nodule on my thyroid pressing the nerve that controls the left vocal cord. Not all that unusual, and only about 5-10 percent of the time is such a nodule anything more than an abnormal growth to be dealt with in various ways.
Five to 10 percent chance. You see where this is going, right?
So we do a sonogram (forever known as Scan #2) to confirm my throat is not pregnant and there is indeed something there, something suspicious.
Let's do a different test, my doc says. A PET scan (Scan #3). Serious stuff, #3, and yes the nodule lights up like a Christmas tree. It is, indeed, cancer. And oh, by the way, a couple of very small spots lit up in your lung, but they're likely unrelated to cancer, probably no big deal.
Yeah, with my luck?
Last week was a biopsy where they stab a very fine needle into your throat. Actually it's not as bad as it sounds, the doc and nurse were real pros, and the test confirms that while it is cancer, it's papillary and not one of the others. Good news, if you can call it good news in a cancer-glass-half-full kinda way. The plan is simple: a doc slits my throat, removes the thyroid, patches me up, puts me on meds for the rest of my life, and theoretically all should be fine. Lotsa people in my building have been through this.
Bring it on, I say.
And yet, and yet.
Doc says, let's do Scan #4. That's next week. Because of those damn other spots. Because we want to be sure. Because there's some radioactive material that's yet to be inserted into your body. Then we'll do surgery. Really. Promise.
I expect to be on the table in late May, sliced and diced.
And so, kids, that's the story of how laryngitis became cancer. I still find it hard to think of myself as having cancer. That's what other people have. Not me. Not ever. Probably everyone thinks that way when they find out. I dunno. Anyway, it was damn difficult teaching with no voice. I used a computer to speak, my iPad to speak, I used microphones, I whispered really loud and sounded like Batman or that guy who calls late at night and asks what you're wearing. My throat hurt like hell while I muddled through my classes. The rest of me feels fine -- except that I have (did I mention this?) cancer.
I suppose we can be thankful I lost my voice; otherwise it might have taken a while before any symptoms showed up. Also, theoretically, once all this is done and assuming the nerve goes undamaged in surgery, my voice should return to its full nasal glory. I was lucky enough to snag a research sabbatical for the Fall (I prefer to believe it was based on the quality of my proposal, but then again ...). Regardless, I'll be around Grady working on a pretty cool research idea (I'll write about it another time) and then I'll be back in Spring teaching, assuming nothing goes wrong in the surgery.
I can't be that unlucky, right?
Friday, May 3, 2013
Margin of Error
So there's this story today about people's opinion of the name Redskins for the Washington NFL team. The story reports an N of 1,004 respondents with a margin of error of 3.9 percent. That struck me as big, so I did the math, and the margin of error I get is 3.09 percent. Either the reporter dropped the "0" or there's something more to the poll that's not in the story.
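For the curious, the math is a one-liner. Here's a minimal sketch, assuming simple random sampling and the usual conservative p = 0.5.

```python
# The arithmetic in question: the 95 percent margin of error for a simple
# random sample, at the most conservative p = 0.5.
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the 95% confidence interval, in percentage points."""
    return z * sqrt(p * (1 - p) / n) * 100

print(round(margin_of_error(1004), 2))  # 3.09, not 3.9
```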
Does it matter? No, not really. It's my job to focus on the trivial, is all.
News Tribes
Has news become tribal?
In the spew that is Twitter, a retweet (that I can no longer find) raised the question of whether news consumption has become tribal. Perhaps it was someone from Pew at a forum, but the idea is stuck in my head. Let me explain, and propose a research project.
First, what is tribal? Here's one definition off the net:
A unit of sociopolitical organization consisting of a number of families, clans, or other groups who share a common ancestry and culture and among whom leadership is typically neither formalized nor permanent.

I'm sure there's a more social science definition out there, but let's go with the above. We all know media fragmentation and selective exposure have led to news consumption patterns -- especially on cable TV -- along partisan lines. Conservatives tend to watch Fox, liberals tend to watch MSNBC. I'm arguing that news consumption may be tribal in a way that goes beyond ideology and partisanship, not unlike the way fans of a television program represent, in some ways, a tribe. It's "us versus them," which dips into the "in-group" and "out-group" studies found in psychology and sociology, but I argue it gets more anthropological than that. The cable networks, especially, often frame their "news" today as "us versus them." This is especially true of the talking heads, from Bill O'Reilly to Rachel Maddow, but I suspect there are threads slipping into the news coverage as well.
Don't think this is anything new. Journalists often frame stories this way, or good versus bad, or any number of ways. I'm arguing that people who watch, say, Fox News are today more likely to look negatively on people who do not watch Fox News.
That's your operationalization, your measurement. Ask people where they get their news, and then ask them what they think about people who watch other sources. Get a bit deeper. Ask them why they think people do so. The answers will often fall back on our tried-and-true partisan and ideological lines, but I think we'll see an emergence of almost tribal feelings. How do I define tribal and how do I separate it from more mundane partisan hackery? I dunno, not yet. It'd take more reading, more thinking, but I suspect Pew could tack on a few questions the next time they ask about news sources to see if anything does pop up. Or, to flip it, ask why people watch their favorite news source, and what they think of people who do the same, and those who do otherwise.
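To make the operationalization concrete, here's a minimal sketch of the first cut I'd run. The survey, its variables, and the feeling-thermometer items are all hypothetical; nothing here refers to an existing dataset.

```python
# One hedged way to run the comparison sketched above: group respondents by
# their main news source and see how warmly they rate the audiences of other
# outlets. Every column name here is a hypothetical survey item, assuming a
# pandas DataFrame of responses.

def outgroup_warmth(df, source_col="main_news_source",
                    rating_cols=("rate_fox_viewers", "rate_msnbc_viewers",
                                 "rate_cnn_viewers")):
    """Mean 0-100 feeling-thermometer rating of other outlets' audiences,
    broken out by the respondent's own main news source."""
    return df.groupby(source_col)[list(rating_cols)].mean().round(1)

# In-group/out-group gaps that survive controls for party ID and ideology
# would be consistent with the "tribal" reading, rather than plain partisanship.
```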
There's something here, something worth pursuing.
Thursday, May 2, 2013
What People Know ... About Immigration
A new Pew poll has some knowledge questions about the immigration bill and, as you'd expect, we don't do so good. See below (bold face is the correct answer):
You can write this a couple of ways. You can say, as Pew notes, that nearly half don't know who introduced the bill (the 47 percent on the first question) but nearly half do know applicants can remain in the U.S. (46 percent). Or you can combine the questions and be really depressing, as Pew noted:
Just 24% of the public correctly answered both knowledge questions, 35% got one question correct while 41% answered neither question correctly.

In other words, fewer than a quarter of U.S. adults got both right, and 2-out-of-5 couldn't manage to answer either one correctly.
The fine folks at Pew do something neat in their next section: they look at attitudes of the "knowledgeable." See below:
This is telling. The more knowledgeable also are more supportive, the less knowledgeable are less so (and more likely to say, quite honestly I think, that they have no opinion).
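Mechanically, that comparison is simple to build. Here's a hedged sketch with made-up variable names standing in for the Pew items.

```python
# A small sketch of the "knowledgeable vs. not" comparison: score the two
# knowledge items 0-2, then cross-tab support for the bill by that score.
# Column names are hypothetical stand-ins for the actual Pew variables.
import pandas as pd

def support_by_knowledge(df):
    """Share supporting the bill at each knowledge level (0, 1, or 2 correct)."""
    correct = df["knows_sponsors"].astype(int) + df["knows_can_stay"].astype(int)
    return pd.crosstab(correct, df["supports_bill"], normalize="index").round(2)
```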