Just picked up a copy of Talking Together: Public Deliberation and Political Participation. Yes, note the titular colonicity -- gotta have a colon to qualify as serious academic work, otherwise it's just hackery.1
Anyway, this book focuses on deliberative democracy. Basically, you sit a bunch of people in a room, educate them in an unbiased fashion about the issues, then poll them to get results theoretically better than the kind of uninformed opinion you'd often get from a random survey of U.S. adults.2
I'm burying the lede. I've only skimmed the book so I can't give an overall judgment, but there are concepts that matter to me: political engagement, participation, attention, and of course my old favorite, knowledge (which plays a bit part, not a starring role). In this case, there is a set of multiple regression tables in Chapter 5 with participation as the dependent variable and a host of independent variables tossed into the recipe. These ingredients are grouped as demographic factors (race, gender, etc.), social capital (belong to organization, religious attendance, etc.), political capital (efficacy, trust, knowledge, attention), and the last in the model -- deliberation.
Now the lede: deliberation still explains unique variance even after controlling for every possible other variable in the social science universe.3 Now the secondary lede: knowledge sometimes retains its explanatory power, even after all these controls. In some tables yes, some tables no.4
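The hierarchical setup those tables describe -- enter the control blocks first, then deliberation last, and see whether R-squared improves -- can be sketched with simulated data. Everything below (variable names, effect sizes) is invented for illustration, not the book's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Hypothetical stand-ins for the book's predictor blocks
age = rng.normal(size=n)            # demographic block
group_member = rng.normal(size=n)   # social capital block
knowledge = rng.normal(size=n)      # political capital block
deliberation = rng.normal(size=n)   # the last block entered
# Simulated outcome in which deliberation carries unique variance
participation = (0.2 * age + 0.3 * group_member + 0.25 * knowledge
                 + 0.3 * deliberation + rng.normal(size=n))

def r_squared(X, y):
    """R-squared from an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

controls = np.column_stack([age, group_member, knowledge])
r2_reduced = r_squared(controls, participation)
r2_full = r_squared(np.column_stack([controls, deliberation]), participation)
print(f"R2, controls only:      {r2_reduced:.3f}")
print(f"R2, plus deliberation:  {r2_full:.3f}")
print(f"unique variance added:  {r2_full - r2_reduced:.3f}")
```

If that increment is reliably above zero (in the real analysis, an F test on the R-squared change), deliberation explains participation beyond the controls -- which is the lede.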
So knowledge leads to participation? Couldn't it be the other way around?
There are some obvious causality issues here, and the authors rightly mention them, so I'm not going to get into that methodological mess. Just skimming, the book is sound, thoughtful, and a worthy addition to the participation/social capital/deliberation literature.
---
1 Like this blog
2 That these people differ from regular people is hardly surprising
3 Okay, not every variable. Beer consumption is oddly missing
4 Attention, however, offers zilch to the model
Monday, August 31, 2009
Sunday, August 30, 2009
Multi-Taskers are Mediocre
Interesting research on multi-tasking came out last week. There's a NYTimes version and a slightly better one by Wired that includes info on how the study was conducted. Basically, multi-taskers may do a lot of stuff at the same time, but they do none of it particularly well.
From one of the authors:
“Multitaskers were just lousy at everything,” said Clifford I. Nass, a professor of communication at Stanford and one of the study’s investigators. “It was a complete and total shock to me.”
You can read the abstract here. It reports multitaskers were "more susceptible to interference" and that they performed less well on memory tasks (this last part not from the abstract but rather the Wired version of the story).
So what?
We all multi-task. Especially when we're consuming the news.
So I'm thinking, if we multitask, do we remember less of the news? Seems an obvious yeah, but how do we test this? Controlled experiment is the easy answer. But what about a survey? We could ask people how often they watch or read the news while doing something else, and what that "something else" might be. Assuming they're honest, we'd find some interesting results in political knowledge.
Okay, what about people who multitask the news? That is, people who say they use lots of different sources, like TV and newspapers and online and magazines. This is not traditional multitasking (doing many things at once), but kinda interesting nonetheless. We want people to rely on multiple news sources, but is that also a bad thing? This is easily tested in a survey scenario: just ask them about all the different ways they get news and then create an index that measures the variety of sources. Will more variety be associated with more, or less, political knowledge?
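Building that variety index is simple enough to sketch. The source list, responses, and the effect size here are all invented, just to show what the quick-and-dirty analysis would look like:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
# Hypothetical yes/no survey items: does the respondent use each source?
sources = ["tv", "newspaper", "online", "magazine", "radio"]
use = rng.integers(0, 2, size=(n, len(sources)))
variety = use.sum(axis=1)  # the variety index: count of distinct sources used
# Toy knowledge score loosely tied to variety, purely for illustration
knowledge = 0.4 * variety + rng.normal(size=n)
r = np.corrcoef(variety, knowledge)[0, 1]
print(f"variety-knowledge correlation: r = {r:.2f}")
```

The real analysis would, of course, partial out education, interest, and the usual controls before trusting that correlation.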
Dunno. If time allows, I'll do a quick-and-dirty analysis this week and report back.
Labels:
academic research,
multi-tasking,
multitasking,
new york times
How Much Attention is Attention?
There are good reasons to pay attention to news and public affairs (civic responsibility, your job requires knowing what's happening, conversational material, etc.). There are good reasons to not pay attention (busy schedule, hate the partisanship, worried about missing the next Dancing with American Idol episode, etc.).
How much attention is enough?
If you read the scholarship on political knowledge, you might come away thinking there is a magic amount that everyone should know, a certain degree of attention everyone should have. Carefully read the literature, though, and you find the nuances: people learn what they need to know based on their jobs, their social situations, their time and resources.
Scholars tend to ask the same kinds of questions to tap political knowledge. We do this in part because they seem to work from study to study and it allows us to compare -- more or less -- our findings with those of other studies. It all comes down to the reliability and validity of our measures, which, if established by other research, free us to get at the more substantive analyses. In other words -- get past the method stuff and get to the fun results. Sometimes a study will come along that challenges the status quo of how to measure knowledge. These are acknowledged quickly by most scholars, who then move on to, yes, studying it the same way other people have done over the years.
It may be time to reconsider our measures.
So back to my question above: How much attention is attention? We know that media exposure is a lousy measure, that how much someone attends to a medium is a more fair, more valid, and at times more reliable measure. Perhaps we need individual-specific attention measures. Is a "high" attention for one person the same as a "high" for another? And do they have the same consequences? We have to tie together exposure, attention, and some other factor -- not sure what -- to get at the changes in the media landscape. One answer may be elite versus middle-brow versus low-brow sources of news.
Is high attention to Bill O'Reilly the same as high attention to NewsHour? Probably not, but they'd go down as the same in a research situation (depending on how you scale it).
But trying to scale the kinds of content people consume, that's rife with problems. Should I score Fox News lower than CNN? No, I don't think so. People versus New Yorker magazines? Yep, that's easy. It's TV where we get into trouble, and we're probably going to have to drill down to the individual program level to get at some of this, and that's a whole different problem when it comes to survey (versus experimental) research.
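One common fix is to weight exposure by attention, and the contested part is whether to also weight by source. Here's a toy version -- the scale ranges and especially the source weights are my own inventions, not anything from the literature:

```python
def attention_index(days_per_week, attention, source_weight=1.0,
                    max_days=7, max_attention=4):
    """Exposure weighted by self-reported attention, normalized to 0-1,
    then optionally scaled by a source weight -- the contested part."""
    raw = (days_per_week * attention) / (max_days * max_attention)
    return raw * source_weight

# Identical self-reports, different hypothetical source weights
oreilly = attention_index(5, 4, source_weight=0.7)
newshour = attention_index(5, 4, source_weight=1.0)
print(f"O'Reilly viewer: {oreilly:.2f}, NewsHour viewer: {newshour:.2f}")
```

The whole problem is where source_weight comes from: scoring one cable channel below another is exactly the kind of call that's hard to defend, which is the point of the paragraph above.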
Labels:
attention,
measurement,
media attention,
research methodology
Friday, August 28, 2009
The Politics of "Civic Knowledge"
Everyone agrees "civic knowledge" is important. People need to understand how government works and the historical nature of our system and other systems, how they compare, the basics of structure, and how just a bill on Capitol Hill can become law. But I can't help but think there's a lot of politics involved in these calls for "civic knowledge."
Here's one from a day or so ago. According to the pullout quote:
Coalition Calls for a Renewed Focus on Western Civilization and Ethics Coursework in Texas Colleges and Universities
Asks lawmakers to study the value of fostering greater civic knowledge
The Coalition for American Traditions and Ethics? Obviously I'm big on political and civic knowledge. Hell, I blog about it constantly, plus I do research in this area, but I gotta think there's just a hint of politics and "culture war" stuff going on here.
Reasons for Paying Attention
I wrote yesterday about what the reasons might be for people to pay attention to public affairs.
Attention to the news is necessary, according to the traditional argument, in order for people to know what's going on. An informed public, so goes the theory, is vital in a democracy. Let's assume all that (though it makes an interesting argument for a later date as to whether it matters if a public is informed or not).
What are the reasons for paying attention?
There are a lot of reasons to not pay attention.
- There are a zillion TV channels now. You don't have to watch TV news unless you absolutely want to. Many have chosen to do just that.
- The news itself -- especially on cable TV -- is often petty, partisan, and particularly focused on the inane.
- I get my news off Facebook ... whatever friends happen to link to.
- The source of real news -- newspapers -- is often presented in a dull, dry manner.
- It's so easy to just be entertained. Or if you want to feel informed while being entertained, there's always The Daily Show or Colbert Report.
And the reasons to pay attention?
- It's your responsibility in a democracy
- Take your medicine; it's good for you
- Being informed is fun!
So Jon Stewart has taught us an important lesson. For many, the news has to be entertaining. And Bill O'Reilly has taught us that, for many, the news has to be highly partisan. And Rush Limbaugh has taught us that, for many, the news can even be on radio. And the Internet has taught us that, for many, the net is for porn.
I'm being a bit of a pessimist here. The NYTimes and Wall Street Journal prove that, for many, quality serious news will sell. At the hyper-local level, people will pay attention if you cover your community in a way that rings true to them, stories that seem real and not artificially distanced from the communities they strive to describe. People will also pay attention if you somehow manage to work sex into the story, but that doesn't always work, so it's a challenge for many types of news stories.
Ultimately, getting people to pay attention to what's happening in the world, that's not so hard. Getting them to pay attention to News, that's a bit harder. Getting them to pay for News, that's even more difficult. And thus we come to the core of the problem -- that journalism is expensive, at least the way we've been doing it for decades, and the revenue models are unraveling in some ways.
We've got to come up with ways to convince people to not only pay attention, but to pay for paying attention.
Thursday, August 27, 2009
Reasons for Paying Attention
Citizens do need to be more engaged in politics, but the reasons for paying attention need to be clearer to them, the benefits of stronger citizenship must be more evident, and the opportunities to learn about politics more frequent, timely, and equitable.
- Michael X. Delli Carpini & Scott Keeter
What Americans Know About Politics and Why It Matters
The quote above, from an excellent book I've had shamelessly checked out of the university library now for a million years, gets right to the point I'll be addressing for the next few days -- the reasons for paying attention. For the average American, what are they? Do they still work? What does this mean for news, and for journalism? For democracy? Are we -- to steal a great book title -- Amusing Ourselves to Death?
This is serious stuff. I'll summarize much of what I've blogged about before, look at some of the more recent research, and ask why someone should even want to pay attention to the business of public affairs.
Stay tuned, or clicked, or whatever the hell it is you do for a blog.
Wednesday, August 26, 2009
Titular Colonicity
I've blogged about titular colonicity so many times (latest one here but also see here), it feels like an old friend. There's a theory that as an academic field "matures" you see more and more journal titles (hence, titular) with a colon (hence, colonicity). Plus the name is damn funny.
I'd love to do a serious study of mass comm journal titles, but I'm not sure where the heck I'd publish the thing. But as a quick-and-dirty study, I looked at 1960 titles from Journalism Quarterly and then what's been published so far in 2009 (now it's called Journalism and Mass Communication Quarterly).
This was not a careful, systematic study, but if we can argue mass comm has "matured" as a field -- and I think we can -- the academic journal titles certainly seem to reflect that, at least as far as my small sample from one journal is concerned.
- In 1960, I count an almost 3-to-1 ratio in favor of NO colons in titles.
- In 2009, I count an almost 2-to-1 ratio in favor of USING colons in titles.
This is clearly a good AEJMC conference paper and possibly, but less likely, journal article. I'm tempted, oh so tempted, to carry through with it. But if someone out there wants to take a stab at it, I claim no ownership. Go for it. Just let me know how it comes out.
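For anyone who does take a stab at it, the counting itself is trivial. A quick-and-dirty version on a handful of invented titles (not the actual Journalism Quarterly samples):

```python
def colon_ratio(titles):
    """Count titles without a colon vs. titles with one."""
    with_colon = sum(":" in t for t in titles)
    without = len(titles) - with_colon
    return without, with_colon

# Hypothetical journal article titles, purely for illustration
sample = [
    "Readership of Sunday Comics",
    "Framing the Economy: News Coverage in Two Recessions",
    "Agenda Setting Revisited: A Panel Study",
]
without, with_colon = colon_ratio(sample)
print(f"{without} without vs. {with_colon} with a colon")
```

The real work, of course, is pulling the full run of titles for each year and arguing the trend is about the field maturing rather than editorial fashion.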
Tuesday, August 25, 2009
Objective and Perceived Knowledge
I've written many times about perceived knowledge, the perception that we are informed, and how it is different from -- but related to -- actual or objective knowledge. Indeed the two are often highly correlated, but not perfectly so. My interest has always been in the role the media play in creating the sense of being informed versus actually gaining knowledge.
But let's talk condoms.
There is this study, published in Pediatrics, that examines perceived and objective knowledge about adolescent male condom use. Other than being a neat topic, I found the methodology kinda interesting.
No it's not what you think, so keep reading.
They used a 5-item scale to measure knowledge and a 5-item scale to measure confidence about that knowledge. Is confidence the same as perceived knowledge? That's an interesting question; it's fascinating to tap not only a set of knowledge items but confidence about those very same items. So if you missed a few of the five but were very confident, you are high in perceived knowledge without being high in actual knowledge. Typically we ask a very different kind of question to measure "perceived knowledge," a set that often looks very much like internal efficacy. I think the approach here may be much better, given that it's domain specific rather than general, like efficacy.
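The scoring scheme is easy to sketch: score the same five items once against the answer key and once for confidence, then cross-classify. The cutoff and the example respondent below are invented, not from the Pediatrics study:

```python
def classify(answers, key, confidence, cutoff=3):
    """Cross-classify a respondent on objective knowledge (items answered
    correctly) vs. perceived knowledge (items answered with confidence),
    using the same 5-item battery for both."""
    objective = sum(a == k for a, k in zip(answers, key))
    perceived = sum(confidence)  # how many items the respondent felt sure about
    return (("high" if objective >= cutoff else "low") + " objective, "
            + ("high" if perceived >= cutoff else "low") + " perceived")

# A respondent who misses most items but feels sure about all of them
print(classify([1, 0, 0, 1, 0], [1, 1, 1, 1, 1], [True] * 5))
```

That "low objective, high perceived" cell is exactly the empty-calorie case discussed below.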
Why does all of this matter? Simple. I argue that some news programming, especially entertainment-based news programming, can lead to the sense of feeling informed (my empty calorie theory) when you're actually not all that informed, or at least not as much as you believe. Feeling informed, those empty calories, mean you don't eat your spinach (serious news, that sort of thing).
In the abstract of the condom study, the authors report:
However, those with higher perceived knowledge, particularly in the context of low objective knowledge, may be at greater risk for not using condoms. Addressing not only objective but also perceived knowledge may increase the effectiveness of interventions that are designed to increase rates of condom use among male adolescents.
In other words, high perceived knowledge but less actual knowledge is a bad combination. I'd make the same argument across a number of domains, from health to consumption of serious news as part of being a citizen in a democratic society.
Science Knowledge Quiz
Watch a lot of Star Trek? Big Star Wars fan? Think you're a science nerd? Take the Pew Center's Science Knowledge Quiz and see how you stack up. The questions range from easy to damnably tricky, so good luck.
I got 'em all right, beating 90 percent of those who took it. Sheer dumb luck on a couple of them, but right now I need all the ego boost I can get, even if from a survey.
Monday, August 24, 2009
Learning Online
Short piece in today's NYTimes suggests that learning online may be superior to face-to-face instruction. Obviously there are a lot of methodological and self-selection issues in play here, yet it's an interesting if brief article. The actual report, if you feel brave enough to take on 93 pages of pdf, can be found here. It's a meta-analysis, meaning it statistically combines the results of 46 studies (winnowed from about a thousand on this topic). It's a powerful tool. You have to wade through the methods section to see the criteria for studies to be included, but basically they find -- within the studies that meet their criteria -- that web-based instruction works better than face-to-face in K-12 learning. BUT ... the authors caution that the small number of studies meeting their criteria makes it difficult to generalize to the K-12 population as a whole. Though important, this caveat kinda gets lost.
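For the curious, the pooling machinery behind a meta-analysis is simple. A fixed-effect sketch with invented effect sizes (not the report's numbers) shows the core idea -- more precise studies get more weight:

```python
import numpy as np

def inverse_variance_pool(effects, ses):
    """Fixed-effect meta-analytic pooling: weight each study's effect
    by the inverse of its squared standard error."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    w = 1 / ses ** 2
    pooled = (w * effects).sum() / w.sum()
    pooled_se = np.sqrt(1 / w.sum())
    return pooled, pooled_se

# Hypothetical effects (online minus face-to-face) from three fake studies
effects = [0.25, 0.10, 0.35]
ses = [0.10, 0.05, 0.20]
est, se = inverse_variance_pool(effects, ses)
print(f"pooled effect: {est:.3f} (SE {se:.3f})")
```

The report's caution maps onto this directly: with few qualifying studies, that pooled standard error stays wide and generalization gets shaky.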
Labels:
education,
learning,
new york times,
online learning
Teen Cellphone Use
Teens are closing the gap on adults when it comes to cell (mobile) phone use, according to a new Pew study. See the graphic to the right.
Yes, my teenagers finally got cell phones last month -- they'd been borrowing ours for years -- so they'll show up in the data soon enough.
The demographics of teen cell users are fairly consistent. Boys and girls, about the same. Race, not really a factor. Income, some effect but not as much as you'd think. Scroll down to see those tables if you're into data crunching.
What's this to do with what people know? A lot of these phones are truly mobile media devices, so you can get news and weather and sports -- if you choose. Or think of it in this way: every minute spent staring at that tiny screen is one less minute on the computer or watching TV or reading, thus cutting into the time in which political learning might take place.
The media pie doesn't usually get bigger; the slices among the different media tend to become smaller and smaller for older media, bigger and bigger for the newer media. In other words, radio never disappeared, but its slice of the pie (our time) got smaller and became more specialized, such as music in our cars or at work. The "slice" of mobile phones continues to grow and, as a consequence, time spent with other media will likely suffer. Since it's hard to get a lot of news via phone (not impossible, but less likely), you'd have to hypothesize that political knowledge will also be reduced.
Labels:
cell phone,
media use,
mobile phones,
pew center,
political knowledge
Sunday, August 23, 2009
More Knowledge Quotations
Information is not knowledge
- Albert Einstein
Knowledge is of no value unless you put it into practice
- Anton Chekhov
Take the two famous quotes above and you actually run smack into an ongoing controversy among those who study knowledge in general and political knowledge in particular. Is what people know sufficient to assume sophistication? Those who study expertise or sophistication or similar constructs would argue no, that knowledge itself is necessary but not sufficient. It's not what ya know, it's what ya know and how ya use it that matters.
This gets at how we measure knowledge. In political studies, we rely heavily on those civics class questions of who is Nancy Pelosi or what party controls the U.S. House of Representatives. We assume if you can rattle off the answers correctly, you're somehow more knowledgeable than those who cannot -- but we rarely build into our models how people use such information, if at all. Exceptions are when we see studies of how people organize their political world, often along partisan or ideological lines, and the role civics/textbook knowledge plays in all that.
Studies of sophistication, they use knowledge, but they also tend to use other factors in a multidimensional (and often muddy) construct. In other words, I'm not sure we gain all that much in our studies of "expertise" and "sophistication," which brings us back to our tired yet true measures of civics knowledge. They're simple, they hang together well (Cronbach's Alpha is usually in the .70s), and they have years of previous research to back 'em up. So we default to the conceptual safe ground.
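That alpha in the .70s is easy to compute for yourself. A minimal version on simulated binary knowledge items (the data here are fabricated, so the exact alpha means nothing):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
latent = rng.normal(size=(200, 1))  # underlying "knowledgeability"
# Five right/wrong items driven by the same latent trait, plus noise
items = (latent + 0.8 * rng.normal(size=(200, 5)) > 0).astype(int)
print(f"alpha = {cronbach_alpha(items):.2f}")
```

Items that all tap the same trait hang together and alpha climbs toward 1; that internal consistency is exactly what the civics batteries have going for them.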
But information, as the good professor notes above, is not knowledge.
It's an interesting problem for those of us who mess with these concepts, who try to analyze and predict based on them, who desperately need to publish in academic journals for promotion or tenure or a better raise (when there are raises and not furloughs). For me it's mostly an interesting intellectual exercise, and eventually I'd love to create the final, best, ultimate measure of sophistication that makes actual sense. Or, failing that, a knowledge measure that answers many of the criticisms I mentioned above, and the many I haven't bothered to get into here (but have discussed in other posts).
Maybe it's time for a Political Wisdom construct.
Huh?
Here's my model, the old guys you'd see around the table at some diner, sipping coffee and arguing politics, and some snotty college kid sits down and can tell them all the names they get wrong, the little factoids they get confused, but when it comes to knowing -- the old guys beat the snotty kid every time (in this scenario, I was once the snotty kid...long story). It's how you use information, how you make sense of the world, that matters most. Hence my new construct -- Political Wisdom.
And no, I have absolutely no idea how to measure it, especially in a telephone survey.
Saturday, August 22, 2009
Media Systems and Knowledge
Here's an interesting new study in the European Journal of Communication that looks at television media systems in different nations and their role in what people know. Unfortunately the link for me leads only to the abstract and not the full article -- at least from home. I may try again next week from my office box.
The authors compared three systems: "public service" (Denmark and Finland), "dual" (UK), and a "market model" (U.S.).
According to the abstract, the "public service" model for TV boasts greater public affairs and international news, which leads to greater knowledge in those areas. All is well in this model, including smaller differences between the advantaged and disadvantaged -- in other words, a smaller "knowledge gap." It's a news Utopia, or so you'd gather from the abstract.
The problem with relying on only an abstract to understand a study? Lines like this: "But wider processes in society take precedence over the organization of the media in determining how much people know about public life." I'm not certain just what the hell that means, but it smacks of saying "we really couldn't statistically control for all the obvious factors that would explain differences across countries, so what the heck, we wanted to publish this anyway." Actually, given the prestigious names on the study, that's not gonna be the case, but I can't parse the wording based on just an abstract. Maybe next week I'll look at the piece in full, because it's rare to see system-wide studies of this sort. We need more, but they're difficult to do and even more difficult when you try to account for all the other differences in comparing some place like Denmark to the United States.
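For what it's worth, the "knowledge gap" in that comparison is just a difference of group means, computed system by system. A sketch with made-up scores (nothing here comes from the actual study):

```python
import numpy as np

def knowledge_gap(scores, advantaged):
    """Knowledge gap: mean score of the advantaged group minus the
    mean score of everyone else."""
    scores = np.asarray(scores, dtype=float)
    advantaged = np.asarray(advantaged, dtype=bool)
    return scores[advantaged].mean() - scores[~advantaged].mean()

high_ses = [True, True, True, False, False, False]
# Hypothetical knowledge scores under two media systems
market = knowledge_gap([8, 7, 9, 3, 2, 4], high_ses)
public_service = knowledge_gap([7, 8, 7, 6, 5, 6], high_ses)
print(f"market-model gap: {market:.1f}, public-service gap: {public_service:.1f}")
```

The hard part the abstract hints at isn't this arithmetic; it's ruling out all the other cross-national differences before crediting the media system for a smaller gap.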
Friday, August 21, 2009
Knowledge Quotes
We are drowning in information but starved for knowledge.
- John Naisbitt
Megatrends: Ten New Directions
Transforming Our Lives
I like pulling quotations and then riffing off them. This book was written in the early 1980s, before the Internet entered most people's lives. So why this quotation? With the Net, with the fragmenting of the media marketplace, we're certainly drowning in information. Compared to today, back in 1980 we were merely wading in information, maybe to our ankles or knees. Today we're up to our chins. We're treading as best we can, and probably going under any minute now.
We have access to so much information, and yet we seem to know less and less.
Technology swamps us with access to facts and opinion and stuff, and then someone comes along with technology to help us organize this stuff -- at a price. An iPod does that. A smart phone too. We get better and better at finding stuff out, but we get lousier and lousier at learning. The above quotation means so much today, probably even more tomorrow. I can "google" something, but does that make me knowledgeable? Or just a good typist?
Knowing Your Town
I had my public affairs reporting students read this article about how disconnected many journalists are from the places they live and cover for their news organizations -- and how the news they produce can suffer as a consequence. Then I rattled off in class some places in Athens and asked if they'd visited or knew where they were, like Memorial Park, which sits less than a mile off the edge of campus. Only about two of 16 knew where the park was.
In a society where people move around a lot, that sense of place can suffer, especially if you don't join community organizations and link yourself up somehow with where you live (ala Bowling Alone and the whole social capital argument).
My point? What people know about their homes, their neighborhoods and their communities, must somehow ring true in the kinds of stories we craft as journalists. That's damn hard to do as a reporter if all you know is where you live, the concrete-and-brick buildings of your beat, your news organization's office, and the bar with the cheapest drinks at happy hour. I worked for three daily newspapers in three different states, and I admit to not being a joiner, not knowing much about the places I covered beyond my beat. Yeah, I could tell you which local politicians hated each other, or where most crimes happened, but I didn't know the places. I suspect my stories suffered as a consequence. And what people knew was, my stories -- good as they were technically -- didn't always ring true.
Labels:
bowling alone,
connectedness,
journalism,
social capital
Thursday, August 20, 2009
Civics, Youth, and Voting
An interesting blog post that touches on a study I discussed earlier, and looks at engaging youth in the political process.
Labels:
political knowledge,
political participation,
youth
SLOPpy Polls
A SLOP is a "self-selected opinion poll," most often seen on TV where some talking-head host asks you to vote via phone or online on some issue. As AAPOR points out, "there is no control over respondent selection because anyone can call in an opinion." So don't be surprised when watchers of Lou Dobbs, for example, all call in and skew the program's "poll" results to favor -- surprisingly -- what Dobbs "thinks."
So as a measure of what people think, it's a load of complete crap.
And that brings us to this bit below from funnyman Jon Stewart. Thanks to Nick Browning for pointing this one out to me. As is often the case, Stewart sums it all up and makes it funny to boot.
[Embedded clip: The Daily Show with Jon Stewart, "Poll Bearers," via thedailyshow.com]
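To make the self-selection problem concrete, here's a toy simulation -- every number in it (the 40 percent true support, the call-in rates) is made up purely for illustration. When opponents of a policy are five times as likely to phone in as supporters, the call-in "poll" lands nowhere near the truth:

```python
import random

random.seed(42)

# Hypothetical population: 40 percent favor some policy (made-up number).
TRUE_SUPPORT = 0.40
population = [random.random() < TRUE_SUPPORT for _ in range(100_000)]

# A probability-sample poll: everyone is equally likely to be contacted.
random_sample = random.sample(population, 1000)
print(f"Random sample: {sum(random_sample) / len(random_sample):.0%} favor")

# A SLOP: respondents select themselves. Suppose opponents are five
# times as likely to bother calling in as supporters.
def calls_in(favors: bool) -> bool:
    rate = 0.02 if favors else 0.10
    return random.random() < rate

callers = [person for person in population if calls_in(person)]
print(f"Call-in 'poll': {sum(callers) / len(callers):.0%} favor")
```

The random sample lands within a few points of 40 percent; the call-in version craters into the low teens. And no amount of sample size fixes it -- more callers just estimate the wrong quantity more precisely.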
Wednesday, August 19, 2009
Americans, Congress,
and Democratic Responsiveness
I've been skimming a new book by David Jones and Monika McDermott (image to the right) that looks at Americans and Congress and how the two interact. Chapter 3 is of particular importance to me since it explores what people know about Congress and the consequences of that knowledge.
You can typically divide scholars of political knowledge into two camps. There are those who think the public fails to meet its democratic responsibilities of being informed, and there are those who think the public manages quite nicely, thank you very much, and any failure has more to do with the kinds of questions we ask or the ways we ask them.
Jones and McDermott fall into the second group.
The authors make some good points in criticizing the kinds of questions used to tap the public's knowledge, but these criticisms are nothing new. Even people who use them are critical of their limitations. I'll probably discuss the book again this week, but one point sticks out in my mind: the traditional "which party controls Congress" question often used in creating an index of political knowledge. They make a good point that post-election surveys asking people to remember which party controlled the House or Senate before the election are probably an unfair test. People do lousy on these, and with good reason. "Such a retrospective fact question is a particularly challenging test for respondents," they write on page 47.
This gets into models of how people learn, remember, and retrieve information. A better question, they argue, is the prospective one that asks which party is about to control Congress. "The information is salient," they say, "having been amply covered in the media as well as very recent, and should therefore still be readily available to citizens."
Their analyses "provide at least a ray of hope" in the public's ability to perform its democratic duties, they conclude. The book's overall conclusions are equally optimistic. If time in a busy semester allows, I'll dig deeper and write more. Or you can just get the book yourself and check it out.
Exercise and Weight, Take 2
Time magazine has apparently taken some exercised heat over a recent article that questioned the usefulness of exercise in losing weight. I blogged about the original article a few days back. Simon Owens of Bloggasm describes the backlash here with links and thoughtful discussion. Thanks to him for pointing the controversy out to me.
Tuesday, August 18, 2009
Polls and Health Care
The health care debate? I'd love to avoid this one, what with all the town halls and screaming and lies and damn lies and occasional statistics, but here I am, writing about it.
So I'm skimming the polls about health care and a couple of them stick out for lots of -- as you'll see -- obvious reasons.
Here they are:
- Fox News has polls that show in July, 36 percent, and in August, 34 percent, favor health care reform legislation.
- CNN has polls that show in June, 51 percent, and in July/August, 50 percent favor health care reform legislation.
When attempts to find out what people think turn out so differently, we often look at how the sample was drawn or how the questions were asked. I've little time to get into the sampling thing (but it looks okay), so let's look at the questions.
- FOX: "Based on what you know about the health care reform legislation being considered right now, do you favor or oppose the plan?"
- CNN: "From everything you have heard or read so far, do you favor or oppose Barack Obama's plan to reform health care?"

No real difference that I can see, which is troubling. Is it coincidence that FOX's poll seems negative and CNN's poll seems positive toward health care reform? I dunno, but I'd love to see the order of the questions -- what came before these and how they may have primed the responses. There's something fishy here, on one side or the other.
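For scale, here's a back-of-the-envelope check of whether sampling error alone could plausibly produce a gap like this. The sample sizes are my assumption -- the posts don't report them, but roughly 900 is typical for these national phone polls:

```python
from math import sqrt

# Hypothetical sample sizes (assumed, not reported in the polls cited).
p1, n1 = 0.34, 900   # FOX, August: percent favoring reform
p2, n2 = 0.50, 900   # CNN, July/August: percent favoring reform

# Standard error of the difference between two independent proportions.
se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z = (p2 - p1) / se

print(f"difference: {p2 - p1:.0%}, standard error: {se:.1%}, z = {z:.1f}")
```

The z statistic comes out far beyond 2, meaning sampling error alone can't credibly explain a 16-point gap -- which points back at question wording and question order.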
Labels:
cnn,
fox news,
health care,
public opinion polls
Monday, August 17, 2009
More Coding Fun
Just adding to my post below, this time about answers to "who is John Roberts?" Of course you know he's Chief Justice of the United States. A few favorite answers from the 2008 ANES pre- and post-election surveys:
- "president of Australia"
- "Australian prime minister" (John Howard had been PM, so perhaps that explains the confusion here)
- "director of the cia"
- A million versions of "never heard of him" or "I don't know."
- "on Supreme Ct." This is an interesting one. Coded as correct?
- "newscater for cnn" Misspelling included, but an interesting response. John Roberts is the morning guy for CNN, so understandable. Surprised not more of these.
Labels:
ANES,
john roberts,
political knowledge,
response coding
More Coding Fun
I blogged here, and a day later here, about the 2008 ANES open-ended answers and the difficulties of coding them when it comes to political knowledge. I used Nancy Pelosi as an example and asked whether it's correct or incorrect to call her merely a "member of the House" as opposed to the more specific "Speaker of the House."
Today, a few of the other questions asked of respondents. Asked who was Dick Cheney, a zillion people got it right. Vice president. Favorite answer: "vp he's a jerk. don't give him a gun." I'd count that as correct, I suppose, just because of the "vp" part.
Asked "who is Gordon Brown" (the PM of the UK), however, people flaked out. Favorite answers below in quote marks, my snotty comments in italics:
- "Who is Gordon Brown?" I suppose if you don't know the answer, repeat the question.
- "works in the white house? james brown relative?" I feel goood!
- "small business administration." Not sure where the hell that comes from.
- "FEMA" You're doing a heckuva job, Brownie. Er, wrong Brownie.
- "Governmental. Is he even a person?" Yeah, kinda, though Tories might disagree.
Sunday, August 16, 2009
Titular Colonicity Revisited -- Kinda
I've blogged before about titular colonicity, the use of colons in titles of academic work. A field's growing reliance on colons to sound, well, more academic is how you know it has "matured." Apparently people pay attention not only to colons, but also to question marks. This study looks at the use of question marks in journal article titles. The authors found a "significant increase" in the use of question marks in studies of physics, life sciences, and medicine.
Okay, so they've looked at colons and question marks. Periods don't make much sense, so that leaves me with an intimate study of the semicolon in titles of academic studies.
Hey, you gotta publish, or perish, even if it's dumb publishing.
Labels:
academic publishing,
semicolon,
titular colonicity
Friday, August 14, 2009
Youth, Participation, and Knowledge
There was greater political participation by young people in the 2008 presidential election, but that did not necessarily translate into greater political knowledge, according to a new study (press release here).
First, some caveats. This study was done by graduate students and presented at the recent AEJMC convention (I'm a member), which I skipped this year in Boston, so I can't say a lot about the method. We can either read the press release or wait until the papers themselves become available online. It's basically a study of undergraduates -- who, as we all know, barely qualify as real people. And there's this little bit of info near the end of the release: "the study has limitations, particularly since the students were not selected from a random sample." Yup. Okay for a conference paper, harder to sell as a journal article.
Here's an interesting quote from the press release:
"We found that the students were really politically active," York said. "They talked about the campaigns with their friends, and a lot of people got online on a social networking site to talk about the campaigns. Not many wrote blogs, but a considerable amount kept up with blogs."

This gets at the difference in what counts as participation. We'd normally define it as attending rallies, contributing money, putting a bumpersticker on your car, that sort of traditional thing, but there is movement to include social networking and other Internet-based activities as participation.

"The measure for political knowledge was similar to a current events quiz with questions like the name of the U.S. secretary of defense," according to the release. That's okay too, though as I've discussed many times before in previous blogs, it's just one form of political knowledge.
"The measure for political knowledge was similar to a current events quiz with questions like the name of the U.S. secretary of defense," according to the release. That's okay too, though as I've discussed many times before in previous blogs, it's just one form of political knowledge.
A Specificity Index
for Open-Ended Coding
I blogged yesterday (see below) about problems in coding open-ended responses to survey political knowledge questions. I used the Nancy Pelosi question as an example. The "correct" response, from a scholarly standpoint, would have respondents identify her as Speaker of the House, but I also argued that it was equally correct to identify her as a congresswoman, a member of the House, and a lot of other answers that in the past would have been coded as "incorrect" in the American National Election Studies dataset.
Go back to yesterday's post for links to ANES data, the newly released raw open-ended responses, and other important points of interest, especially problems with earlier coding.
We don't know exactly how ANES staff will code these answers, but when they release the next version of the 2008 pre- and post-election data, I'll do a comparison. Today I offer an alternative specificity approach to coding. It's simple. Anything resembling "Speaker of the House," given its specificity, gets coded as the highest, most correct, response. Let's call it a "3" for the sake of argument. Identifying Pelosi as a member of the House, while correct, loses that specificity, so it gets a "2." Calling her a politician or something similar, that's correct in a vague sort of way, so it scores a "1." And getting it wrong, that's a "0."

Missing and refusals get their own special codes. Scholars typically recode a refusal to be the same as an "incorrect" response. That's a different problem for a different day.
My specificity method provides greater data range. If someone doesn't like it, they can collapse the resulting codes into any method that strikes them as useful, especially if they're comparing answers in 2008 with some previous year.
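Here's a minimal sketch of that 0-to-3 specificity scheme in code. The keyword lists are my own rough guesses for illustration, not anything ANES uses, and a real coder would need to handle misspellings and trickier phrasings:

```python
def specificity_code(response: str) -> int:
    """Score an open-ended 'Who is Nancy Pelosi?' answer on a 0-3
    specificity scale: 3 = Speaker, 2 = member of the House,
    1 = vaguely correct, 0 = incorrect."""
    text = response.lower()
    if "speaker" in text:
        return 3
    # Correct but less specific: identifies her as a member of Congress.
    if any(k in text for k in ("house", "congress", "representative")):
        return 2
    # Correct only in a vague sort of way.
    if any(k in text for k in ("politician", "democrat", "government")):
        return 1
    return 0

for answer in ("Speaker of the House",
               "congresswoman in Ca. 8th district",
               "some politician",
               "governor of Alaska"):
    print(answer, "->", specificity_code(answer))
```

And anyone who prefers the old dichotomy can simply collapse the codes: treat 2 and 3 as "correct," 0 and 1 as "incorrect."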
Thursday, August 13, 2009
Coding Responses
The American National Election Studies released this week the redacted responses to various open-ended questions -- including four political knowledge items -- from the 2008 study (study page here if you'd like to download the Excel file). These are the standard office recognition questions that provide a name and ask respondents to identify "what political office does he/she now hold?" The political figures were John Roberts, Dick Cheney, Gordon Brown, and Nancy Pelosi.
Later, ANES will code these as correct or incorrect, but as some of you know there have been problems with earlier coding of responses and I've blogged about this in the past here and here. ANES has its own report. Basically, some answers that should have been coded as "correct" were instead coded as "incorrect," thus deflating the public's actual knowledge as reported in these influential surveys. Some scholars studying knowledge about the U.S. Supreme Court uncovered the problem.
Okay, back to the present. There's a nice Excel file with all the various responses. It's time for a coding lesson, children, as in -- how do I code this? Take, for example, the question about Nancy Pelosi, who was (and is) Speaker of the House. Are the following responses correct or incorrect? I've kept the misspellings.
- "she is a congressmand out of California"
- "congresswoman in Ca. 8th district"
- "majority leader in congress, democrat"
- "house of representative member"
- "specker of the house"
- "governor of Alaska, John's running mate."
It'll be interesting to see how ANES staff code these in the next release. When that happens, I promise to compare these responses with their correct/incorrect dichotomy and report back, because honestly I don't know how to handle them myself. I have a few ideas and I may share some in a later post, but it has to do with a specificity scale (more later). If anyone has suggestions, lemme know. I'd be happy to hear them.
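One wrinkle the list above makes obvious: "specker of the house" is clearly a correct answer, but exact string matching would miss it. If I were machine-coding these -- and to be clear, this is just my own sketch, not how ANES does it -- I'd want some fuzzy matching:

```python
from difflib import SequenceMatcher

def is_correct(response: str, key: str = "speaker of the house",
               cutoff: float = 0.8) -> bool:
    """Dichotomous coding with fuzzy matching, so misspellings still
    count as correct. The cutoff is a tunable similarity threshold."""
    text = response.lower().strip()
    return SequenceMatcher(None, text, key).ratio() >= cutoff

print(is_correct("specker of the house"))   # True -- misspelled but close
print(is_correct("governor of Alaska"))     # False
```

The cutoff would need validation against hand-coded answers: set it too low and vaguely similar wrong answers start sliding toward "correct" for the wrong reason.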
Labels:
ANES,
measurement,
political knowledge,
response coding
Actual and Perceived Knowledge
Interest in the difference between what people know (actual knowledge) and what people think they know (perceived knowledge) crosses a number of scholarly disciplines. I've done work on whether listening to talk radio or TV faux news shows increases knowledge or merely the perception that one is informed.
I'm not alone in my fascination with the difference between actual and perceived knowledge.
For example, this study finds elementary school teachers often overestimate what they know about literacy development in children. There's similar work in pharmacy, counseling, and political science (the latter gets into some fascinating gender differences and I hope to write more about this tomorrow).
My point? What people think they know can have a number of consequences, and thus should be studied alongside any standard measure of knowledge. What consequences? If a person feels informed, he or she may engage in early closure -- not seek out additional information, perhaps reach a faulty conclusion. People may fill up on "empty calories" of junk news or entertainment-based news and think themselves informed when perhaps they're not.
Perceived knowledge can lead to greater feelings of efficacy. That's the good. But if those feelings of efficacy are based on weak, insubstantial actual knowledge, you have to worry about any final judgments -- be it in the classroom with teachers, or in the voting booth by citizens. Or in any number of possible scenarios, from doctors to government workers.
I've always wanted to dig deeper into the theoretical possibilities of actual versus perceived knowledge, perhaps even write a good NSF grant aimed at what the consequences of this are for democracy. Maybe I will. Some day. But I firmly believe that if you're studying knowledge, you need to also study perceived knowledge.
Wednesday, August 12, 2009
Obama's Slide
It had to happen, even to Barack Obama. The presidential honeymoon is gone and we're starting to see the typical slide in popularity for a new sitting president. His "approve" numbers have dropped 9 percentage points since February, according to a Pew Center report. A CNN survey had him at 78 percent favorability in January. Now he's at 64 percent. Still not bad, but a drop is a drop in political utility, especially when you're also trying to reform health care.
No, I'm not gonna discuss the health care town hall mess. It's all over cable. Go there. But it does fit into Obama's slide in positive numbers, that and the pounding he's taking at Fox or by the radio talkmeisters who have fueled most of the townhall people.
To be fair, George W. Bush was in the low-to-mid 60s early in his presidency through this same period, about August, so he outperforms Obama. He drops a little by Fall, but of course 9/11 changed the math and he shot up to a high of 87 percent "favorable" in late 2001. Comparisons unravel in a hurry.
Tuesday, August 11, 2009
Memory and Recognition
Dunno why, but I find this fascinating -- a study of how well people of different ages recognize faces from either contemporary or historical figures. Younger subjects did better recognizing contemporary famous people, older subjects did better with "dated" famous people. No real surprise. What I find fascinating is how young adults better recalled "young" versus "old" familiar faces, and younger adults performed better than older ones when it came to "young" unfamiliar faces.
What's it all mean? As we get older, prior knowledge becomes harder to access to help us make sense of what we see.
How does this fit what people know? Turn this into a question of political knowledge and recognition of political actors and you see how older citizens, who often do well on these tests and have superior prior knowledge (based in part on experience), will struggle to make sense of new information as they have a hard time making use of all that experience and prior knowledge. It's there, but they have a hard time finding the facts or names to fit a familiar face -- be it politician or someone else in the news. As the news audience ages, especially for broadcast TV news but also for newspapers, we need to rethink how we tell stories to help the aging audience with the realities of difficult-to-access prior knowledge. It's there, we just need to learn ways to help them trigger that knowledge so they can better make sense of the stories we're telling.
News, Opinion, and the old Editorial Page
Since the age of dinosaurs, newspapers have had "opinion" or "editorial" pages. Readers would find "institutional editorials" to represent the opinion of the paper, "columns" to represent opinions of individuals (often journalists), "letters-to-the-editor" to represent readers, and an "editorial cartoon" to represent art-as-opinion.
News dominated the "A-section" and theoretically was devoid of opinion by journalists, but near the end of the front section you'd often find the editorial page -- and opinion.
TV news rarely included editorial opinion. A few local stations had some kook who'd come on at the end and share, but rarely did the networks do so (60 Minutes being an exception, but that was a news magazine, not a newscast).
We've reversed this approach.
Now, opinion seems to be the "A-section" of most cable news programming, with a little news tucked in here and there. Certainly that's the Fox News and MSNBC approach, and to a lesser degree (and less successfully) one pursued by CNN. Newspapers still have a front section and opinion pages, but the average reader looks at it and wonders "what's the difference?" True or not, it's a damn good question.
Bloggers often start with the assumption that there's no difference between news and opinion, that "truth" is the ultimate goal. As if journalists don't also try to get at "the best obtainable version of the truth." One of the differences is "obtainable." Journalists gather information; bloggers often react to information (yes, there are exceptions -- some damn good bloggers out there actually walk and talk and leave the safety of their mom's basement to find stuff out -- but I'm in no mood to get subtle).
The line between news and entertainment has blurred, not only in practice, but also in the minds of many Americans. The line between news and opinion? Same line. Blurred.
Is this a bad thing?
To old traditionally trained journalism guys like me, the answer should be YES! But I'm not so sure. I'm still wrestling with this, balancing the practical against the philosophical.
I now divide the world not into mainstream media and non-mainstream media, but into two other camps: Rational Media and Irrational Media. I'll get more into this dichotomy on another post, but rational is not merely in the eyes of the beholder. And I'm not entirely convinced my reasoning here won't unravel if one smart person tugs at a thread, but then again that's what this blog is for, to test ideas. But I'll give you a hint. Irrational Media include not only Fox News (easy) but also Jon Stewart. It includes Lou Dobbs and Rush Limbaugh and Sean Hannity (easy), but it also includes others that will make conservatives and liberals angry. I'm a radical moderate, so it's not like I care.
Labels: editorial page, newspapers, opinion page, tv news
Monday, August 10, 2009
Death of Polling?
What people know about what others think often comes from polls, but a column today describes how at least one pollster thinks telephone polling will die by 2012.
Spoiled Brats
We all know a spoiled brat when we see one -- if it's someone else's kid. We see spoiled kids on TV, in movies, everywhere. Wanna know how to tell if your own little darling is one of the spoiled? Article here. Has an interesting little test involving two chairs in a waiting room. But best of all, below is a great youtube video of a spoiled brat.
Metamemory
Stick meta in front of a word and we move a step back. It becomes thinking about that word, whatever it is. Too PhDweebish for ya? Here's an easy example. Put meta in front of cognition and metacognition is thinking about thinking. Metamemory, according to a study I read this morning, is belief in one's own memory efficiency -- kinda thinking about your memory.
Yeah, I'm getting older, so I wonder how long the old brainpan is gonna work.
This also gets into feeling of knowing. We've all suffered this one. Some call it the tip-of-the-tongue phenomenon. Or maybe they're different. Feeling of knowing is just what it sounds like: you know something, but you can't quite remember. It's like an annoying itch you can't quite reach. Tip-of-the-tongue is similar, a word or phrase or name that's right there, on the tip of your tongue, but you just can't quite grab it. There are probably conceptual and theoretical differences between the two, but it's too early, and I'm not caffeinated enough, to tease them out here.
Why am I going on about this? In part to demonstrate that there's a wide world of research out there on memory and the different ways it goes wrong, which can definitely play a role in how those of us who study political knowledge deal with survey respondents or experimental subjects. We know that TV news, for example, does a bit better with recognition of political information but not so well with recall of facts. There's a nagging feeling of knowing, a tip-of-the-tongue example for you right there. TV news alone probably leads to greater feelings that you know something, because of haphazard exposure to TV news while doing something else, and thus to better recognition of information as opposed to easy recall. What might be interesting to study is any frustration that emerges from that nagging feeling that you know something but can't quite yank it out of trace memory because you watched it on the boob tube rather than read it online or on paper (thus, deeper processing).
Sunday, August 9, 2009
Exercise and Weight
We've been told over and over -- if you exercise (and eat right), you'll lose weight. Not so much, or so says a Time Magazine article (image from the site at the right). The online version is a bit different than the print version, which I read this weekend. But the result is the same ... yeah, exercising is good for you, but it's what you eat that counts. And a burst of exercise, that twenty minutes you do a day on some torture machine, that is not the way to go if you want to lose weight. And even more important, after exercising many people decide they need a treat -- a muffin, or some caloric catastrophe from Starbucks.
Why are we always told to exercise? It's good for ya, sure, but there's a whole industry built around going to the gym or buying crap for your house that ends up more of a place to hang clothes than actually use. But we don't want to upset the food industry all that much, so what people know about losing weight is more about spending time at the gym rather than not eating at McDonald's.
Saturday, August 8, 2009
Journalism Jobs
In a shout out, colleagues Lee Becker and Tudor Vlad released the annual job and salary survey this week at AEJMC. Becker's been doing this survey forever, or at least since about 1986 (same thing, right?). There are various stories about the survey -- Editor & Publisher has one, for example, as does U.S. News and World Report and one of my favorite publications, The Red & Black. The lede is the same: bad news for journalism/mass comm grads when it comes to jobs and money. PR grads have the best chance of finding a job. Journalism, not so much. According to the report:
"Only six in 10 of the graduates had full-time employment six to eight months after graduation," noted the UGA researchers in the report. That is the lowest level of full-time employment reported in the 23-year modern history of the annual survey.
The complete report is available here. Read it and weep.
Labels: cox center, journalism jobs, lee becker, tudor vlad
Friday, August 7, 2009
Credit Scores
One of the things people want to know is their credit score, and if they don't want to know it, then by god someone on TV will try to convince them they want to know it. Along those lines, The NYTimes has apparently canned Ben Stein for shilling for one of those operations that charge you for something you can get, at least once a year, for free. Stein, a funny guy and economist and actor ("Bueller ... Bueller ... Bueller"), will no longer write a column for the Times, given the ethical problems with associating himself with these credit guys.
I was also disappointed to learn that the three-guy band for "free credit report dot com" don't actually sing, don't actually play instruments, don't do much of anything at all except wear pirate outfits or ride roller coasters or -- in the latest one below, dress like cowboys -- and try to sell us this crap. Sigh.
Do People Learn from Faux News?
It's been a question raised by scholars and by those somewhat, um, less scholarly -- do people learn from entertainment faux news programs such as the funny stuff by Jon Stewart and Stephen Colbert? The answer so far has been mixed. Programs increase feelings of efficacy but decrease trust, seem to aid learning or perhaps not. They influence how people process the news, what factors they consider, or sometimes not.
So I'm re-reading a study in Journal of Communication by Kim and Vishak (story about it here). I blogged about it earlier, but I wanted to re-read the thing and see if any fresh ideas emerge other than what I blogged on earlier. The basics remain the same: mainstream news media exposure led to greater and more accurate knowledge in a controlled experiment than did The Daily Show.
What seems to explain the difference is the goal of the user. This gets a bit into uses and gratifications research. If my goal in watching Stewart is to be entertained, my brain engages with the program in a very different way than it does with CNN or reading news online. Let's face it, watching CNN is a lot of things, but entertaining ain't one of 'em. If my goal is to laugh at current events and politicians, then the brain is not in "learning" mode, thus actual learning decreases and the accuracy of what I remember suffers as well.
So it's in part what we bring to the TV. Yes, TV news suffers from delusions of adequacy, but when we catch up with the news we're in a different frame of mind, one willing to absorb and retain information -- if only a little since it's merely TV after all. When we sit to watch Colbert or Stewart be funny, it's a completely different take, even if people do say in surveys they watch such programs to keep up with the news.
Thursday, August 6, 2009
Cool Stuff: MemeTracker
Check out MemeTracker, a neat visualization of news coverage. Scroll down the first page to see some of the other applications with these data.
This is all part of the growing field of statistically analyzing the boatloads of data out there to help make sense of what we do, what we think, and how we respond on the Net. And it's just fun to play with. I like the "top phrases" graph. Run your mouse over the graphs and the phrase pops out. Cool.
What's this all mean? Lots of stuff, but it'll be an interesting portrait of what the media are talking about, what bloggers are talking about, what news people are talking about, another piece of the puzzle along with opinion polls and all the rest. What this fails to do, of course, is get at the meat of the matter. It's a picture, a snapshot, but that's about it. We don't so much learn from this as we see what we've been talking about. Still, a lot of fun to play with, and the first of many steps that hopefully will take us from what we're talking about to what we know.
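For the curious, the counting idea behind a "top phrases" graph can be sketched in a few lines. This is a toy, not MemeTracker's actual pipeline (the real thing clusters mutating quotes across millions of articles); the headlines below are made up for illustration.

```python
# Toy "top phrases" counter: tally short phrases (bigrams) across
# a handful of invented headlines. Shows the counting idea only.
from collections import Counter

headlines = [
    "lipstick on a pig",
    "palin says lipstick on a pig remark was fair",
    "obama defends lipstick on a pig comment",
]

def bigrams(text):
    """Return all two-word phrases in a lowercased text."""
    words = text.lower().split()
    return [" ".join(pair) for pair in zip(words, words[1:])]

counts = Counter(p for h in headlines for p in bigrams(h))
print(counts.most_common(3))   # the "lipstick on a pig" bigrams lead
```

Scale that up to every news story and blog post on the web, add phrase clustering and a time axis, and you have something like the graphs on the site.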
Wednesday, August 5, 2009
Odds and Ends
Favorite odds and ends from today's news:
Passalong PR
A NYT story on passalong pr (my term). It goes like this. Coal producers and power companies have a trade group that hires a lobbying firm that hires a different lobbying firm that has a brainiac staffer who sends fake letters to lawmakers pretending that they're from nonprofit groups opposed to climate change laws. The brainiac gets fired, we're told, thus absolving anyone else of responsibility. So you pr guys out there, remember to hire someone to hire someone who then gets some low-level schmuck to fake stuff and then fall on his or her sword. Plausible deniability is a wonderful thing, especially when you make stuff up in order to persuade people.
More PR -- Kinda
PR, or lobbying, or trying to influence opinion -- whatever it's called, there's also a good story about a pharmaceutical company hiring ghostwriters to gin up fake research reviews to say good things about its hormone replacement therapy. This is about as bad as it gets in academe, faking research or at least faking a review of the research. It came out as part of a lawsuit.
Third Person Effect and Driving While Yakking
Summit coming soon on driving while distracted, the new phrase to describe talking or texting on that annoying cell phone while trying not to kill someone while driving.
So how the heck do I work the third-person effect into all this? First, a brief definition: the third-person effect means we think media content doesn't affect us, but it does affect others. In a way this is reported in a story today about the potential of new laws aimed at stopping people from driving while, yup, distracted. Deep in the story, a survey by AAA found that "58 percent of drivers consider other motorists talking on a cellphone to be a very serious threat" and 87 percent think texting is dangerous. But ... and here's the third-person part ... 67 percent said they had yakked on a phone while driving, and 21 percent had texted. Sheesh. To summarize: do as I say, not as I do.
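Just for fun, here's the say-versus-do gap from those AAA percentages, worked out in a few lines. The four numbers come straight from the survey quoted above; everything else is just arithmetic.

```python
# Perception-vs-behavior gap from the AAA figures quoted above
# (percentages from the survey; nothing else is from the study).
perceive_threat = {"talking": 58, "texting": 87}  # % calling it a very serious threat
admit_doing     = {"talking": 67, "texting": 21}  # % admitting they do it

for behavior in perceive_threat:
    gap = admit_doing[behavior] - perceive_threat[behavior]
    print(f"{behavior}: threat {perceive_threat[behavior]}%, "
          f"admit {admit_doing[behavior]}%, gap {gap:+d} points")
```

For talking, more people admit doing it than call it a serious threat; for texting, the gap runs the other way. Either way, the danger is something *other* drivers pose.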
A Story NOT from the NYTimes
Okay, the stuff above was drawn from today's NYTimes, so I feel obligated to draw something from another source just so I can pretend I'm a balanced, reasoned, Renaissance kind of guy. This one kinda sums up the military's relationship with social networking. The piece here says that while the Marines have banned Twitter and Facebook and other sites for one year, the chairman of the Joint Chiefs will continue to Tweet, thank you very much. Wall Street Journal version here. Below, a video version of the story:
The Internet Kills Knowledge?
Much like earlier Atlantic articles on whether Google is making us smarter or dumber, a New York Times piece -- based on this article -- gets into the question as well. Good reading, both.
In part, this has everything to do with facts. Does it matter that we can immediately recall some fact as compared to being able to quickly and efficiently find some fact? Are we raising a generation of people who know less, but who are more capable of finding stuff out? Some say it's no problem. We're creating a generation, through Google, of integrators, of people who know where to find stuff out and how to link it all together to make sense of some issue, question, or problem. Others worry that without basic knowledge, an underpinning of common understanding, new information means very little and cannot be integrated, no matter how quickly you can Google some fact.
From the piece mentioned above:
A certain lack of general knowledge—what some might call ignorance—is thus built into the system, and will be more so in the future. My Googling undergraduates are doing something they may have been encouraged to do at school.
This may be one of the most important topics in education, and what people know, for quite some time. Is it a scary world? Not so much. Writing, too, was feared to create a forgetfulness in the public, an inability to learn and instead rely on words scratched on parchment. And we all know that turned out to be a pretty good idea. So a world in which basic facts and information are at our fingertips, so goes the reasoning, frees us to think more deeply about what we are learning and to better match it with other information. To integrate. To learn wisely.
I'm wanting to buy into this. Really I am.
This article quoted above is excellent, and I strongly recommend it. So good that I'm going to lift, with due credit to Brian Cathcart, a professor of journalism at Kingston in the U.K., the end of the article:
There will always be dimwits, and their feats of stupidity will always make news. Equally, there will always be teachers and parents who shake their heads at the supposed ignorance of the young. We need to be careful before we construct trends from such things. But the internet is different, and it lifts the discussion onto a different plane. We are bound to tap into it for general knowledge, and the young will do it first. Schools are surely right to encourage them. The story of Thoth tells us that the curmudgeonly response—“This invention will produce forgetfulness in the souls of those who have learned it”—is a waste of breath.But equally, the extraordinary popularity of the quiz in the mass-communication age suggests that general knowledge, the idea of a pool of information shared within a culture and a time, is potent enough to survive.
Tuesday, August 4, 2009
Moody Songs and Blogs
A terrific story today in The New York Times science section about gauging the nation's mood not through traditional surveys asking people how they feel but rather through the kinds of songs they craft or the blogs they write. Or as a pullout says: "Looking for clues to well-being in what we sing and say." The image to the right is hotlinked from the Times story and nicely sums up what it has to say.
A couple of statisticians at the University of Vermont have created an analysis strategy that taps into our sense of well-being in a field called mass psychology. As they note, every methodology has its drawbacks. Asking people about their well-being may actually influence it, so this is an indirect approach that skips asking people and instead looks at what they do or say.
The Gallup Well-Being Index is a traditional, survey-based approach, and a damned good one too. Play with the tabs. Much fun.
I don't want to go deeper into the NYT story. Read it for yourself if you're interested in what people know or what they reveal about their own well-being through a less-than-traditional-but-awfully-cool approach. Yes, I'm suffering from methodological envy. Why didn't I think of this?
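For flavor, here's a minimal sketch of that indirect approach: score a snippet of text by averaging the emotional valence of its words. The tiny word list and its scores below are made up for illustration -- the Vermont team's actual method uses a large lexicon of human-rated words, and I'm not claiming this matches it.

```python
# A minimal sketch of word-valence mood scoring, in the spirit of the
# indirect approach described above. The VALENCE dictionary is invented
# for illustration (higher = happier), not the researchers' real lexicon.
VALENCE = {"love": 8.7, "happy": 8.5, "sunshine": 7.9,
           "rain": 4.4, "lonely": 2.2, "hate": 2.1}

def mood_score(text):
    """Average the valence of rated words in a text; None if no rated words."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    scores = [VALENCE[w] for w in words if w in VALENCE]
    return sum(scores) / len(scores) if scores else None

print(mood_score("I love the sunshine"))  # a happier lyric scores higher
print(mood_score("Lonely in the rain"))   # a sadder lyric scores lower
```

Run that over decades of song lyrics or millions of blog posts and you get a mood curve without ever asking anyone how they feel -- which is the whole point.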
Science, Framing, and the Media
An interesting blog post examines how important the "framing" of science is for public understanding. Social scientists are "critical friends" of other scientists, according to the post, and the "picture we present" to the public can influence perceptions. All in all, interesting stuff.
The idea of "framing" is an old one in the literature. What is framing? Whatever the hell you want it to be, since there are about a million definitions. Like porn, we know it when we see it, but in general framing sets the way we understand a story. Is a story framed as little guy versus big guy, or is it framed as a morality piece? An election framed as a vote on the economy can turn out very different than an election framed on the safety of a nation (see 1992 and 2008 U.S. presidential elections as examples).
And people don't need a lot of information, or any information, to form an opinion, especially about science. A good 2005 study, for example, found that people use heuristics, or shortcuts, to make sense of science stories about nanotechnology depending in part on how they are framed by the media or political elite.
Monday, August 3, 2009
Polls
Public opinion polls are obviously one key way to learn what people know or think about some subject. The Washington Post has tightened its rules on reporting about polls. The Post's new regs are not out yet, but they will be "especially wary of unproven new polling techniques," according to the story. We're saturated by polls. Many are excellent. Some are complete crap. News organizations need to tell us when a poll is crap, and why. It'd be nice if bloggers did the same, even when they like the results of a poll to prove some partisan nutjob point.
The Word of 2009
My vote for the top word of 2009 is sustainable.
It's everywhere. For example, here's a recent USAToday story about college students flocking to sustainability degrees. Green -- not greed -- is good, or so says the story. And we talk a lot about finding a sustainable model for journalism as its traditional economic model of advertising and circulation unravels. There's even a Center for Sustainable Journalism at Kennesaw State University. I look forward to seeing what they come up with, once it truly gets rolling.
A Google News search for "sustainable" returns 40,979 hits, though to be fair, a lot of them have nothing to do with the recent use of the word. But a lot are tied to "green" in some way. By the way, search Google and you get 61.2 million hits, the first two from -- yes -- Wikipedia. So "sustainable" is, I would think, growing in the public mind. It'd be interesting to know how people in a survey respond to the word. Do they think "green" when they hear it? Do they think anything at all? Is the word loaded with positive or negative connotations?
My hunch? People will associate it with "green" and that association will result in the usual, tired, ideological breakdowns. Conservatives will shudder at the word (I do too sometimes, it's not a pretty word). Liberals and greenies in particular will smile, get a warm fuzzy, then go buy something organic. At some point "sustainable" will get ideolized -- a bad word as well, one that I made up on the fly to suggest it's become too politicized to have meaning.
Labels: sustainability, sustainable journalism, word of 2009
Sunday, August 2, 2009
Judge Souter and Civics Knowledge
Retired U.S. Supreme Court justice David Souter warned that a lack of civics knowledge poses a threat. Versions of the story all over the Net. An AP version here.
Polls show two-thirds of Americans can't name all three branches of government. "This is something to worry about," Souter said. "There is a danger to judicial independence when people have no understanding of how the judiciary fits into the constitutional scheme."
He's absolutely right, of course. The stability of a democratic government relies in part on a reservoir of good will by the public. When government screws up (which is often), it draws down the reservoir. When things go well, it gets replenished. But it's important people understand why the courts can make certain decisions, even if they don't agree with them politically or ideologically. Understanding the role of the courts is vital.
Saturday, August 1, 2009
What People Know ... about organic foods
There was some noise earlier this week about a study that found organic foods are no more nutritious than other foods. There's a version of the story here, though with a bit of searching you can find the same thing, more or less, all over the net.
The lede? A review of 150 studies found no real health benefits from eating organically grown food.
In other news, organic food will significantly damage your wallet.
To be fair, they're mostly talking nutrient content here, and that's hardly the reason people buy organic foods. We buy them on occasion in part to support an industry and local farmers, in part to avoid the chemicals that are used in non-organic foods, and in part I guess to make ourselves feel better. I never expected there to be a nutrient difference, though some individual studies do find better antioxidants and other factors in organic foods.
But will this study and its coverage influence what people know about food, and especially organic food? Not really, not so much. We're talking a very specific audience here (part yuppies and yippies, part informed consumers, and in all cases people willing to spend a little -- or a lot -- more for a carrot or hunk of meat). However, it may influence general public opinion about organic foods, which in turn could influence how willing government is to support organic growers through space for local farmer's markets or even in rules and regulations about that organic label. The study gives ammunition to people who are suspicious of anything without a corporate label, or of anything that smacks in any way of "liberal" thinking. I haven't checked, but I'd think Rush Limbaugh loves this study. He hates most other science, but he'll like this one.