You may have heard there is a fight for the Republican nomination. What's interesting to me is how bendable the truth seems to be for certain politicians. For fun, let's look at Politifact, the non-partisan fact-checking web site, to see how the various candidates stand up so far.
The site scores folks on a scale: True, Mostly True, Half True, Mostly False, False, and everyone's favorite for the really big fibs -- Pants on Fire.
How are the GOP hopefuls doing?
Newt Gingrich: He's been around a long time and so there's a nice fat record here of 59 evaluations of his comments. Of these, only 5 received a "True" (8.5 percent), the lowest rate among the candidates left standing. He's received "Pants on Fire" 10 times, a 16.9 percent rate of telling whoppers. Gingrich scores the lowest in truthfulness and the highest in burning pants.
Ron Paul: Of the 33 comments by Paul, seven were judged "True" (21.2 percent, the highest of the GOPers) and three were found so bad as to receive the Pants on Fire (9.1 percent). Congratulations, Rep. Paul, you're the head of the class in truthfulness, at least in terms of comments evaluated by Politifact.
Mitt Romney: The frontrunner's page has the most evaluated comments of the four GOP hopefuls (117), and of these he's had his pants on fire only 10 times (8.5 percent). He's been "true" 22 times, or 18.8 percent.
Rick Santorum: If nothing else, Santorum knows how to make headlines, especially of late. On his Politifact page, of his 34 comments evaluated by the site, only 4 scored a true (11.8 percent). He got 3 "Pants on Fire" (8.8 percent).
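Just to keep the arithmetic honest, here's a quick sketch that recomputes the percentages above from the raw counts. The counts are the ones quoted in this post; everything else is just division.

```python
# Recompute the "True" and "Pants on Fire" rates cited above from raw counts.
# Counts are the ones quoted in this post.
ratings = {
    "Gingrich": {"total": 59, "true": 5, "pants_on_fire": 10},
    "Paul":     {"total": 33, "true": 7, "pants_on_fire": 3},
    "Romney":   {"total": 117, "true": 22, "pants_on_fire": 10},
    "Santorum": {"total": 34, "true": 4, "pants_on_fire": 3},
}

for name, r in ratings.items():
    true_pct = 100.0 * r["true"] / r["total"]
    pof_pct = 100.0 * r["pants_on_fire"] / r["total"]
    print(f"{name}: {true_pct:.1f}% True, {pof_pct:.1f}% Pants on Fire")
```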
Let's be a bit fair here. Politifact only looks at comments likely to be a bit wacky, so the proportion of outright lies and falsehoods for all candidates is gonna skew higher than it would for everything actually spewing out of their mouths. And I'm not even getting into Obama because, well, it's still nomination time among the GOP candidates. We'll return to this when the general election rolls around. But the Politifact site does have a nice graph on Obama's promises and whether they were kept or not.
Wednesday, February 29, 2012
What Republicans Know -- Part 2
Tuesday I wrote about an interesting survey that examined, in part, how political knowledge is associated with who Republicans favor in the nominating process. I mentioned that I couldn't find the actual knowledge questions. Thankfully Dan Cassino, lead investigator of the study, saw my post and pointed me to the right place to find the info.
WARNING: Long post below.
You have to scroll down a bit, to page 4, to find the beginning of the questions. Let's take a closer look.
[First, a brief aside. In writing political knowledge questions, you want some to be more difficult than others to discriminate among respondents. But -- and this is important -- you want to avoid the impossible-to-answer questions. Finally, you want the questions to be unloaded, especially along partisan lines, unless of course that's what you're studying. Simply put, you don't want questions Republicans or Democrats might be naturally better at answering.]
Back to the survey questions. I'm not picking on this survey in particular -- it's a fine survey with some neat results -- but it uses enough different kinds of questions that we can learn something from it.
- The first two questions are "yes-no" types that ask whether the leaders of Egypt and Syria have been removed. Straightforward, simple, and roughly equivalent in difficulty. No odd partisan breakdowns in the results. About half of respondents got these right.
- The third question asks what country has done the most to financially bail out fellow European countries. It's difficult for me to tell if this is a recognition question (the list of countries provided) or a recall question (have to generate the correct answer -- Germany -- off the top of your head). Yes, the difference can matter (I'm finishing a paper on just this topic). This looks like free recall to me, given the other response alternatives. Only a quarter of respondents nailed this one.
- The fourth question asks what the sanctions against Iran are supposed to accomplish. Again, probably free recall, given that "Anything about Nuclear Program or Uranium enrichment or WMDs…" is listed as an accurate response. This is the fourth international question in a row (though I assume the questions were asked in the survey in random order, so no problem). Nearly half got this one correct.
- Question five asks what party controls the U.S. House of Representatives, a traditional question asked in most academic surveys (ANES, for example). I'm frankly surprised the Republicans didn't outperform Democrats even more on this one. Two-thirds of respondents knew this.
- The sixth question requires a cut-and-paste. It asks: "In December, House Republicans agreed to a short-term extension of a payroll tax cut, but only if President Obama agreed to do what? [Open-Ended]" The "open-ended" note makes me wonder if I was wrong above in guessing whether the previous questions were cued or free recall. This is a particularly difficult question, with 9 percent getting it right and 72 percent answering some form of "don't know." That's high.
- The next two questions ask specifically about the Republican primaries (who won Iowa and New Hampshire). In other words, GOPers are a lot more likely to answer these two correctly. Indeed this was the case by 12 and 17 percentage points.
- The final question is a tricky one. It asks what percentage of Americans are unemployed. Why tricky? Do we rely on the official number or the, arguably, real number -- which is significantly higher? This was also open-ended. Republicans did better than Democrats (on every question above, including this one) but their wrong answers skewed higher than Democratic wrong answers, no doubt reflecting GOP pessimism about President Barack Obama and the economy and ... if you listen to Fox News or Limbaugh or Hannity, the higher number is always stressed.
At least one question is probably too difficult (#6).
Overall, the public didn't perform particularly well. Twelve percent got no question correct and only 3 percent got all eight correct. It's that last number that worries me. While there's no hard and fast rule about this sort of thing, I'd like to see that higher -- both because we'd prefer, in a democracy, that more people know what's happening in their political world, and for simple methodological reasons. I think a relatively fair test of current events knowledge should result in at least 10 percent getting all eight items correct, but I invented that number out of thin air. I'd want to go back and check the Pew current event quizzes to see what percentage gets all their questions correct and use that as a guideline.
Finally, an interesting (to me) statistical bit of trivia. Democrat "leaners" did better on the questions than pure Democrats. What's a "leaner?" Traditionally we ask respondents for their party identification and if they name Republican or Democrat right off, they go to the end of the continuum (far left and far right). If they didn't name either party, we then follow up by asking which party they lean toward. These folks are categorized just to the inside of our hard-core Dems and GOPers. In the middle go our true Independents who held out after two (or sometimes more) questions and probes designed to force them to inch one way or the other. Why would leaners do better than party purists? Lots of reasons, but that's probably a post for another day.
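Since the leaner distinction comes up a lot in this kind of work, here's a minimal sketch of the branching party ID measure described above -- partisans at the ends, leaners just inside, holdouts in the middle. The function and category labels are my own illustration, not the survey's actual codebook.

```python
# Minimal sketch of the branching party ID measure described above.
# Category labels are illustrative; actual survey wording may differ.
def party_id_5pt(initial, lean=None):
    """1 = Democrat, 2 = Democratic leaner, 3 = pure Independent,
    4 = Republican leaner, 5 = Republican."""
    if initial == "Democrat":
        return 1
    if initial == "Republican":
        return 5
    # Didn't name a party right off: follow up with a lean probe.
    if lean == "Democrat":
        return 2
    if lean == "Republican":
        return 4
    return 3  # held out through the probes as a true Independent

print(party_id_5pt("something else", lean="Democrat"))  # 2 -> Democratic leaner
```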
Tuesday, February 28, 2012
What Republicans Know
The most politically knowledgeable conservative Republicans, according to this report, are more likely to support Rick Santorum for the GOP nomination. The most knowledgeable moderate Republicans? They prefer Mitt Romney. Wait! What about Ron Paul? According to the national survey by Fairleigh Dickinson University:
“Conservatives who are paying attention to current events are pleased with Santorum,” said Dan Cassino, a professor of political science at Fairleigh Dickinson University, and an analyst for the PublicMind Poll. “The opposite happens with Ron Paul. The more conservatives pay attention, the less they like what they hear from him.”

Essentially, the more knowledgeable a Republican is, the more likely he or she is to gravitate toward a candidate that, arguably, makes sense given their ideological predispositions. A lot of Paul's positions cut across traditional partisan and ideological boundaries, so it's logical (not to Paul fans, I admit) that the more knowledgeable a conservative Republican is, the less they like what he has to say.
And if you're a Romney fan, this isn't half bad news. After all, the most knowledgeable moderates prefer him, and once it gets to a general election that's where the real battle lies, for the moderate vote (though you could argue Romney will need to appeal more to conservatives than other GOPers may have to, and that's a plausible argument, plus he's got to do well in today's primaries).
So here's the more complete report, and by complete I mean no real information on the kinds of knowledge or current events questions asked. But there's some good stuff here if you like to have fun with data. Take, for example, the following:
Conservative Republicans who were unable to answer any questions correctly and thus rank low on the knowledge scale have a 31% chance of supporting Romney, a 16% chance of supporting Paul, and a 19% chance of supporting Santorum.
In English, the less knowledgeable conservatives went with Romney. Why? Because the less knowledgeable you are, the more likely you are to be swayed by political advertising, various appeals, and even emotions. Romney so far has outspent the others, so this makes sense. Yes, Santorum makes more emotional appeals, but outside of cable news he's not getting quite the airtime that Romney gets through big spending.
Monday, February 27, 2012
Where People Learn About Food
It's not really a shocker, but a new survey says people rely not merely on the Internet for info about food and recipes and meals, but specifically on social media.
As it says:
Study results show almost half of consumers learn about food via social networking sites, such as Twitter and Facebook, and 40 percent learn about food via websites, apps or blogs. "Consumers used to rely on mom and family traditions for meal planning, but now search online for what to cook, without ever tasting or smelling," said Laurie Demeritt, president and COO at The Hartman Group. "Digital food selection is less of a sensory experience and more of a visual and rational process: What's on the label? What's in the recipe? Show me the picture!"
Again, not so much a surprise as really kinda validating what we'd expect as the world changes. There's a neat typology further down the article, breaking people into categories in how they use the Net for info about food and the like. Worth a look.
Wednesday, February 22, 2012
Booze in the News
Here's a shocking lede:
A new study published in the journal Drug and Alcohol Review reveals that young people do not possess the knowledge or skills required to adhere to government guidelines for responsible alcohol consumption.
Okay, not shocking if, like me, you happen to work at a university whose students tend to stagger their way to near the top of the annual "top party school" list. The story above, based on survey work in England, found that for five of the seven knowledge questions, fewer than half of respondents gave the correct response.
The actual study is here. A couple of things to note. First, the authors do an excellent job with their title, following Hollander's Academic Title Rule. That is, using titular colonicity, you put in something clever, followed by a colon, followed by what the study is about. In this case:
My cup runneth over: Young people's lack of knowledge of low-risk drinking guidelines
See how it's done, children?
Second point, to really get at this, you have to read the full report. Well, you don't, because I'm here to do the ugly work for you, especially in digging into how they measured (every pun intended) drinking knowledge. It's pretty damn obscure. Here's the main part from the study methodology:
Participants indicated what they believed to be: the volume (in mL) of pure ethanol in a ‘unit’; and government guidelines (in units) for maximum weekly intake for men and women, maximum daily intake for men and women, and binge drinking for men and women. The number of correct responses was recorded as a total knowledge score. Respondents used 5-point Likert scales anchored with the end-points ‘not at all’ and ‘extremely’ to indicate: how familiar they were with the concept of ‘units’; how useful they believed the concept of ‘units’ to be; and how useful it would be to have more information about ‘units’.

Respondents estimated the alcohol unit content of 10 drinks selected to cover different sized servings of three types of alcoholic drinks consumed by young people. Colour pictures of each drink were accompanied by brief descriptions: red wine: 250 mL large glass; red wine: 175 mL standard glass; regular strength beer: pint (568 mL); Stella Artois lager: 330 mL bottle; Stella Artois lager: 500 mL can; Carling lager: 440 mL can; Carlsberg lager: 275 mL bottle; mixed drink—for example vodka and tonic: pub measure; Smirnoff Ice mixed vodka drink: 275 mL bottle; spirit—for example whisky: 25 mL pub measure. Estimates were dichotomised as outside or within 10% of the actual alcohol unit content [9]. Participants were given a score denoting the proportion of estimates within this range.

The above sounds more complicated than it really was. Essentially, how much booze is in these various drinks. Get 10 percent off, you're coded as incorrect. And a lot of answers were incorrect.
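To make the scoring rule concrete, here's a rough sketch of how that dichotomisation works. The unit formula (volume in mL times %ABV, divided by 1,000) is the standard UK definition; the ABV figures and the respondent guesses below are illustrative stand-ins, not the study's actual data.

```python
# Rough sketch of the scoring rule quoted above: an estimate counts as correct
# only if it lands within 10% of the drink's actual alcohol-unit content.
# ABV figures and guesses are illustrative stand-ins, not the study's data.
def units(volume_ml, abv_percent):
    """UK alcohol units: (volume in mL x %ABV) / 1000."""
    return volume_ml * abv_percent / 1000.0

actual = {
    "red wine, 250 mL large glass (13% ABV)": units(250, 13),  # ~3.3 units
    "regular beer, 568 mL pint (4% ABV)": units(568, 4),       # ~2.3 units
    "whisky, 25 mL pub measure (40% ABV)": units(25, 40),      # 1.0 unit
}

def knowledge_score(estimates, tolerance=0.10):
    """Proportion of estimates within 10% of the actual unit content."""
    hits = sum(
        1 for drink, guess in estimates.items()
        if abs(guess - actual[drink]) <= tolerance * actual[drink]
    )
    return hits / len(estimates)

guesses = {
    "red wine, 250 mL large glass (13% ABV)": 2.0,  # too low -- coded incorrect
    "regular beer, 568 mL pint (4% ABV)": 2.2,      # close enough
    "whisky, 25 mL pub measure (40% ABV)": 1.0,     # spot on
}
print(knowledge_score(guesses))  # ~0.67: two of three estimates within 10%
```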
The takeaway? When it comes to what people know, booze is one of those things they can get wrong.
Monday, February 20, 2012
TV is for Recognition, Print is for Recall
I wrote a few days ago about recall vs. recognition and the challenges of coding open-ended questions that attempt to measure a respondent's political knowledge. Without repeating that post at length, let me just say that measures of recognition are a lot like those multiple choice tests we remember so well from school, while recall questions are more open-ended.
I am in the middle of a study that looks at how people differ in their ability to answer either recognition (closed-ended) questions versus recall (open-ended) questions. As an example, I might ask you who is Speaker of the House and provide four possible answers, with the correct response (at the moment) of John Boehner included. You'd be likely to either know it or, perhaps, recognize the right person from the list. A recall version would simply ask what office is held by John Boehner? It's a cognitively more difficult task, that last one, because you have to pull it from memory without any hints or help.
Okay, fine. So how's the study looking? Read through this, because it gets to something I find kinda interesting.
Early in, as expected, I find people are much better at answering the recognition versus the recall questions.
In my John Boehner example above, 70.4 percent correctly answered the recognition version, but only 49.1 percent correctly answered the recall version. The same is true for Joe Biden as VP (94.8 percent on recognition, 83.2 percent on recall), John Roberts as Chief Justice (69.9 percent on recognition, 33.1 percent on recall), and David Cameron as Prime Minister of the U.K. (46.5 percent on recognition, 23.1 percent on recall). These results are tentative.
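For anyone who prefers the numbers lined up, here's a tiny sketch that tabulates the recognition-versus-recall gaps quoted above. Nothing fancy -- the percentages are the ones in this post.

```python
# Tabulate the recognition vs. recall gaps quoted above (percent correct).
items = {
    "Boehner (Speaker)":       (70.4, 49.1),
    "Biden (VP)":              (94.8, 83.2),
    "Roberts (Chief Justice)": (69.9, 33.1),
    "Cameron (UK PM)":         (46.5, 23.1),
}

for name, (recognition, recall) in items.items():
    gap = recognition - recall
    print(f"{name}: recognition {recognition}%, recall {recall}%, gap {gap:.1f} points")
```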
So, why does this matter? In part it matters from a methodology point of view, in how we attempt to gauge what people know. One method paints a dismal portrait of the public's knowledge; the other, a somewhat less dismal one.
But for me, the interesting issue is in how use of various media may lead to greater success on one versus the other. For example, my underlying hypothesis is the ephemeral nature of television news (both in how news is presented and in how we tend to watch it) makes viewers more likely to do better on recognition questions as compared to recall questions.
So far, my analysis supports this idea, even after statistical controls for a host of other likely factors (education, political interest, etc.). I love it when data confirm a good theory. It doesn't happen often enough, at least for me.
Interestingly, use of Internet news sites seems to be replacing the reading of the print newspaper as the single best media predictor of knowledge. There's probably a study just in that topic.
More to the point, so far I'm finding that getting the news from Internet news sites is the only media exposure question that is associated with recall (open-ended) questions, while only television exposure is significantly associated with recognition (closed-ended) questions. That's a great finding if it holds up to more scrutiny. My models so far are fairly rigorous, but there's a lot of "under the hood" work to do before writing it all up and submitting the results to an academic publication. Still, I'm hopeful.
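The post doesn't spell out the models, so take this only as a hedged sketch of the general approach: regress the recall and recognition scores on media exposure plus the usual controls. All the variable names here (tv_news, internet_news, recall_score, and so on) are hypothetical, as is the data file.

```python
# Hedged sketch of the kind of model described above -- not the actual analysis.
# All variable names and the data file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical respondent-level data

controls = "education + political_interest + age + female"

recall_model = smf.ols(
    f"recall_score ~ tv_news + internet_news + newspaper + {controls}", data=df
).fit()
recognition_model = smf.ols(
    f"recognition_score ~ tv_news + internet_news + newspaper + {controls}", data=df
).fit()

# The pattern described above would show up as a significant internet_news
# coefficient in the recall model and a significant tv_news coefficient
# in the recognition model.
print(recall_model.summary())
print(recognition_model.summary())
```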
The news here? The medium in some ways remains the message. It's about depth of processing and how the news is presented, but most of all it's about how people understand the world. Relying on TV is great for recognizing someone, but not so great when it comes to pulling out a difficult piece of information from memory. Nothing beats reading, and apparently news from the Internet is supplanting news from paper newspapers in this regard.
Thursday, February 16, 2012
Revisited: Recall vs. Recognition
I've written quite a bit about the differences between recall and recognition when it comes to measuring political knowledge (see, for example, here). I've also published research on the topic. Well, I'm back.
First, a quick-and-dirty theory/methodology lesson. Recognition tests are the ones we often see in survey research, such as "Who is the Vice-President?" and four choices given. Recall questions are harder. "Who is the Vice President?" with respondents required to generate a name from memory, that's a cognitively more challenging question.
Case in point: John Boehner, the Speaker of the House.
In some national data I am analyzing now, a random half of the respondents were asked to name the Speaker of the House and were provided four names (a recognition question). The other half of respondents were provided the name John Boehner and asked "what job or political office does he now hold?" That's recall.
As you'd expect, fewer people got the recall question correct.
Only about half of those asked the harder recall question managed to identify Boehner as the Speaker of the House. However, on the recognition question, 70 percent got it right. I am still coding other similar questions (VP, for example), cleaning data, and then I'll turn my attention to the predictors of who gets the recall versus the recognition questions correct. Setting aside a host of other likely predictors, based on theory, I expect respondents who rely more heavily on print media to do better on the recall question as compared to the recognition question. In other words, I'm guessing (hypothesizing) that television news helps people with recognizing a political actor but not so much in free recall of his or her name (or office).
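If you want to see what that split-ballot comparison looks like as a calculation, here's a small sketch using a simple two-proportion z-test. The counts are illustrative stand-ins that roughly match the percentages quoted above, not the actual cell sizes.

```python
# Small sketch of comparing correct-answer rates across the two ballot forms.
# Counts are illustrative stand-ins, roughly matching the percentages above.
from statsmodels.stats.proportion import proportions_ztest

correct = [350, 250]  # correct answers: recognition form, recall form
asked = [500, 500]    # respondents randomly assigned to each form

z, p = proportions_ztest(correct, asked)
print(f"recognition: {correct[0] / asked[0]:.0%}, recall: {correct[1] / asked[1]:.0%}")
print(f"z = {z:.2f}, p = {p:.4f}")
```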
Anyone out there doing similar research, I'd love to hear from you.
I'll write a bit more on this as my research progresses, in part because it allows me to think aloud, in part because I'm curious as to whether others are also engaged in similar research. There are several good recall vs. recognition studies out of advertising and marketing, but relatively few out of political science or political communication.
Tuesday, February 14, 2012
Being in the Minority ... Makes You Slow
He who hesitates ... must be in the minority.
This new study, at least from what I can tell from only the abstract, does a whole lot of things right.
- First, it uses the first rule of crafting an academic title -- come up with something clever, toss in a colon, and finish with what the study is really all about. In this case, the title was: "Hesitation Blues: Does Minority Opinion Status Lead to Delayed Responses?" Titular colonicity at its best.
- The study is also on a topic I really like, how those in the majority or the minority differ. There's a long and storied history of this work that ranges from psychology to political science to, yes, even mass communication.
- Plus it hits on a favorite theoretical area of mine, spiral of silence, or at least I think it does. Hard to tell from only the abstract.
- And finally, it uses a neat measurement scheme, in this case response latencies (academese for, best I can tell, how long it takes people to respond as either liking or disliking something).
Perhaps knowledge pops up as one of the "individual differences" found to moderate the effect. In all, this appears to verify how being in the minority on an issue, long believed to influence people to not speak out as often, may have at its root a caution about speaking out that is measurable by very subtle methods. Neat.
Monday, February 13, 2012
Knowledge and Acceptance
Here's a survey that essentially argues the more we know about something, the more accepting we are. The something in this case is the "smart grid." You probably know the "smart grid" best in terms of those "smart meters" they're sticking on houses to measure and help control power costs and consumption.
(as an aside, the wingnuts out there are convinced such smart meters are part of a UN plot to take over the world)
To get to the knowledge part, scroll down the story to the Knowledge is Key subhead. The story tells us, for example, that "there is a strong correlation between basic knowledge and willingness to change behavior patterns to meet broad goals." What it fails to tell us, of course, is how the hell they measured knowledge. And there are other issues. More on those in a moment.
More fun: "However, this pattern is reversed in issues related to privacy. Here, the more knowledge consumers had about energy, the more concerned they were with privacy issues with home energy-usage data."
In Other Words: the more people knew about the smart grid, the more they liked it but the more they worried about it too.
In Other Words Part 2: the more people knew about the smart grid, the more they could answer questions about it in a direction that makes, let's face it, common sense.
And finally -- irony alert. The folks who conducted the survey, IBM, also happen to sell technologies having to do with, yes, smart grids and smart meters. With some rooting around I found this page that gives some of the survey details (an N of 15,000 across 15 countries). Here's my favorite. It looks as if part of the knowledge test involved whether the respondents had heard of smart grids, which in turn is (gasp!) associated with their attitudes toward smart grids.
Our final lesson? Surveys of this type are designed to get the results you want, which in turn become press releases and web fodder and, eventually, uninteresting blog posts by myself.
Labels: ibm, smart grids, smart meters, survey methodology
Monday, February 6, 2012
Quality PR Matters
My colleague, PR prof and guru Karen Russell, will love this -- evidence that good PR matters. According to this study, higher quality press releases from medical journals result in higher quality news stories. Information subsidies, confirmed.
I can also stretch this to what people know, of course, by pointing out the following causal connection:
Quality PR --> Quality News --> Better Learning about Medical News
They don't test this, unfortunately, but I'm confident in the causal relationship.
Thursday, February 2, 2012
Coding Hell
This is a problem for many who do research in political knowledge. What's a right answer?
This gets a bit complicated. Hang in there.
If I ask respondents what office John Boehner holds, the right answer is obvious -- Speaker of the House. But what if a respondent answers congressman or member of Congress? They're not wrong answers, but are they right enough to code as being correct? And what if they just say Republican leader? That's right too. Not what I asked -- name his office -- but not so very wrong either.
I've blogged about this several times, most recently here as I discussed an excellent study that looked at "partial right" responses to surveys. The result there? Folks who give kinda sorta right answers are an awful lot like the folks who give the completely right answers. This is an enduring problem in political knowledge studies, especially as the ANES wrestled with coding issues of its knowledge question about the Supreme Court (see my link above, track back to earlier posts on this one, including a technical report).
Simply put -- what's a right answer?
I say this as I fiddle with data containing just these kinds of questions, where I have to go in and decide what's a right (or wrong, or sorta wrong, or kinda wrong) answer. My initial response is to code them in multiple ways (absolutely correct, right but not what I was looking for, not right but not wrong, wrong, and no answer). That's five different codes. We often dump the non-responses and incorrect responses together (that's a different methodological problem for another day). So if a respondent didn't answer, or answered incorrectly, that'd be a "0" for that question, a "1" if they got it right. Then we'd sum their correct answers on an index designed to measure, yup, political knowledge.
But a lot of "0" responses sometimes are not so very wrong. And is an incorrect answer the same as no answer? Not really. There's work out there that suggests, for example, that men are more willing than women to guess on these kinds of questions. They get some of those guesses right, those end up with higher scores since we often through coding do not punish being wrong any more than simply saying "I dunno." In other words, we treat both the same -- and probably shouldn't.
I'll probably use the multiple coding scheme above, collapse them if I don't see significant differences, and move on. But it's a serious issue that, to be honest, raises serious questions about a host of previous studies on the topic.
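For the record, here's a minimal sketch of the five-code scheme described above, plus the conventional collapse to a 0/1 index. The specific answer strings and which bucket each one lands in are my own illustration -- this is not the actual coding protocol.

```python
# Minimal sketch of the five-code scheme described above, plus the usual
# (and arguably problematic) collapse into a 0/1 correct/incorrect score.
# The answer-to-code mapping is illustrative, not the actual coding protocol.
FULLY_CORRECT, RIGHT_NOT_ASKED, NOT_WRONG, WRONG, NO_ANSWER = 4, 3, 2, 1, 0

def code_boehner_answer(answer):
    """Illustrative coding for 'What office does John Boehner hold?'"""
    a = answer.strip().lower() if answer else ""
    if not a or "don't know" in a or a == "dunno":
        return NO_ANSWER
    if "speaker" in a:
        return FULLY_CORRECT
    if "congress" in a or "representative" in a:
        return RIGHT_NOT_ASKED  # right, but not what the question asked for
    if "republican leader" in a or "gop leader" in a:
        return NOT_WRONG        # not the office, but not so very wrong either
    return WRONG

def collapse(code):
    """The conventional collapse: only fully correct answers earn a point."""
    return 1 if code == FULLY_CORRECT else 0

answers = ["Speaker of the House", "member of Congress", "Republican leader", "senator", ""]
codes = [code_boehner_answer(a) for a in answers]
print(codes, sum(collapse(c) for c in codes))  # [4, 3, 2, 1, 0] 1
```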