The latest issue of Public Opinion Quarterly includes an article asking a simple question: should survey questions about political knowledge include a "don't know" response option, or should respondents be forced to guess when they do not know the answer?
A series of articles by Jeffery Mondak and his colleagues suggests that a forced choice may be the better design. Encouraging "don't knows," they argue, "invites a guessing response set" perhaps linked to personality differences, and thus introduces bias.
Most measures of political knowledge include a "don't know" or "no answer" alternative. Is there a systematic bias? Can this screw up the results in some underlying way? A POQ piece by Patrick Sturgis, Nick Allum, and Patten Smith that showed up in my mailbox today suggests that "don't know" is, at least from what we can tell today, a perfectly fine approach to questions of knowledge. Their experiment finds no obvious distortions, although they recommend further work to tease out potential issues.
My own gut feeling is that "don't know" is a good alternative, since we can't really tell which responses are guesses, or separate good guesses from bad ones that turn out to be correct by sheer dumb luck. I've also wondered whether 1 = correct, 0 = otherwise is a good coding scheme for knowledge items. Shouldn't an incorrect response be coded as -1, with 0 reserved for no answer or don't know, and 1 for correct responses? An incorrect response seems to me qualitatively different from no answer at all. And yet, like many others, I use the 1 if correct, 0 otherwise approach.
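To make the contrast concrete, here is a minimal sketch of the two schemes in Python, using hypothetical response labels ("correct", "incorrect", "dk") rather than any real survey data:

    # Hypothetical raw responses to a battery of knowledge items;
    # "dk" stands for a "don't know" / no answer response.
    raw_responses = ["correct", "dk", "incorrect", "correct", "dk"]

    # Conventional binary coding: 1 if correct, 0 otherwise.
    # A wrong answer and an honest "don't know" look identical.
    binary = [1 if r == "correct" else 0 for r in raw_responses]

    # Trichotomous coding: an incorrect answer (-1) is treated as
    # qualitatively different from no answer at all (0).
    scheme = {"correct": 1, "dk": 0, "incorrect": -1}
    trichotomous = [scheme[r] for r in raw_responses]

    print(binary)        # [1, 0, 0, 1, 0]
    print(trichotomous)  # [1, 0, -1, 1, 0]

The difference matters once you sum the items into a scale: under the binary scheme a confident wrong answer costs nothing relative to abstaining, while the trichotomous scheme penalizes it.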
Old habits, even old methodological habits, die hard.