A Methodological Moment
An interesting Public Opinion Quarterly piece follows a line of research on the measurement of political knowledge and whether we should encourage or discourage "don't know" responses. Using an online survey, the authors argue that omitting the "DK" option works fine and removes self-confidence and other psychological factors that influence results.
It's a fine analysis with an excellent discussion. Given the holiday season and grades to be recorded, I'll touch on only a few key points.
"With the DK-omitted strategy, confident risk takers are no longer privileged relative to those who lack confidence or are risk averse."
First, the writer in me absolutely hates the use of privileged in this instance. Setting that aside, the basic point is that people willing to guess often get items right even when they don't actually know the answer. Removing DK takes this out of the equation.
"While estimates of political knowledge have been consistently low, it is clear that question design may be a confounding factor. As web surveys become more ubiquitous, the DK omitted strategy is an increasingly viable option for measuring political knowledge."
I agree, but worry about how this applies to traditional telephone or F2F (face-to-face) surveys. "I dunno" is an easy out, true, but in those person-to-person interactions, denying respondents an out might create survey difficulties. Hard feelings. Frustration. Guessing. All bad, all adding random error or perhaps even systematic error. In most surveys, the "don't knows" are collapsed with incorrect answers and scored as a "0", with correct responses scored as a "1" in some kind of summed index. In other words, "don't know" often counts the same as being wrong.
(As an aside, I prefer a -1 for getting it wrong, 0 for dunno, 1 for being right, but I've not used it very often because the results look just about the same as the traditional approach.)
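For concreteness, the two scoring schemes can be sketched in a few lines of Python. The response labels below ("correct", "incorrect", "dk") are hypothetical codes for illustration, not from the article:

```python
# Two ways to score a battery of political-knowledge items.
# Responses are coded "correct", "incorrect", or "dk" (don't know);
# these labels are illustrative, not from any actual codebook.

def score_traditional(responses):
    """Traditional summed index: DK collapsed with incorrect (0), correct = 1."""
    return sum(1 if r == "correct" else 0 for r in responses)

def score_three_point(responses):
    """Alternative index: wrong = -1, don't know = 0, correct = 1."""
    values = {"correct": 1, "dk": 0, "incorrect": -1}
    return sum(values[r] for r in responses)

answers = ["correct", "dk", "incorrect", "correct", "dk"]
print(score_traditional(answers))  # 2
print(score_three_point(answers))  # 1
```

Note that the two indices diverge only when guessers answer incorrectly; respondents who skip wrong guesses in favor of "don't know" score identically under both, which is consistent with the two approaches often producing similar results.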
But as surveys move more and more online, the arguments here are well taken and important for those who research what people know.