A probe in survey lingo is a follow-up question prompted by a respondent's failure to answer a previous question. For example, in the 2008 ANES, respondents were asked to identify Nancy Pelosi. If they could not or did not answer, the interviewer was trained to prompt with "Well, what's your best guess?"
That's a probe.
Not all surveys probe an initial lack of response, and this can make a significant difference if we're studying something like political knowledge. For example, the ANES made available a redacted version of the open-ended responses to certain questions, including those measuring political knowledge (download a zip file of the Excel document here). It's interesting. You can see the various open-ended responses, which I've blogged about previously, but the file also includes a column recording the probes. If a respondent gave an initial answer--right or wrong--the probe code is 5 (no probe) or one of the other codes that cover different situations.
A "1" in the probe column meant the interviewer asked for the respondent's best guess. And sometimes the probe resulted in a respondent "guessing" correctly. How often? There were 1,294 instances of probes of the Pelosi question. Roughly counting, I estimate at least a hundred instances, perhaps more, where the probe resulted in a correct answer. And that's only looking at the Pelosi question.
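If you wanted to do that rough count yourself, a tally like the one I describe is easy to script once the Excel file is loaded into a data frame. A minimal sketch, assuming hypothetical column names `probe_code` and `correct` (the actual ANES variable names and coding of "correct" would need to be mapped from the codebook):

```python
# Sketch: tally probe outcomes for a knowledge question.
# NOTE: column names and the toy data below are assumptions for
# illustration, not the actual ANES variables or values.
import pandas as pd

data = pd.DataFrame({
    "probe_code": [5, 1, 1, 5, 1, 1],  # 5 = no probe, 1 = "best guess" probe
    "correct":    [1, 1, 0, 0, 1, 0],  # hypothetical correctness judgment
})

probed = data[data["probe_code"] == 1]
n_probed = len(probed)
n_probed_correct = int(probed["correct"].sum())
print(n_probed, n_probed_correct)  # prints: 4 2
```

With the real file, `n_probed` would be the 1,294 Pelosi probes, and `n_probed_correct` is the quantity I was eyeballing above.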
In other words, a probe can definitely influence the results, which I suspect has some bearing on analyses. And this doesn't even get into what is considered a "correct" versus an "incorrect" response.
If I were so inclined, I'd do a paper on the power of probes to elicit a correct versus an incorrect response, and then position these competing approaches to political knowledge against key variables to see whether the probe improves results. Honest, I'd do it, except I don't know where the heck I'd publish something like this. Public Opinion Quarterly? Dunno, 'cause I'm not sure I'm smart enough to successfully publish there. It's full of folks far brighter than myself.