I've written extensively about research into how people lie (or fib, or fudge, or exaggerate) in surveys. The most recent issue of Public Opinion Quarterly includes a study that examines viewing of U.S. presidential debates.
People lie. Or at least they exaggerate their viewing, much as respondents in previous studies claimed they voted when they didn't, or inflated how often they consume the news, and so on.
In the POQ study, Markus Prior compared survey questions about debate exposure with a benchmark: Nielsen ratings of the presidential and vice presidential debates from 2000 through 2008. Between 47 and 63 percent of voting-age residents said they watched the presidential debates, roughly double what the Nielsen ratings suggest is the more likely number.
But is Nielsen a good benchmark? Probably so, and Prior addresses this by examining the data in several ways to make his point clear: surveys are a lousy way to measure exposure to a major political event. They may be all we have, but they are so riddled with error and inflation that they may distort studies seeking to understand the effects of debate viewing on opinions and voting preferences.