Thursday, June 30, 2016

Suicide

A CDC research paper out today breaks down what we know about suicide by profession (scroll down to Tables 1 and 2). There's some interesting and important stuff here. Also read this story, which alerted me to the original paper.

First, as a university professor, Table 1 caught my eye because "students" come in at #6 on the list, which is scary, with 665 suicides reported. Keep in mind this is a raw count of suicides regardless of how many people are in each category. Among students, 74 percent of suicides were by males. No surprise that the bulk of student suicides fall in the 16-24 age cohort.

OK, but there are a lot more students than fishermen, for example, so Table 2 is a better tool for understanding the problem. According to Table 2, in suicides per 100,000 population, the worst occupational group is farming, fishing, and forestry. The top three occupations are all physical jobs, and "production" at #4 is probably the same. The highest-ranked "white collar" group is architecture and engineering at #5.
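For the nerds, here's a minimal Python sketch of why rates beat raw counts. The 665 student count is from the paper; the denominators and the fishing count are invented for illustration, not CDC numbers.

```python
# Why Table 2 (rates) beats Table 1 (counts). The 665 student suicides
# are from the CDC paper; every denominator (and the fishing count)
# below is invented for illustration.
count_students, n_students = 665, 20_000_000
count_fishing, n_fishing = 230, 1_000_000

for label, count, n in [("students", count_students, n_students),
                        ("farming/fishing/forestry", count_fishing, n_fishing)]:
    print(f"{label}: {count} suicides, {count / n * 100_000:.1f} per 100,000")
```

The bigger group has more raw suicides but a much lower rate, which is the whole point of Table 2.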

Students don't show up on Table 2 and, best I can tell, are not folded into any other occupational group.

Given that I'm a journalism prof, I looked for that occupation, but it appears to be folded into a broad "Arts, design, entertainment, sports, and media" category that comes in at #7 with 24.3 suicides per 100,000.


Finding Out About Science

Here's an interesting couple of questions from the General Social Survey. Note the slight differences.
We are interested in how people get information about science and technology. Where do you get most of your information about science and technology?

If you wanted to learn about scientific issues such as global warming or biotechnology, where would you get information?
The first question is generic ("science and technology"), while the second aims specifically at "global warming or biotechnology." The two questions have identical response alternatives: newspapers, magazines, the internet, books, TV, radio, government agencies, family and friends, and "other."

What I'm curious about is how often people shift their source of info when "global warming" pops up versus the more generic "science and technology." And I'm really curious as to whether certain types of folks (religious, conservative, etc.) are even more likely to downshift from mainstream news to other sources when that magic phrase "global warming" appears on their radar.

For example, 18.2 percent of respondents identified newspapers or magazines as a generic source of science info, but only 9.9 percent identified those two sources for the more specific "global warming or biotechnology." The internet was the generic source for 31.4 percent, but when we get specific about global warming it jumps to 56.3 percent. That's big. (Nerd Note: I'm collapsing these results across the six years these two questions were asked.)

As a further example, among those who identified newspapers as a generic science source, only 29 percent identified it as a source for the specific global warming or biotech question. A lot of these "newspaper" generic folks shifted to the internet (36.6 percent of them); the rest scattered among other sources. Something about "global warming or biotechnology" sent these newspaper folks scurrying to the net. Among strong Republicans, 40.7 percent left newspapers for the internet, while among strong Democrats only 28.3 percent did. In other words, when the "global warming" thing appeared, GOPers were more likely to move to a source with more personal control, one where they could seek out the sources they prefer.
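If you want to replicate this kind of source-shifting analysis, here's a rough pandas sketch of the logic. The column names are my own shorthand, not the GSS's actual variable mnemonics, and the toy data are made up.

```python
import pandas as pd

# Each row is a respondent; each cell names their chosen source.
# Toy data, and my own column names, not GSS mnemonics.
df = pd.DataFrame({
    "generic_source":  ["newspapers", "newspapers", "internet", "tv"],
    "specific_source": ["internet",   "newspapers", "internet", "tv"],
})

# Among those naming newspapers as their generic science source,
# where do they go when the question turns to global warming?
newspaper_folks = df[df["generic_source"] == "newspapers"]
print(newspaper_folks["specific_source"].value_counts(normalize=True) * 100)
```

Split the same crosstab by party ID and you get the Republican vs. Democrat comparison above.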

Here's a quick and dirty breakdown with only the top three sources provided. Note there are few real differences. The two party IDs are about the same on the generic question, but GOPers jump up on the internet as a source. Strong partisans of both parties name the internet first on the more specific question; it's just that Republicans lean even more heavily that way. Newspapers disappear from the top three when it gets specific.

Generic Science Sources

Strong Democrats: 38.1 percent TV, 28.1 percent internet, 11.7 percent newspapers.
Strong Republicans: 37.2 percent TV, 31.3 percent internet, 10.2 percent newspapers.

Specific (global warming) Sources

Strong Democrats: 50.5 percent internet, 22.1 percent TV, 9.0 percent books.
Strong Republicans: 56.3 percent internet, 19.0 percent TV, 8.0 percent books.


Cops Hitting Citizens

It's not a hot topic at the moment, but we've had lots of controversy about how police deal with citizens, particularly African-American citizens. I was messing with General Social Survey data today on something else entirely and came across this question asked from 1973 to 2014. It asks:
Are there any situations you can imagine in which you would approve of a policeman striking an adult male citizen?
Two-thirds of American adults are OK with this. Still, there's been a slight decrease in the percentage who say "yes" to this question (see graphic below). The high was 79.3 percent in 1983 and the low was 66.1 percent in 2014. The trend, ever so gently, is down.



But this fails to capture the obvious question -- in what situations would this be OK? Luckily those data are available as well. When we get specific, the numbers drop.

  • Far fewer believe vulgar or abusive language is reason enough for an officer to strike a citizen. The percentage who say "yes" to this was about one in five in the early years but down to 8.9 percent by 2014. That's a significant drop.
  • A murder suspect isn't much of a reason either, according to the data. Only 8.5 percent said it was OK back in 1973, but the number climbed in later years to about 14 percent saying "yes."
  • Striking a citizen who is attempting to escape, however, appears to be broadly acceptable, with three-fourths saying it was OK in the early years and two-thirds by 2014.
  • Finally, the survey asked: what if the citizen attacked the cop first with his fists -- OK then to strike him? Seems obvious, and over nine out of 10 respondents agreed. Even so, 97 percent said so back in 1973 and only 88 percent by 2014, so even that reason has lost ground.

Race makes a difference (duh). While over time about three-quarters of whites say there are situations where an officer should strike a citizen, rarely is the number above half for black respondents.

Where's the media angle? I don't have much of one, as the GSS asks crappy media questions. Watching television makes you slightly, but statistically significantly, more likely to say no, as does reading the news. It's not a huge relationship and would probably disappear if I controlled for other factors like race, education, and the like.

Wednesday, June 29, 2016

New Poll Says It's A Tie

A new Quinnipiac poll out today says the U.S. presidential race is a statistical tie, with Hillary Clinton leading Donald Trump 42-40 percent. Given the poll's 2.4-percentage-point margin of error, that's a statistical tie. This is surprising given that one recent poll had Clinton up by as much as 12 percentage points and another by 5.

This poll will cause gnashing of teeth among Clinton fans and exuberance among Trump supporters. Trump, who complained that polls are biased and underestimate his strength, will finally have one he can appreciate.

My message is this -- never pay much attention to any single poll. Look at the poll average, which in one case has Clinton up 6.2 percentage points. I mention this not to mollify the Clintonites or piss off the Trumpsters, just to say that the rolling average, especially so early in the campaign, is a better guide. Any single poll can be quirky, so buyer beware.

This poll has a helluva sample, 1,610 respondents, and was weighted to reflect region of the country, gender, education, age, and race. It called landline and cell phones. There's little to quibble with, methodologically. And of course the usual caveat comes into play: the election is not about nationwide votes but rather electoral votes, state by state. Yes, I know.
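For the nerds: the textbook margin-of-error formula at a worst-case 50/50 split reproduces the reported 2.4 points from the 1,610 respondents. A minimal sketch (it ignores any design effect from the weighting):

```python
import math

n = 1610   # Quinnipiac's sample size
p = 0.5    # worst case for the standard error
z = 1.96   # 95 percent confidence

moe = z * math.sqrt(p * (1 - p) / n)
print(f"margin of error: +/-{moe * 100:.1f} points")  # ~2.4
```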


NerdStuff

Poll Details here
Additional methodological details here

Tuesday, June 28, 2016

Post Brexit, and Regrexit

I've written before about how asking people who they think is going to win is often more accurate than the traditional survey asking respondents who they are for.

In the case of Brexit -- this didn't work.

As these SurveyMonkey data collected after the historic vote demonstrate, more people expected Remain to win. Here's the basic breakdown. I don't have access to the "don't knows" or refusals or such, so percentages reflect the prediction based on respondents who answered this question.

  • Remain win by a lot (434, or 11.7%)
  • Remain win by a little (2,659, or 71.7%)
  • Leave win by a little (476, or 12.8%)
  • Leave win by a lot (138, or 3.7%)
The percentages above do not equal 100 due to rounding.

OK, that's all well and good. Brits are lousy prognosticators. Turns out, 80 percent of those who said they voted to Leave also expected Leave to win. But among those who voted to Remain, only 55 percent thought their side would win. 

Friday, June 24, 2016

Science

Should science be used to solve our problems? The easy answer seems yes, but not everyone agrees. There's this question in the 2012 ANES:
When trying to solve important problems, how often should the government rely on scientific approaches? 
First, let's look at the distribution of responses. I have weighted the data to reflect general population parameters, so these figures differ from the raw numbers and percentages.
  • Never (239, or 4.4 percent)
  • Some of the Time (1,807, or 33.6 percent)
  • About Half of the Time (1,351, or 25.1 percent)
  • Most of the Time (1,471, or 27.3 percent)
  • Always (511, or 9.5 percent)
If you score this as a 1-to-5 variable, with 1 being "never" and 5 "always," you get a mean of 3.0 and standard deviation of 1.1. It has a reasonable distribution. 
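For the curious, here's how a weighted mean and standard deviation fall out in numpy; the scores and weights below are invented, not the actual ANES file.

```python
import numpy as np

# 1 = never ... 5 = always; weights stand in for ANES sampling weights.
# All values invented for illustration.
scores  = np.array([1, 2, 3, 3, 4, 5, 2, 4])
weights = np.array([0.8, 1.1, 1.0, 1.2, 0.9, 1.0, 1.1, 0.9])

mean = np.average(scores, weights=weights)
sd = np.sqrt(np.average((scores - mean) ** 2, weights=weights))
print(f"mean = {mean:.1f}, sd = {sd:.1f}")
```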

As you'd expect, certain factors are negatively associated with a belief in science being used to solve our problems. Below are some correlations. To explain, a negative number means the greater that variable, the less you support scientific approaches to solve problems. A positive coefficient means the greater that variable, the more you support such approaches. Below the table, my comments.

Variable                      Correlation
Born-Again Christian             -.15*
Attend Religious Services        -.16*
Pray                             -.15*
Religion as Guide                -.17*
Literal Bible                    -.19*

Age                              -.09*
Education                         .23*
Income                            .09*

Party ID (GOP high)              -.14*
Ideology (conservative)          -.22*

TV News Exposure                 -.06*
Fox News Viewing                 -.11*
Internet News                     .13*
Paper Newspaper Read             -.01

Vocabulary                        .17*

Asterisks signify statistical significance. The first batch of variables is the obvious one -- religiosity -- and every one of those measures is negatively associated with the notion that government should rely on scientific solutions to our problems. No big surprise that the more you believe in a literal interpretation of the Bible, for example, the less you believe in government using scientific approaches to solve problems.

The next small group is demographic. Older respondents are less trusting of science, while those with more education or income prefer a scientific approach. Then come partisanship and ideology, and it's no surprise that the more Republican or conservative you are, the less enamored you are with scientific approaches. The media variables demonstrate the difference between those who rely on television news (and especially Fox News) and those who rely on internet-based news. Finally, the "Vocabulary" variable is just what it sounds like, a test of one's vocabulary. No surprise that those who score higher on it are also more willing to use scientific approaches.

I'm just messing with data, trying to decide whether a more comprehensive analysis is justified. I can say a quick-and-dirty regression analysis finds some factors drop out (income, prayer, television news) when controlling for all the other factors. The single most powerful predictor is education (beta = .17, p<.001). The most powerful negative predictor is ideology (beta = -.12, p<.001). Even with all these controls, watching Fox News remains a negative predictor (beta = -.04, p<.01) and Internet news reading a positive predictor (beta = .05, p<.001).
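Here's roughly what I mean by quick-and-dirty, as a statsmodels sketch: z-score everything so the coefficients read as standardized betas. The data and column names below are fabricated stand-ins, not the actual ANES variable codes, so don't expect the betas above to pop out.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Fabricated stand-in for the ANES file; my own shorthand column names.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "rely_on_science": rng.integers(1, 6, n),  # the 1-to-5 item
    "education":       rng.integers(1, 6, n),
    "ideology":        rng.integers(1, 8, n),  # 7 = very conservative
    "fox_news":        rng.integers(0, 8, n),  # days per week
    "internet_news":   rng.integers(0, 8, n),
})

# z-score so OLS coefficients come out as standardized betas.
z = (df - df.mean()) / df.std()
X = sm.add_constant(z.drop(columns="rely_on_science"))
print(sm.OLS(z["rely_on_science"], X).fit().summary())
```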

Oh, and just for fun, I found that watching The Big Bang Theory is unrelated to a belief in whether the government should use scientific methods to solve problems. No idea what that means. Just passing it along.

Addendum

I calculated an index based on four items designed to measure moral traditionalism. For example, one question asks
The world is always changing and we should adjust our view of moral behavior to those changes.
The other items are similar and they all hang together nicely as an index (Cronbach's Alpha = .71 for you stats nerds out there). The correlation between moral traditionalism and science to solve problems is -.26, p<.001, stronger than any item above. In the regression analysis it dominates compared to other factors, with beta = -.14, p<.001.
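For the stats nerds, Cronbach's alpha is simple enough to compute by hand. A minimal sketch with made-up item responses (not the actual ANES items):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: a respondents-by-items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Four hypothetical 1-5 moral-traditionalism items, already
# reverse-scored where needed so they point the same direction.
responses = np.array([
    [4, 5, 4, 4],
    [2, 1, 2, 2],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
    [1, 2, 1, 2],
])
print(round(cronbach_alpha(responses), 2))
```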







Tuesday, June 21, 2016

Trump Doubt

A fresh CNN/ORC poll is out and there's no real change in who people predict will win the 2016 U.S. presidential election. The question was asked in a poll in the field March 17-20 and again a few days ago (June 16-19). "Regardless of who you support," it asks, "and trying to be as objective as possible, who do you think will win the election in November if these are the candidates on the ballot -- Hillary Clinton or Donald Trump?"


             Clinton Win   Trump Win   Other/etc.
June Poll         55           38           6
March Poll        56           42           2

So essentially no change in expectations of a Clinton victory, but a 4-percentage-point drop in predictions of a Trump win. This could reflect the tough campaign stretch Trump suffered, shaking not only some of his support but also predictions that he can eventually pull it off. Call this Trump Doubt.

As I wrote yesterday in more detail about the March poll, you can break these down further. The new poll also includes, deep down, crosstabs on this question by gender, political party, age, etc. Here's what jumps out at me in comparing the two polls via their crosstabs:

Among Trump supporters, the percentage who predicted he would win dropped from 88 percent in March to 78 percent in June. That's startling. 

It's not unusual for people to believe their own candidate will win. Three-fourths of Mitt Romney supporters believed so in 2012. But to see a 10 percentage point drop in belief he will win, that says more in some ways than merely asking people their preference.

June Poll
March Poll

Monday, June 20, 2016

Who Will Win? Depends on Who You Ask?

I wrote here about survey questions that ask not just who you're going to vote for in November but, more interestingly, who do you think is going to win? Read that entry to understand the subtle and not-so-subtle differences between the two questions. Here, I want to break down the only Clinton v Trump question I've found so far this election cycle, one that was conducted in March by CNN/ORC. Go here if you want to wade through the results, or just follow my summary below and save yourself the pain. The question asked:
Regardless of who you support, and trying to be as objective as possible, who do you think will win the election in November if these are the candidates on the ballot -- Hillary Clinton or Donald Trump?
Fifty-six percent predicted Clinton would win, 42 percent Trump. That's covered in my previous post. Today let's dive deeper to make the point that people believe what they want to believe, and they're more likely to predict their own candidate will win. See some of the breakdowns below.

  • Among Trump supporters, 92 percent predicted he would win.
  • Among Clinton supporters, 92 percent predicted she would win. 

Let that sink in for a moment. People tend to believe their own candidate will win, a concept us PhDweeby types call wishful thinking. It's been studied in politics, sports, and a few other areas. No real surprise, but important to note. Now:

  • Majorities of both men (51 percent) and women (61 percent) predicted Clinton would win.
  • Seventy-two percent of college grads predicted a Clinton win, while 48 percent of non-grads predicted a Clinton win (50 percent predicted Trump).

No doubt the gender differences above are reflected in Clinton's success with female vs. male voters. The education effect likewise reflects the differences in the two candidates' bases of support.

  • Among Dems, 87 percent predict a Clinton victory. 
  • Among Republicans, 75 percent predict a Trump victory.
  • Among independents, 53 percent predict Clinton, 47 percent Trump.
What's important above is that fewer GOPers predict a Trump victory than do actual Trump supporters. There's a lot going on there, suggesting either uneasiness with Trump's campaign or, perhaps, a better sense of electoral reality.

Finally:
  • Urban respondents believe Clinton wins (66-33 percent)
  • Among suburban respondents, it's Clinton (56-44).
  • But, among rural respondents, 53 percent predict Trump will win to 45 percent for Clinton.
I should point out that even seeing, reading, or hearing about polls has little effect in shaking people from their loyalty to their preferred candidate. It's really difficult to break people from this wishful thinking. The more strongly people feel about the candidate or campaign, the more likely they are to fall into this trap. Consider this the Karl Rove effect, for lack of a better name, for his infamous meltdown on Fox News during the 2012 election night.

As I discussed elsewhere, three polling firms traditionally ask these "who's gonna win" questions. Two of them have yet to release any this cycle, though it should be soon, and at least one academic poll always asks this question, but those results won't come out until after the election. That one is more useful for academics like myself who want to understand why people engage in wishful thinking and what consequences such beliefs may have, such as being a "surprised loser" in an election.





Tuesday, June 14, 2016

Bad Poll Writing

Please please please, take individual poll stories with all the skepticism you can muster. Take, for example, this new poll that breathlessly reports:
Hillary Clinton's lead over Donald Trump in the U.S. presidential race has narrowed since late last week, according to the results of the first Reuters/Ipsos poll conducted since the Orlando shooting rampage on Sunday.
If you're a Clinton supporter, you're going "Holy crap." If you're behind Trump, you're cheering. If you're a journalist writing a poll story, you should know better.

Let's break it down. The difference is minuscule, from a 13-percentage-point lead to an 11.6-percentage-point lead. That's all of 1.4 percentage points, well within a good poll's margin of error. Except they don't report a margin of error. Instead, they report this:
The online poll included 1,063 likely voters and had a credibility interval, a measure of accuracy, of about 3.5 percentage points.
What the hell is a credibility interval? The polling firm explains it here if you're interested, and a more skeptical look by AAPOR is worth the read here, but it boils down to the fact that the sample in this poll is not random. That matters. It's an opt-in online poll. Now, these are being used more and more often and are not necessarily the evil they may seem to be. The 538 poll rankings give Ipsos an "A-," which is damn good. But even if we accept the "credibility interval" as a surrogate for a margin of error, the results are still within that interval. In other words, there's no real difference between the two polls. That makes the hed and lede wrong.
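Quick back-of-envelope arithmetic on why that 1.4-point "narrowing" is noise, if we generously treat the credibility interval like a margin of error:

```python
# Treating the +/-3.5-point credibility interval like a margin of
# error (a generous assumption, per the discussion above).
lead_before = 13.0   # last week's Clinton lead
lead_after  = 11.6   # this poll's Clinton lead
interval    = 3.5

change = lead_before - lead_after                 # 1.4 points
combined = (interval**2 + interval**2) ** 0.5     # error on a difference
print(change, round(combined, 1), change < combined)  # 1.4 < ~4.9: no real change
```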




Monday, June 13, 2016

University-Based Polls

There are lots of polls, more perhaps than we need. As I was skimming 538's invaluable rankings of the polls, I noticed a lot of university-based polls and wondered how they do in comparison to others. To test this I downloaded the 538 data and sorted out the polls clearly from colleges or universities (I may have missed one or two if the words "college" or "university" were not included in the name).

Before we get to my analysis, a few eyeball notes. Monmouth University's poll gets an "A+," the only school-based poll so rewarded in Nate Silver's coding scheme. The lowest grade among university-based polls goes to two places -- Brigham Young University and Millersville University, both with "D" grades. I think that counts as a failing grade, even at Brigham Young.

In all, the analysis includes 373 different polling shops, 87 of which are university-based operations (including a single poll from the University of Georgia, where I teach, graded a "C" -- ouch). The most common grade for polls, university-based or otherwise, was a "C+." See the grade distribution below.


Grade   Univ Poll   Non-Univ
A+           1           4
A            4           5
A-           5           8
B+          13          20
B           10          28
B-          17          49
C+          19          78
C           10          54
C-           4          22
D+           3           8
D            1           4
D-           0           1
F            0           5


University-based polls make up only 23 percent of all polls, but they are overrepresented among those with an "A" grade (44.4 percent) and an "A-" (38.5 percent), and about at par for an "A+" (20 percent). At the bottom end, at "D+," they're also slightly overrepresented (27.3 percent). But no university-based poll got an "F," which helps their GPA (which I didn't try to compute because, dammit, I'm lazy). Overall, university-based polls do OK by comparison, being overrepresented among the better grades -- those already mentioned, plus "B," "B-," and "B+." Not getting an "F" helps too.
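If you want to check my arithmetic, here's a short pandas sketch computing the university share at each grade from the table above:

```python
import pandas as pd

# Grade counts from the 538-derived table above.
grades = pd.DataFrame({
    "univ":     [1, 4, 5, 13, 10, 17, 19, 10, 4, 3, 1, 0, 0],
    "non_univ": [4, 5, 8, 20, 28, 49, 78, 54, 22, 8, 4, 1, 5],
}, index=["A+", "A", "A-", "B+", "B", "B-", "C+",
          "C", "C-", "D+", "D", "D-", "F"])

# University share of polls at each grade, vs. ~23 percent overall.
share = grades["univ"] / grades.sum(axis=1) * 100
print(share.round(1))  # A: 44.4, A-: 38.5, D+: 27.3, etc.
```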

So, in all, not half bad. Call it ... slightly above average.





Wednesday, June 8, 2016

Could Trump Break the "Afraid" Record?

In a survey last month, 47 percent said they were "scared" of Donald Trump as the Republican nominee. That's interesting, but let's put it in context: no Republican candidate has scored that high on the "afraid" scale in recent elections. See the graphic below.


The ANES data can be found here. And yes, the comparison is a bit apples-and-oranges, given the nature of the ANES academic surveys versus one-shot commercial surveys, but you get the idea. The highest percentage of people "afraid" of the GOP candidate was 43 percent, in 2004. My hunch is that Trump will break the "afraid" record once ANES collects its 2016 data this Fall.

Oh, but how about the Dems, you ask? The Dem data are here. The highest was for Barack Obama in 2012 (34 percent).

I also wonder whether Trump will break another ANES record, on the question of how "knowledgeable" you perceive the candidate to be. In this case, you'd bet he may set a low record. For GOP candidates, according to ANES data, the lowest percentage saying "knowledgeable" described the candidate "extremely well" was 16 percent, for George W. Bush in 2000 (Gore was at 25 percent). The lowest ever for a Dem since 1980 was 15 percent, for Michael Dukakis in 1988. I'm betting Trump, when ANES runs its surveys, takes the record from Bush and comes in around 15 percent.

From a PhDweeby perspective, we call these affect (angry, hopeful, etc.) and traits (knowledgeable, honest, etc.). Neither is terribly predictive of the vote, at least not consistently across elections. You can find a whole list of them via the ANES data here; just scroll down to #7. I'm at home, otherwise I'd access the raw data and have even more fun.




Tuesday, June 7, 2016

Student Comments

I'm going to do something here you rarely see a prof do -- share student evaluations. Normally there'd be no good reason to do this, but I was teaching a new class in our new curriculum, a large lecture class called Information Gathering. Essentially I lectured a hundred or so students on how to find stuff out. You can see a version of the class here. Scroll down the calendar to get a sense of what we did, then return here.

I learned a lot from the comments, stuff I need to add in the Fall. Here are some stats based on evaluations by 85 of the 106 students enrolled. Below the stats I include some of my favorite comments by students (good and bad). The stats below are on a 1-to-5 scale, with 5 being good (agreement).
  • The instructor knows the subject matter. (4.8)
  • The instructor appeared to be thoroughly competent in his/her area. (4.8)
  • The course was well organized. (4.2)
  • The instructor was enthusiastic when presenting course material. (4.6)
  • I have become more competent in this area because of this course. (4.5)
  • I felt that this course challenged me intellectually. (3.8)
There are lots of other questions, but the results reflect what you see above. So I'm competent, but I need more rigor in the class. I wasn't sure about this, given the large lecture format, but one thing that pops out in the comments is that students would like more hands-on exercises. I have plans to do just that. I had three such exercises in Spring, the first semester of the class. For example, one of them required students to background a non-profit. I taught them how to analyze a Form 990 via GuideStar, then had them pluck a non-profit from their hometown and upload the results. The issue with such exercises, of course, is someone has to go through the hundred or so uploads. That's why God made graduate students, I suppose.

For comments, students are asked what they liked most and liked least, forcing them to come up with both positive and negative comments. A third question asked for any additional comments but most didn't bother. Below, a greatest hits.

Liked Best

Dr. Hollander made the material interesting and exciting. I can tell that he knows his stuff, and I enjoy hearing from him about his experiences in the field.

He had lots of examples of how to use the tools he taught us.

Hollander's a smart ass and a jack ass, but he knows what he's talking about and his anecdotal teaching is entertaining and genuinely helpful.

Hollander was a fantastic lecturer. Every morning I was very motivated to go to class because he made class very enjoyable.

OK, you get the idea. Blah blah blah. Fun guy, if you can take any class this semester, make it this one. Now let's get to the fun stuff, what they didn't like about the class.

Liked Least

The weight of tests- more class assignments would help

Not enough assignments where we can practice what we learned.

The thing I liked least was that we couldn't use our computers to take notes, but I also understand because if we could use them about 50% of students would be doing other things rather than actually taking notes.

Looking back, I think Professor Hollander could have had more in class discussions. I feel like we did not get to discuss things as much as I would have liked.

So there's a lot to be gained from these comments, especially on a new class. Anticipate more class discussions in Fall, more out-of-class assignments. I've already got a few lined up and I'm giving thought to a semester-long portfolio in which each student has to background stuff in their hometown based on what I've taught them. Gotta give that one some more thought, but it's not a bad idea and perhaps it could be the equivalent of a test grade. Thoughts?



Wednesday, June 1, 2016

Who Will Win?

It's early yet, but as far as I can tell few if any polling firms are asking my favorite presidential election horserace question -- who's gonna win?

UPDATED: I did find a CNN question that asks this recently.
See bottom for details on Clinton vs. Trump.

This is different from asking a respondent who he or she supports.

Add those up -- the aggregation of individual opinion -- and you get what we generally define as public opinion. Here I'm talking about a related but very different question, in which respondents are asked who they think will win. In other words: predict the outcome for me.

Why ask this? One, it's interesting, and two, there's evidence this is better at predicting an election outcome than the traditional question of who people prefer. I'll discuss that later in the post. First, below are some of the question wordings, with the polling firm or sponsor that used it in the 2012 U.S. presidential election in parentheses:
  • Just your best guess, who do you think will win the presidential election this year: Obama or Romney? (ABC/Washington Post)
  • Regardless of who you might support, who do you think is most likely to win the presidential election? (Pew Research Center)
  • Regardless of whom you support, and trying to be as objective as possible, who do you think will win the election in November: Barack Obama or Mitt Romney? (CNN/ORC)
Below are the answers I could find from the 2012 election. My comments follow. Numbers are percent. I collapsed the Other/Unsure into a single category.



             Obama Win   Romney Win   Other
November
   CNN/ORC       57          36          7
   ABC/WaPo      55          35         10
October
   ABC/WaPo      57          33         10
   Pew #1        52          30         18
   Pew #2        48          31         19
September
   ABC/WaPo      63          31          6
   Pew           53          24         23
August
   CNN/ORC       61          35          5
   ABC/WaPo      56          37          8
July
   ABC/WaPo      58          34          9
June
   Pew           52          34         15
April
   CNN/ORC       61          35          5


As you can see, at no point did people expect Romney to win. Sure, there are differences, such as Pew's higher "unsure" or "other" numbers, but the results are fairly consistent all the way back to April 2012. Yes, in a few polls Romney closed the gap on the traditional "who will you vote for" question, but those polls were outliers (as the election itself proved), and there's a lot to be said for people's expectations.

Why?

First, lemme note that the two are highly correlated. People tend to believe their own candidate will win. There's even an academic name for it: wishful thinking. Yes, there's research on it, and yes, I've published stuff on the phenomenon myself. But people also have a good sense of inevitability, and asking them to predict an outcome, although biased by their own predispositions and selective exposure to likeminded others, taps their sense of the opinion climate. In other words, they're often more truthful in predicting the outcome than in reporting who they'll actually vote for.

My own hunch -- this measures the private leanings of the unsure, the undecided, the ones not completely buying into one candidate or another. Yes, when asked for a preference they give one, but it appears to me when time comes to predict who will win, they tap something different. Something more honest, perhaps, or more accurate and reflective of what they'll do in the privacy of the ballot box.

As I said above, so far I've seen few Trump-Clinton questions (if any) posed this way, in part because Trump just captured the GOP nomination and Clinton remains tied up with Sanders. I figure after June 7 we'll see a few more, and then of course a flood of 'em once the convention season hits.

All data drawn from this PollingReport page.

Update

Turns out CNN did ask this question in a March 2016 survey, but I somehow missed Question 26 on this survey. The results are:

Hillary Clinton 56%
Donald Trump 42%
No opinion 2%

What does this tell us? The same poll's preference question, pulled from PollingReport, had it Clinton 51 percent, Trump 41 percent, with 6 percent either saying "neither" or undecided. So what we have is Clinton adding 5 percentage points to her total on the "who's gonna win" question as compared to the "who ya for" question. I'll do more on this another day, but we know the predictive power of asking who's gonna win often tops the traditional presidential preference question. Yes, I know, elections are by state, not national. I get that. But it's fun nonetheless.