Friday, June 24, 2016

Science

Should science be used to solve our problems? The easy answer seems yes, but not everyone agrees. There's this question in the 2012 ANES:
When trying to solve important problems, how often should the government rely on scientific approaches? 
First, let's look at the distribution of responses. I have weighted the data to reflect the general population parameters, so this differs from the raw numbers and percents.
  • Never (239, or 4.4 percent)
  • Some of the Time (1,807, or 33.6 percent)
  • About Half of the Time (1,351, or 25.1 percent)
  • Most of the Time (1,471, or 27.3 percent)
  • Always (511, or 9.5 percent)
If you score this as a 1-to-5 variable, with 1 being "never" and 5 "always," you get a mean of 3.0 and standard deviation of 1.1. It has a reasonable distribution. 
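For the curious, you can reproduce that mean and standard deviation straight from the weighted counts above with a few lines of Python:

```python
# Weighted category counts from the 2012 ANES item above (1 = never, 5 = always).
counts = {1: 239, 2: 1807, 3: 1351, 4: 1471, 5: 511}

n = sum(counts.values())
mean = sum(score * k for score, k in counts.items()) / n
var = sum(k * (score - mean) ** 2 for score, k in counts.items()) / n
sd = var ** 0.5

print(round(mean, 1), round(sd, 1))  # 3.0 1.1
```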

As you'd expect, certain factors are negatively associated with a belief in science being used to solve our problems. Below are some correlations. To explain, a negative number means the greater that variable, the less you support scientific approaches to solve problems. A positive coefficient means the greater that variable, the more you support such approaches. Below the table, my comments.

Variable                      Correlation
Born-Again Christian            -.15*
Attend Religious Services       -.16*
Pray                            -.15*
Religion as Guide               -.17*
Literal Bible                   -.19*

Age                             -.09*
Education                        .23*
Income                           .09*

Party ID (GOP high)             -.14*
Ideology (conservative)         -.22*

TV News Exposure                -.06*
Fox News Viewing                -.11*
Internet News                    .13*
Paper Newspaper Read            -.01

Vocabulary                       .17*

Asterisks signify statistical significance. The first batch of variables are the obvious ones -- religion -- and all are negatively associated with the notion that government should rely on scientific solutions to solve our problems. No big surprise that the more you believe in a literal interpretation of the Bible, for example, the less you believe in government using scientific approaches to solve problems.

The next small group is demographic. Older respondents are less trusting of science, while those with more education or income prefer a scientific approach. Then come partisanship and ideology, and it's no surprise that the more Republican or conservative you are, the less enamored you are with scientific approaches. The media variables demonstrate the difference between those who rely on television news (especially Fox News) and those who rely on Internet-based news. Finally, the "Vocabulary" variable is just what it sounds like, a test of one's vocabulary. No surprise that those who score higher on it are also more willing to use scientific approaches.

I'm just messing with data, trying to decide whether a more comprehensive analysis is justified. I can say a quick-and-dirty regression analysis finds some factors drop out (income, prayer, television news) when controlling for all the other factors. The single most powerful predictor is education (beta = .17, p<.001). The most powerful negative predictor is ideology (beta = -.12, p<.001). Even with all these controls, watching Fox News remains a negative predictor (beta = -.04, p<.01) and Internet news reading a positive predictor (beta = .05, p<.001).
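If the betas above look mysterious: a standardized beta is just the OLS slope you get after z-scoring every variable. Here's a minimal sketch on simulated data (not the ANES file; the variable names and effect sizes are only illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Fake predictors and a fake outcome built so that education helps and
# ideology hurts, mirroring the pattern reported above.
educ = rng.normal(size=n)
ideol = rng.normal(size=n)
y = 0.17 * educ - 0.12 * ideol + rng.normal(size=n)

def zscore(v):
    return (v - v.mean()) / v.std()

# OLS on z-scored variables yields standardized betas.
X = np.column_stack([zscore(educ), zscore(ideol)])
betas, *_ = np.linalg.lstsq(X, zscore(y), rcond=None)
print(betas)  # signs should match the story: positive for education, negative for ideology
```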

Oh, and just for fun, I found that watching The Big Bang Theory is unrelated to a belief in whether the government should use scientific methods to solve problems. No idea what that means. Just passing it along.

Addendum

I calculated an index based on four items designed to measure moral traditionalism. For example, one question asks
The world is always changing and we should adjust our view of moral behavior to those changes.
The other items are similar and they all hang together nicely as an index (Cronbach's Alpha = .71 for you stats nerds out there). The correlation between moral traditionalism and science to solve problems is -.26, p<.001, stronger than any item above. In the regression analysis it dominates compared to other factors, with beta = -.14, p<.001.
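For the same stats nerds: Cronbach's alpha is easy to compute by hand. A sketch with toy data (these numbers are made up, not the actual ANES moral-traditionalism items):

```python
import numpy as np

# Cronbach's alpha for a k-item scale:
# alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale)
def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)  # rows = respondents, cols = items
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Four respondents answering four items nearly in lockstep -> high alpha.
toy = [[1, 2, 1, 2], [2, 2, 3, 2], [4, 4, 4, 5], [5, 4, 5, 5]]
print(round(cronbach_alpha(toy), 2))
```

Items that "hang together," as the four here do, push alpha toward 1; uncorrelated items drag it toward 0.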







Tuesday, June 21, 2016

Trump Doubt

A fresh CNN/ORC poll is out and there's no real change in who people predict will win the 2016 U.S. presidential election. The poll asked this question in a poll in the field March 17-20 and again a few days ago (June 16-19). "Regardless of who you support," it asks, "and trying to be as objective as possible, who do you think will win the election in November if these are the candidates on the ballot -- Hillary Clinton or Donald Trump?"


              Clinton Win   Trump Win   Other, etc.
June Poll          55           38           6
March Poll         56           42           2

So essentially no change in expectations of a Clinton win, but a 4-percentage-point drop in predictions of a Trump victory. This could be the tough campaign stretch Trump suffered, shaking not only some of his support but also predictions he can eventually pull it off. Call this Trump Doubt.

As I wrote yesterday in more detail about the March poll, you can break these down further. The new poll also includes, deep down, crosstabs on this question by gender, political party, age, etc. Here's what jumps out at me in comparing the two polls via their crosstabs:

Among Trump supporters, the percentage who predicted he would win dropped from 88 percent in March to 78 percent in June. That's startling. 

It's not unusual for people to believe their own candidate will win. Three-fourths of Mitt Romney supporters believed so in 2012. But a 10-percentage-point drop in the belief that he will win says more, in some ways, than merely asking people their preference.

June Poll
March Poll

Monday, June 20, 2016

Who Will Win? Depends on Who You Ask?

I wrote here about survey questions that ask not just who you're going to vote for in November but, more interestingly, who do you think is going to win? Read that entry to understand the subtle and not-so-subtle differences between the two questions. Here, I want to break down the only Clinton v Trump question I've found so far this election cycle, one that was conducted in March by CNN/ORC. Go here if you want to wade through the results, or just follow my summary below and save yourself the pain. The question asked:
Regardless of who you support, and trying to be as objective as possible, who do you think will win the election in November if these are the candidates on the ballot -- Hillary Clinton or Donald Trump?
Fifty-six percent predicted Clinton would win, 42 percent Trump. That's covered in my previous post. Today let's dive deeper to make the point that people believe what they want to believe and are more likely to predict their own candidate will win. See some of the breakdowns below.

  • Among Trump supporters, 92 percent predicted he would win.
  • Among Clinton supporters, 92 percent predicted she would win. 

Let that sink in for a moment. People tend to believe their own candidate will win, a concept us PhDweeby types call wishful thinking. It's been studied in politics, sports, and a few other areas. No real surprise, but important to note. Now:

  • A majority of both men (51 percent) and women (61 percent) predicted Clinton would win.
  • Seventy-two percent of college grads predicted a Clinton win, while 48 percent of non-grads predicted a Clinton win (50 percent predicted Trump).

No doubt the gender differences above reflect Clinton's success with female versus male voters. The education effect likewise reflects differences in the two candidates' bases of support.

  • Among Dems, 87 percent predict a Clinton victory. 
  • Among Republicans, 75 percent predict a Trump victory.
  • Among independents, 53 percent predict Clinton, 47 percent Trump.
What's important above is that fewer Republicans (75 percent) predict a Trump victory than do actual Trump supporters (92 percent). There's a lot going on there, suggesting uneasiness with Trump's campaign or, perhaps, a better sense of electoral reality.

Finally:
  • Urban respondents believe Clinton wins (66-33 percent)
  • Among suburban respondents, it's Clinton (56-44).
  • But, among rural respondents, 53 percent predict Trump will win to 45 percent for Clinton.
I should point out that even seeing, reading, or hearing about polls has little effect in shaking people from their loyalty to their preferred candidate. It's really difficult to break people from this wishful thinking. The more strongly people feel about the candidate or campaign, the more likely they are to fall into this trap. Consider this the Karl Rove effect, for lack of a better name, for his infamous meltdown on Fox News during the 2012 election night.

As I discussed elsewhere, three polling firms traditionally ask these "who's gonna win" questions. Two of them have yet to release any results, though it should be soon, and at least one academic poll always asks this question, but those results won't come out until after the election. That is more useful for academics like myself who want to understand why people engage in wishful thinking and what consequences such beliefs may have, such as being a "surprised loser" in an election.





Tuesday, June 14, 2016

Bad Poll Writing

Please please please, take individual poll stories with all the skepticism you can muster. Take, for example, this new poll that breathlessly reports:
Hillary Clinton's lead over Donald Trump in the U.S. presidential race has narrowed since late last week, according to the results of the first Reuters/Ipsos poll conducted since the Orlando shooting rampage on Sunday.
If you're a Clinton supporter, you're going "Holy crap." If you're behind Trump, you're cheering. If you're a journalist writing a poll story, you should know better.

Let's break it down. The difference is minuscule, from a 13-percentage-point lead to an 11.6-percentage-point lead. That's all of 1.4 percentage points, well within a good poll's margin of error. Except they don't report a margin of error; they report something called:
The online poll included 1,063 likely voters and had a credibility interval, a measure of accuracy, of about 3.5 percentage points.
What the hell is a credibility interval? The polling firm explains it here if you're interested, and a more skeptical look by AAPOR worth the read is here, but it boils down to the fact that the sample in this poll is not random. That matters. It's an opt-in online poll. These are being used more and more often, and they're not necessarily the evil they may seem to be. The 538 poll rankings give Ipsos an "A-," which is damn good. But even if we accept the "credibility interval" as a surrogate for "margin of error," the results are still within that interval. In other words, there's no real difference between the two polls. That makes the hed and lede wrong.
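To see why a 1.4-point shift can't clear that bar, here's the classical 95 percent margin-of-error arithmetic for a single proportion at this sample size (a sketch only; a candidate's lead, being the difference of two shares, carries even more uncertainty than this):

```python
import math

# 95% margin of error for one proportion p in a simple random sample of n.
def moe(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

n = 1063  # sample size reported in the Reuters/Ipsos poll
print(round(100 * moe(0.5, n), 1))  # 3.0 points in the worst case (p = 0.5)
```

A 1.4-point movement against roughly 3 points of sampling noise per poll (or the reported 3.5-point credibility interval) is not news.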




Monday, June 13, 2016

University-Based Polls

There are lots of polls, more perhaps than we need. As I was skimming 538's invaluable rankings of the polls, I noticed a lot of university-based polls and wondered how they do in comparison to others. To test this I downloaded the 538 data and sorted it by those polls clearly from colleges or universities (I may have missed one or two if the words "college" or "university" were not included in the name).

Before we get to my analysis, a few eyeball notes. Monmouth University's poll gets an "A+," the only school-based poll so rewarded in Nate Silver's coding scheme. The lowest-graded university-based polls come from two places -- Brigham Young University and Millersville University, both with "D" grades. I think that counts as a failing grade, even at Brigham Young.

In all, the analysis includes 373 different polling shops, 87 of which were conducted by university-based operations (including a single poll by the University of Georgia, where I teach, graded as a "C" -- ouch). The most common grade for polls, university-based or otherwise, was a "C+." See the grade distribution below.


Grade   Univ Poll   Non-Univ
A+          1           4
A           4           5
A-          5           8
B+         13          20
B          10          28
B-         17          49
C+         19          78
C          10          54
C-          4          22
D+          3           8
D           1           4
D-          0           1
F           0           5


University-based polls make up only 23 percent of all polls, but they are overrepresented among those with an "A" grade (44.4 percent) and an "A-" (38.5 percent), and roughly proportionally represented at "A+" (20 percent). At the bottom end, at D+, they're also slightly overrepresented (27.3 percent). But no university-based poll got an F, which helps their GPA (which I didn't try to compute because, dammit, I'm lazy). Overall, university-based polls do OK by comparison, being overrepresented in the better grades, such as those already mentioned as well as B, B-, and B+. Plus not getting an F helps too.
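Those overrepresentation figures come straight from the table; here's the arithmetic in Python if you want to check me:

```python
# Grade counts from the table above (538 poll rankings, sorted by shop type).
univ = {"A+": 1, "A": 4, "A-": 5, "B+": 13, "B": 10, "B-": 17, "C+": 19,
        "C": 10, "C-": 4, "D+": 3, "D": 1, "D-": 0, "F": 0}
non  = {"A+": 4, "A": 5, "A-": 8, "B+": 20, "B": 28, "B-": 49, "C+": 78,
        "C": 54, "C-": 22, "D+": 8, "D": 4, "D-": 1, "F": 5}

total_univ = sum(univ.values())              # 87 university-based shops
total_all = total_univ + sum(non.values())   # 373 shops in all

# Percent of each grade's polls that are university-based.
share = {g: round(100 * univ[g] / (univ[g] + non[g]), 1)
         for g in univ if univ[g] + non[g] > 0}

print(round(total_univ / total_all, 2))  # 0.23 overall
print(share["A"], share["A-"])           # 44.4 38.5
```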

So, in all, not half bad. Call it ... slightly above average.





Wednesday, June 8, 2016

Could Trump Break the "Afraid" Record?

In a survey last month, 47 percent said they were "scared" of Donald Trump as the Republican nominee. That's interesting, but let's put it in context. No Republican candidate in recent elections has scored so high on the "afraid" scale. See the graphic below.


The ANES data can be found here. And yes, the comparison is a bit apples-and-oranges, given the nature of the academic ANES surveys versus one-shot commercial surveys, but you get the idea. The highest percentage of people being "afraid" of the GOP candidate was 43 percent in 2004. My hunch is that Trump will break the "afraid" record once ANES collects its 2016 data this Fall.

Oh, but how about Dems, you ask? The Dem data is here. The highest was for Barack Obama in 2012 (34 percent).

I also wonder whether Trump will break another ANES record, on the question of how "knowledgeable" you perceive the candidate to be. In this case, you'd bet he may set a record low. For GOP candidates, according to ANES data, the lowest share saying "knowledgeable" described the candidate "extremely well" was 16 percent, for George W. Bush in 2000 (Gore's was 25 percent). The lowest ever for a Dem since 1980 was 15 percent, for Michael Dukakis in 1988. I'm betting Trump, when ANES runs its surveys, takes the record from Bush and comes in around 15 percent.

From a PhDweeby perspective, we call this affect (angry, hopeful, etc.) and traits (knowledgeable, honest, etc.). Neither are terribly predictive of vote, at least not consistently across elections. You can find a whole list of them via the ANES data here, just scroll down to #7. I'm at home, otherwise I'd access the raw data and have even more fun.




Tuesday, June 7, 2016

Student Comments

I'm going to do something here you rarely see a prof do -- share student evaluations. Normally there'd be no good reason to do this, but I was teaching a new class in our new curriculum, a large lecture class called Information Gathering. Essentially I lectured a hundred or so students on how to find stuff out. You can see a version of the class here. Scroll down the calendar to get a sense of what we did, then return here.

I learned a lot from the comments, stuff I need to add in the Fall. Here are some stats based on evaluations from 85 of the 106 students enrolled. Below the stats I include some of my favorite comments by students (good and bad). The stats below are on a 1-to-5 scale, with 5 being good (agreement).
  • The instructor knows the subject matter. (4.8)
  • The instructor appeared to be thoroughly competent in his/her area. (4.8)
  • The course was well organized. (4.2)
  • The instructor was enthusiastic when presenting course material. (4.6)
  • I have become more competent in this area because of this course. (4.5)
  • I felt that this course challenged me intellectually. (3.8)
There are lots of other questions, but the results reflect what you see above. So I'm competent, but I need more rigor in the class. I wasn't sure about that, given the large lecture format, but one thing that pops out in the comments is that students would like more hands-on exercises. I have plans to do just that. I had three such exercises in Spring, the first semester of the class. For example, one of them required students to background a non-profit. I taught them how to analyze Form 990 via GuideStar, then had them pluck a non-profit from their hometown and upload the results. The issue with such exercises, of course, is someone has to go through the hundred or so uploads. That's why God made graduate students, I suppose.

For comments, students are asked what they liked most and liked least, forcing them to come up with both positive and negative comments. A third question asked for any additional comments but most didn't bother. Below, a greatest hits.

Liked Best

Dr. Hollander made the material interesting and exciting. I can tell that he knows his stuff, and I enjoy hearing from him about his experiences in the field.

He had lots of examples of how to use the tools he taught us.

Hollander's a smart ass and a jack ass, but he knows what he's talking about and his anecdotal teaching is entertaining and genuinely helpful.

Hollander was a fantastic lecturer. Every morning I was very motivated to go to class because he made class very enjoyable.

OK, you get the idea. Blah blah blah. Fun guy, if you can take any class this semester, make it this one. Now let's get to the fun stuff, what they didn't like about the class.

Liked Least

The weight of tests- more class assignments would help

Not enough assignments where we can practice what we learned.

The thing I liked least was that we couldn't use our computers to take notes, but I also understand because if we could use them about 50% of students would be doing other things rather than actually taking notes.

Looking back, I think Professor Hollander could have had more in class discussions. I feel like we did not get to discuss things as much as I would have liked.

So there's a lot to be gained from these comments, especially on a new class. Anticipate more class discussions in Fall, more out-of-class assignments. I've already got a few lined up and I'm giving thought to a semester-long portfolio in which each student has to background stuff in their hometown based on what I've taught them. Gotta give that one some more thought, but it's not a bad idea and perhaps it could be the equivalent of a test grade. Thoughts?