Here's a bit of interesting methodology, or analysis strategy if you like:
"Responses to belief questions were categorized, using theoretically derived categories, remodeled and confirmed through factor analysis, into five main categories: belief in life on other planets, faith-based beliefs, belief in unscientific phenomena, general attitude toward science and technology, and ethical considerations."

Okay, I can buy this. And then:
"Analysis revealed that demographic information explained less than 10% of the overall variance in students’ forced-answer scientific literacy scores."

That's a bit surprising. In plain English, it means basic demographics -- sex, age, race, or whatever they included -- didn't really separate those who know a lot from those who know a little. Then again, we're not talking about a general population, where there would be huge demographic differences, but about students, who may differ, but let's face it -- they're not real people.
And the big result:
"We present how students’ beliefs in these categories relate to their scientific literacy scores."

Argh! I'd love to know how they relate, but apparently you have to either wait for the movie or attend the conference where this is being presented.
And I've never seen this before in an abstract:
"Stop by our poster and fill out a new survey that will give us important parallel information to help us continue to analyze our valuable data set."

Weird. It makes sense, I've just never seen someone do this in a research abstract. May have to try it myself some time.