It's no surprise that, in some instances, media use can moderate the public's perceptions of science, as was found in a new paper published in the latest issue of the
Journal of Media and Religion. This is one of those odd pieces of research I want to break down a bit. First, here's an interesting finding:
"A post-hoc inspection of the tables revealed that, in several cases, the introduction of media use as an independent factor rendered significant relationships between religion and views of science insignificant."
I'm a bit surprised that media use toppled the predictive power of religiosity. In other words, media use explains the same variance as religiosity and explains it better. In even more other words, the negative relationship between religious activities and what one thinks of science sometimes disappears when considering media use.
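That "same variance" point is easy to see in a toy simulation. Everything below is made up for illustration -- none of it is the paper's data -- but it shows how a zero-order religiosity effect can vanish once a variable that drives both measures enters the model:

```python
# Hypothetical simulation: media use drives both religiosity and science
# attitudes, so religiosity "predicts" science attitudes only until media
# use is controlled for. All coefficients here are invented.
import numpy as np

rng = np.random.default_rng(42)
n = 5000
media = rng.standard_normal(n)                        # media use (standardized)
relig = -0.7 * media + 0.7 * rng.standard_normal(n)   # religiosity, tied to media use
science = 0.6 * media + rng.standard_normal(n)        # attitudes driven by media use only

def slopes(predictors, y):
    """OLS coefficients (intercept dropped) via least squares."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

(b_alone,) = slopes([relig], science)            # religiosity by itself
b_ctrl, b_media = slopes([relig, media], science)  # religiosity, controlling for media

print(f"religiosity alone:          {b_alone:+.3f}")  # clearly negative
print(f"religiosity | media use:    {b_ctrl:+.3f}")   # near zero
```

Run it and the zero-order slope is solidly negative while the controlled slope hovers around zero -- the pattern the quoted finding describes.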
I mentioned above that this is an odd study. It uses GSS data, which is terrific, but it provides a long set of tables with very simple ANOVAs of "religious activity" and various media variables. Table 12, for example, is religious activity and Internet use. In the table we get a Mean Square -- unusual -- to the thousandth, as well as the F-score and p level (again, to the thousandth: unnecessary false precision). I'll fuss at the editor about this sometime. In full disclosure, I'm on the journal's Editorial Board, but I don't believe I reviewed this manuscript -- or if I did, the authors did nothing I suggested. I read a lot of stuff for a lot of journals.
Anyway, the analysis strategy here should have been, given the survey data, an analysis of covariance to control for various socio-demographic factors that might better explain the results. And some of the results are just curious, such as an F-score of 1.021 (an F near 1.0 is unremarkable) reported as significant at the .033 level. Here's what's worse: the authors report the p level of .033 (or whatever it happens to be) and then add asterisks to tell us -- shockingly -- that .033 is less than .05.
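You can check how curious that F-score is in one line. The degrees of freedom below are my assumption, not the paper's (I'm guessing a one-way factor against a large GSS sample), but the point survives almost any plausible choice:

```python
# Sanity check: an F of 1.021 with 1 and ~1500 degrees of freedom
# (assumed df -- the paper's aren't quoted here) is nowhere near p = .033.
from scipy import stats

F = 1.021
p = stats.f.sf(F, dfn=1, dfd=1500)   # survival function = upper-tail p
print(f"p for F = {F}: {p:.3f}")     # roughly .31, not .033
```

With those df the p-value comes out around .31 -- an order of magnitude away from the reported .033, which is why the pairing looks wrong on its face.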
C'mon guys.
I'm being picky, I know, but the jillion or so hypotheses are without any real foundation in the literature. They almost seem written post-hoc, after the analyses. I've spent years doing this stuff and predicting interaction effects is tricky and, often, theoretically messy.
And how can it be a hypothesis to predict a significant interaction without predicting the direction of the relationship? Additive? Moderating influence?
This is a paper with potential, but it unravels on a number of fronts. That said, there's something to be learned here and there's a reason it was published. I just wish there'd been more of a revise and resubmit.