Yet another published social-science/humanities study has been formally retracted. This one got huge attention from secular (of course!) media for supposedly showing that religious children were less generous and altruistic.
Fortunately, other scientists, who had been doing similar work with different data, and whose results (along with prior studies) showed much the opposite, didn’t buy it. They suspected something was awry with those results…rightly so! They asked for the study data, got it, reproduced the analyses, and found that the results were indeed bad, due to a simple coding error. See the link I provided above for more details on how this mess happened.
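For the curious: one classic way such a coding error happens, and reportedly what happened here, is a categorical country code getting fed into a regression as if it were an ordinary number. Below is a minimal, hypothetical Python sketch of how that kind of miscoding can manufacture a spurious effect out of nothing but country-level differences. Every name and number in it is invented for illustration, not taken from the actual study:

```python
# Hypothetical illustration only: the variables, countries, and effect sizes
# are invented. Only the *kind* of error (a categorical country code treated
# as a continuous number) mirrors what the reanalysis reportedly found.
import numpy as np

rng = np.random.default_rng(42)
n = 3000

# Six countries, identified by arbitrary labels 1..6.
country = rng.integers(1, 7, size=n)

# Religiosity rates differ by country (a confound, by construction).
p_religious = np.array([0.8, 0.3, 0.9, 0.2, 0.5, 0.85])
religious = (rng.random(n) < p_religious[country - 1]).astype(float)

# True model: generosity depends only on country baselines, NOT religiosity.
# Baselines are deliberately non-monotonic in the arbitrary country label.
baseline = np.array([3.0, 5.5, 2.0, 6.0, 4.0, 1.5])
generosity = baseline[country - 1] + rng.normal(0.0, 1.0, size=n)

# WRONG: the country label enters the regression as a single number, so a
# linear term cannot absorb the country-level differences; the leftover
# confound loads onto the religiosity coefficient.
X_bad = np.column_stack([np.ones(n), country.astype(float), religious])
coef_bad, *_ = np.linalg.lstsq(X_bad, generosity, rcond=None)

# RIGHT: dummy-code country (one indicator per country, first one dropped),
# which fully controls the country baselines.
dummies = (country[:, None] == np.arange(2, 7)).astype(float)
X_good = np.column_stack([np.ones(n), dummies, religious])
coef_good, *_ = np.linalg.lstsq(X_good, generosity, rcond=None)

print(f"religiosity effect, miscoded country:    {coef_bad[-1]:+.2f}")
print(f"religiosity effect, dummy-coded country: {coef_good[-1]:+.2f}")
```

In this toy setup the miscoded model reports a sizable negative “religiosity effect” that simply does not exist, while the properly dummy-coded model recovers the true effect of roughly zero. One mislabeled column, and the headline writes itself.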
Another festering sore has appeared on the face of peer review as a practice, of science as a whole, and of the “soft science” humanities in particular. The lessons are many, and include:
- Peer review is good, but not always good enough, and not just in the “soft sciences” either! It can miss important analytic flaws, as I have seen first-hand in physical (atmospheric) science. I often find minor errors in published papers, and mostly e-mail the authors directly about them. Once, however, the problems were so numerous and crucial that I lead-authored a published rebuttal of many points in a formally peer-reviewed work that had appeared in an AMS journal.
- Peer review in a highly specialized science can be insular, even incestuous. Reviewers are busy and can slack off on details. Reviewers often know the people whose work they’re reviewing, or can be awed by admiration for a respected name, and let stuff slide. It helps to have at least one anal-retentive reviewer willing to re-analyze data, or at least to question findings that don’t make sense. Obviously this paper’s reviewers found nothing suspicious, perhaps because their own preconceived biases about religious people kept them from finding such results puzzling.
- Evaluate the biases of the media that promote scientific results, especially in more subjective fields like the humanities and social science, but really, in any science. What sociopolitical slants are evident in their other stories, including in the choice of subjects whose scientific results they publicize?
- Don’t count on mass media to promote retractions to more than a tiny, token fraction of the extent to which they pushed the original, likely agenda-conforming results. Sometimes a retraction will be noted on a deep page (physical or digital) as a token gesture. Keep up with the latest retraction news on reputable, devoted watchdog sites.
- In science, reproducibility is paramount! The loudest alarm bells should go off when relatively recent (less than a decade or so) studies cannot be reproduced, especially if they involve unavailable data. In decades of yore, data were in hardcopy or primitive digital form, and often got lost, burned, flooded, heaved into the dumpster, etc. Today, with data storage, archival, and retrieval capacity smashing records monthly, and with redundant and cloud storage commonplace, there is little, if any, valid excuse for data not being there. This includes so-called “proprietary” data. Science is about openness, and all research data should be made available upon request, in service of the fundamental scientific tenet of reproducing analyses and results.