Just because a paper has been published formally doesn’t mean it’s worth a damn.
The recent publication of papers with themes akin to “tornado-preventing walls” and “smoke makes tornadoes worse because we think it did on one outbreak day” underscores this point. [No, I’m not going to feed them clicks by linking to them.] When papers are published in journals not focused on their subject matter, they tend to get poorly qualified reviewers, and many flaws slip through. Publish meteorology papers in a physics journal (or vice versa) and the likelihood of the best-qualified possible reviews goes way down.
A paper I recently reviewed cites one such source. I reckon the author(s) were surprised to see this in my review (names removed):
- “xxxx and xxxx (20__) is a scientifically deficient and poorly written paper—one I’m quite sure would not be accepted to an AMS, EJSSM or NWA journal in the form it was published in [the overseas journal]. I don’t have room here, nor is it appropriate in this space, to go into all the details, but for starters, it uses unsupported classifications, obsolete metrics, overgeneralized hand-waving (e.g., “dynamics”), misuses that and other terminology in ways that indicate basic lack of understanding, fails to document errors and uncertainties in the data, and is suffused with banal fluff. This citation can be dropped with no harm at all.”
At least the fix here is simple: drop the one citation to that paper–which I read, and which is useless rubbish. Easy enough…but what else gets through unnoticed by reviewers who are not careful because they are busy, distracted, superficial in their reviews, or ignorant of the depth of the subject matter? How many reviewers will seek out and read unfamiliar papers cited by the authors of articles they are reviewing? Anecdotal evidence suggests it is a minority, and that’s very unfortunate.
Thankfully I recently had the opportunity to review a weather (not climate) paper submitted to an unnamed overseas climate journal by unnamed U.S. author(s) [keeping things unnamed so as to not soil the integrity of the review process], and recommended rejection for one reason: it is totally out of scope. The paper presents prediction methods for a specific short-fused, localized severe-weather phenomenon, based on U.S. cases, and should be submitted to a U.S. weather journal read by operational forecasters (EJSSM, WAF, or NWA EJOM). The paper itself actually is reasonably good, but in the wrong place. I don’t know yet whether the editor agreed or went ahead and decided to publish a paper completely outside the journal’s scope. If the latter, chalk up another one to the misfit brigade.
I’m an editor of a hazardous-weather journal that accepts papers on supercells, hail, lightning, fire weather, extreme winds, winter storms and other acutely dangerous weather. There is no chance I would put into review a submission on Antarctic ice-albedo radiation processes, climatology of nighttime temperatures in a city, chemistry of methane bubbles in Siberia, climate-change effects on wildlife, or electromagnetism of aurorae–regardless of how high-quality it looked, and even if it would boost the journal’s published-paper metrics. Instead I’d try to find the right journal(s) and make that recommendation to the author(s). The reason is simple, regardless of the paper’s worth in its realm–the subject matter is not within the journal’s scope. I get the sense that these (mainly overseas) journals are taking out-of-scope and sometimes very poor-quality papers just to boost publicity and bean counts.
What about relevance? What about quality over quantity? Are bean counts and number-based ratings all that matter to some journals, and to the academic departments that rate faculty? Apparently so–and if so, a pox on ’em. Screw paper and citation counts…too many mindless artifices are involved! What about how good those papers are?
Quality is harder to judge. It takes time, reading and work–work that reviewers, journal editors and faculty evaluators alike need to be doing to uphold the integrity of the science!