This short series (go to Part 1) arises from the recently published paper, “The Evolving Role of Humans in Weather Prediction and Communication”. Please read the paper first.
The authors briefly mention the need for forecasters to avoid the temptation to get lazy and regurgitate increasingly accurate and complex postprocessed output. I’m so glad they did, agree fully, and would have hammered the point even harder. That temptation will only grow stronger as guidance gets better (but never perfect). To use an example from my workplace, perhaps in 2022 we’re arriving at the point where an outlook forecaster can draw probabilities around ensemble-based (2005 essay), ML-informed, calibrated, probabilistic severe guidance most of the time and be “good enough for government work.”
Yet we strive for something higher: excellence. That necessarily means understanding both the algorithms behind such output and the meteorology of each situation well enough to know when and why the guidance can go wrong, then adapting both the forecast and its communication accordingly. How much of the improvement curve of models and output vs. human forecasters is due to human complacency, even if unconscious? By that I mean a flattening, or even ramping down, of the effort put into situational understanding, through inattention and detachment (see Part 1).
It’s not only an effect of model improvement, but also of degradation of human forecast thinking by a combination of procedurally forced distraction, lack of focused training on meteorological attentiveness, and, to be brutally honest, culturally deteriorating work ethic. I don’t know how we resolve the latter, except to set positive examples for how, and why, effort matters.
As with all guidance, from the early barotropic and primitive-equation models to the ML-based output of today and tomorrow: these are tools, not crutches. Forecasters who overdepend on them, lulled into a false sense of security by their marginally superior performance much of the time, let that complacency atrophy deep situational understanding. That atrophy invites both job automation and something perhaps worse: missing an extreme and deadly outlier event of the sort most poorly sampled by ML training data.
Tools, not crutches! Air France 447 offers a frightening, real-world, mass-casualty example of this lesson, in another field. Were I reviewing the Stuart et al. AMS paper, I would have insisted on that example being included, to drive a subtly made point much more forcefully.
The human-effort plateau is hidden in the objective verification because the models are improving, so net “forecast verification” appears to improve even if forecasters generally just regurgitate guidance and move on ASAP to the next social-media blast-up. Misses of rare events get averaged out or smoothed away in bulk statistics, so we still look misleadingly good in the metrics that matter to bureaucrats. That’s masking a very important problem.
Skill still isn’t where it should be, and could be, if human forecasters were as fully plugged into physical reasoning as their brain capacity allows. The human/model skill gap has shrunk, and continues to shrink, only in part because of model improvements; human complacency accounts for the rest. Again, this won’t manifest in publicly advertised verification metrics, which will smooth out the problem and appear fine, since the model-human combination appears to be getting better. Appearances deceive!
The problem of excess human comfort with, and overreliance on, automation will manifest as one or more specific, deadly, “outlier” event forecasts, botched by uncritical adherence to suddenly flawed automated guidance: the meteorological equivalent of Air France 447. This will blow up on us as professionals when forecasters draw around calibrated-guidance lines 875 times with no problem, then on the 876th, mis-forecast some notorious, deadly, economically disastrous, rare event because “the guidance didn’t show it.”
That disaster will be masked in bulk forecast verification statistics, which shall be of little consolation to the grieving survivors.
Consider yourself warned, and learn and prepare accordingly as a forecaster!
More in forthcoming Part 3…