Dangerous Crutch of Automation in Weather Forecasting

In the last installment of “Scattershooting”, I offered the following short take regarding the evolving role of humans in the operational forecasting process. I’m reproducing it here for additional elaboration, based on input received since:

“IDSS” — a bureaucratic acronym standing for “Integrated Decision Support Services” — is the pop-fad buzzword in today’s public weather services. Behind all the frivolous sloganeering (“Weather Ready Nation”) and even more needless managerial lingo bingo, is a nugget of extreme truth: the science of operational forecasting must improve its messaging and communications capabilities. But it is possible to swing the pendulum too far, to turn professional forecasters into mere weather briefers (full-time IDSS) and let “forecasts” be straight model output. That would be a degradation of service, because models still cannot forecast multivariate, greatest-hazard events in the short term (tornado risks, winter weather, hurricane behavior, etc.) as well as human forecasters who are diligently analyzing and understanding the atmosphere. Today’s tornado and hail threat is not the same animal as single-variable temperature grids, and is far, far, far, far, far more important and impactful.

There is still a huge role in the forecast process for human understanding. Understanding has both physical and empirical bases — the science and art of forecasting. Models of course are great tools — when understood and used well. However, models themselves do not possess understanding. Moreover, models do not issue forecasts; people do. A truly excellent forecast is much more than regurgitation of numerical output in a tidy package. It is an assembly of prognostic and diagnostic understanding into a form that is communicated well to customers. A forecast poorly understood cannot be communicated well. And a bad forecast that is masterfully conveyed and understood is a disservice regardless. Eloquent communication of a crappy forecast is akin to spray-painting a turd gold. It is still a smelly turd.

Solution? Get the forecast right. All else will fall into place. Credibility is top priority! As for automation, it will proceed at the pace forecasters permit their own skills to atrophy, both individually and collectively. For those unfamiliar as to how, look up the prescient meteorological prophet Len Snellman and his term, “meteorological cancer”.

Now the elaboration:

Patrick Marsh supplied a remarkably insightful article on the role automation and resulting ignorance played in plunging Air France Flight 447 (Rio to Paris) into the Atlantic: Crash: How Computers Are Setting Us Up for Disaster. Please read that before proceeding; somewhat long but certainly riveting, the article should be well worth your time.

This has important ramifications for hazardous-weather forecasting too. It fits in nicely with a lot of concepts in meteorology, as a whole, about which I have been thinking and writing for many years (e.g., here, and here, and here, and here), as have even more-accomplished scientists (e.g., here). Let’s hope the “powers that be” read these lessons and pay attention.

In the meantime, even though it has nothing directly to do with severe midlatitude storms or other hazardous-weather forecasting done out of Norman, the Air France crash article offers such a powerful and plainspoken lesson that it could, and arguably should, be included in all new-forecaster training materials. My unnamed office has remained ahead of the automation monster and in control of it, because the forecasters see and understand its limitations on a daily basis, as with (for example) SREF calibrated probabilities and so-called “convection-allowing models” (CAMs) at fine grid spacing, such as the SSEO members and the NCAR ensemble. My office is at the forefront of helping to develop the tool and of using it in its proper role, without dangerous over-reliance; the model is the slave, not the human. Let’s keep it that way.

One way we can do so as forecasters is through continuing to provide meaningful developmental and testing input, as we have. This way we still understand the monster we help to create, and know when and how to employ its growing powers. Another, even more important way is by maintaining the high level of both baseline meteorological understanding and diagnostic situational awareness of events for which we are properly well-known and -respected. As I have stated resolutely elsewhere: A forecaster who tries to predict the future without a thorough understanding of the present is negligent, and prone to spectacular failure.

Keeping up with the science, and maintaining and improving fundamental skills, are so important. How? Reading, training and practice…reading, training and practice! Writing (research papers) helps immensely too, in literature review and assembly. Forecasting experience matters too, as long as it involves experiential learning and progress, not one year of experience repeated over and over! Again, the key word and concept here is understanding. When forecasters’ core abilities atrophy from disuse, and weaknesses creep in, perhaps even unrealized (as was the case in that A330 disaster), the atmosphere analogously will make us pay.

Nearly four decades ago (Snellman 1977, Bulletin of the AMS), visionary scientist Len Snellman foresaw this when he coined the term “meteorological cancer” to describe the threat that over-reliance on automated output poses to our ability to predict the weather. This can include the extreme and/or exceptional events, the “money events” that disproportionately cause casualties and destroy property. What is my money event? The tornado outbreak, and to some extent the derecho.

Since such events are, by nature and definition, extreme and unusual, an “average” model forecast might not get there. Predicting the mean on a day with little precedent can be horribly wrong. Sometimes the ensemble outlier is the closest solution, not the ensemble mean or median. We will need to be able to recognize in advance when that could be reasonably possible. [Emphasis on “reasonably”: not the “you never know, you can’t rule it out” grade of total CYA over-emphasis on the outliers.] Our probability of recognizing the correctness of the outlier increases with our understanding, both of the ensemble and of the meteorology behind the exceptions. That understanding, in turn, sharpens our ability to diagnose when the model is veering, or could veer, rapidly awry, thereby nailing a once-in-a-career success and saving some lives to boot. Either that, or go down like that plane and suffer spectacular, career-defining failure…
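For readers who want the mean-versus-outlier point in concrete terms, here is a toy numeric sketch. The numbers are entirely hypothetical, invented for illustration, and do not come from any real ensemble; the point is only that averaging many members can wash out the one member that actually has the extreme event:

```python
# Hypothetical example: ten ensemble members forecast 24-h rainfall
# (inches) at one point. Nine members keep the heavy rain away;
# one outlier brings it overhead. Verification: the outlier was right.

members = [0.1, 0.2, 0.1, 0.3, 0.2, 0.1, 0.2, 0.1, 0.3, 4.5]
verified = 4.2  # what actually fell (also hypothetical)

# The ensemble mean smears the extreme member into a mundane number.
mean = sum(members) / len(members)  # 0.61 inches -- badly under-forecast

# The member closest to reality is the lone outlier, not the mean.
closest = min(members, key=lambda m: abs(m - verified))  # 4.5

print(f"ensemble mean: {mean:.2f}  closest member: {closest}")
```

The arithmetic is the whole lesson: the mean (0.61) verifies terribly, while the discarded-looking outlier (4.5) nearly nails it. Knowing *when* to trust that outlier is where meteorological understanding, not blind averaging, earns its keep.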

We can and must prevent the latter, on both individual and institutional levels. Meteorological understanding and diagnostic ability don’t guarantee success, but they make good forecasts much more likely and consistent. We as a science will get in trouble when we start treating automated output as an unquestioned crutch instead of as a tool in a holistic toolbox that still involves human diagnostics and scrutiny. Is that skill being lost in the field with less-experienced forecasters who have never had to fly anything but autopilot? Will “SuperBlend” become “SuperCrutch”? If not, when will we reach that tipping point, what sort of disaster will it take to reveal such a failure, and how can we see it coming in time to change course and regain altitude? Vague as the proposal is for now regarding specifics on forecaster duties, staffing, and office-hour reductions, does “Operations and Workforce Analysis” (OWA, NWS document and NWSEO union response), for all its superficial and advertised benefits for “IDSS”, carry a dark undercurrent of swinging the automation pendulum on the forecast side too far toward that fate?

These questions and more need to be discussed. Now that truly would be (to use a piece of bureaucratic jargon) a “vital conversation”.
