Weather or Not

Severe Outflow by R. Edwards


Dangerous Crutch of Automation in Weather Forecasting

October 27, 2016 by tornado

In the last installment of “Scattershooting”, I offered the following short take on the evolving role of humans in the operational forecasting process. I’m reproducing it here for additional elaboration, based on input received since:

…

“IDSS” — a bureaucratic acronym standing for “Integrated Decision Support Services” — is the pop-fad buzzword in today’s public weather services. Behind all the frivolous sloganeering (“Weather Ready Nation”) and even more needless managerial lingo bingo is a nugget of extreme truth: the science of operational forecasting must improve its messaging and communication capabilities. But it is possible to swing the pendulum too far, to turn professional forecasters into mere weather briefers (full-time IDSS) and let “forecasts” be straight model output. That would be a degradation of service, because models still cannot forecast multivariate, greatest-hazard events in the short term (tornado risks, winter weather, hurricane behavior, etc.) as well as human forecasters who are diligently analyzing and understanding the atmosphere. Today’s tornado and hail threat is not the same animal as single-variable temperature grids, and is far, far, far, far, far more important and impactful.

There is still a huge role in the forecast process for human understanding. Understanding has both physical and empirical bases — the science and art of forecasting. Models of course are great tools — when understood and used well. However, models themselves do not possess understanding. Moreover, models do not issue forecasts; people do. A truly excellent forecast is much more than regurgitation of numerical output in a tidy package. It is an assembly of prognostic and diagnostic understanding into a form that is communicated well to customers. A forecast poorly understood cannot be communicated well. And a bad forecast that is masterfully conveyed and understood is a disservice regardless. Eloquent communication of a crappy forecast is akin to spray-painting a turd gold. It is still a smelly turd.

Solution? Get the forecast right. All else will fall into place. Credibility is top priority! As for automation, it will proceed at the pace forecasters permit their own skills to atrophy, both individually and collectively. For those unfamiliar with how, look up the prescient meteorological prophet Len Snellman and his term, “meteorological cancer”.

…

Now the elaboration:

Patrick Marsh supplied a remarkably insightful article on the role automation and resulting ignorance played in plunging Air France Flight 447 (Rio to Paris) into the Atlantic: Crash: How Computers Are Setting Us Up for Disaster. Please read that before proceeding; somewhat long but certainly riveting, the article should be well worth your time.

This has important ramifications for hazardous-weather forecasting too. It fits nicely with many concepts in meteorology about which I have been thinking and writing for years (e.g., here, and here, and here, and here), as have even more-accomplished scientists (e.g., here). Let’s hope the “powers that be” read these lessons and pay attention.

In the meantime, even though it has nothing directly to do with severe midlatitude storms or other hazardous-weather forecasting done out of Norman, the Air France crash article offers such a powerful and plainspoken lesson that it could, and arguably should, be included in all new-forecaster training materials. My unnamed office has remained ahead of the automation monster and in control of it, because the forecasters see and understand its limitations on a daily basis, as with (for example) SREF calibrated probabilities and so-called “convection-allowing models” (CAMs) at fine grid spacing, such as the SSEO members and the NCAR ensemble. My office is at the forefront of helping to develop such tools and of using them in their proper role, without dangerous over-reliance; the model is the slave, not the human. Let’s keep it that way.

One way we can do so as forecasters is through continuing to provide meaningful developmental and testing input, as we have. This way we still understand the monster we help to create, and know when and how to employ its growing powers. Another, even more important way is by maintaining the high level of both baseline meteorological understanding and diagnostic situational awareness of events for which we are properly well-known and -respected. As I have stated resolutely elsewhere: A forecaster who tries to predict the future without a thorough understanding of the present is negligent, and prone to spectacular failure.

Keeping up with the science, and maintaining and improving fundamental skills, are so important. How? Reading, training and practice…reading, training and practice! Writing (research papers) helps immensely too, in literature review and assembly. Forecasting experience matters as well, as long as it involves experiential learning and progress, not one year of experience repeated over and over! Again, the key word and concept here is understanding. When forecasters’ core abilities atrophy from disuse and weaknesses creep in, perhaps even unrealized (as happened aboard that Air France A330), the atmosphere analogously will make us pay.

Nearly four decades ago (Snellman 1977, Bulletin of the AMS), visionary scientist Len Snellman foresaw this when he coined the term, “meteorological cancer”, to describe the threat of over-reliance on automated output to our ability to predict the weather. This can include the extreme and/or exceptional events, the “money events” that disproportionately cause casualties and destroy property. What is my money event? The tornado outbreak, and to some extent the derecho.

Since such events are, by nature and definition, extreme and unusual, an “average” model forecast might not get there. Predicting the mean on a day with little precedent can be horribly wrong. Sometimes the ensemble outlier is the closest solution, not the ensemble mean or median. We need to be able to recognize in advance when that is reasonably possible. [Emphasis on “reasonable”–not the “you never know, you can’t rule it out” grade of total CYA over-emphasis on the outliers.] Our probability of recognizing the correctness of the outlier increases with our understanding of both the ensemble and the meteorology behind the exceptions, and in turn with our ability to diagnose when the model is veering, or could veer, rapidly awry. That is how a forecaster nails a once-in-career success and saves some lives to boot. Either that, or go down like that plane and suffer spectacular, career-defining failure…
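Lest “recognize the outlier” sound mystical, the first step is mundane bookkeeping. Here is a minimal sketch in Python, with entirely made-up numbers (nothing below comes from a real ensemble run), of the basic triage implied above: compute the ensemble mean and median at a point, measure the spread, and flag the outlier members for closer meteorological scrutiny rather than automatic dismissal.

```python
# Minimal sketch: flagging ensemble outliers at one forecast point.
# The 21 "member" values below are fabricated for illustration only.
import numpy as np

# Hypothetical 21-member forecast of 24-h snowfall (inches) at one point.
members = np.array([0.0, 0.5, 0.8, 1.0, 1.2, 1.5, 1.5, 2.0, 2.0, 2.2,
                    2.5, 2.5, 3.0, 3.0, 3.5, 4.0, 4.5, 5.0, 6.0, 10.0, 12.0])

mean, median = members.mean(), np.median(members)
spread = members.std()

# Flag members more than two standard deviations from the mean: candidate
# outlier solutions that deserve scrutiny, not automatic dismissal.
outliers = members[np.abs(members - mean) > 2.0 * spread]

print(f"mean={mean:.1f} in  median={median:.1f} in  spread={spread:.1f} in")
print("outlier members:", outliers)
```

Whether any flagged member is physically plausible is then a meteorological question rather than a statistical one, which is exactly the understanding argued for above.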

We can and must prevent the latter, on both individual and institutional levels. Meteorological understanding and diagnostic ability don’t guarantee success, but they make good forecasts much more likely and consistent. We as a science will get in trouble when we start treating automated output as an unquestioned crutch instead of as one tool in a holistic toolbox that still involves human diagnostics and scrutiny. Is that skill being lost in the field among less-experienced forecasters who have never had to fly anything but autopilot? Will “SuperBlend” become “SuperCrutch”? If so, when will we reach that tipping point, what sort of disaster will it take to reveal such a failure, and how can we see it coming in time to change course and regain altitude? Vague as the proposal is for now regarding specifics on forecaster duties, staffing, and office-hour reductions, does “Operations and Workforce Analysis” (OWA, NWS document and NWSEO union response), for all its superficial and advertised benefits for “IDSS”, carry a dark undercurrent of swinging the automation pendulum on the forecast side too far toward that fate?

These questions and more need to be discussed. Now that truly would be (to use a piece of bureaucratic jargon) a “vital conversation”.


Forecasting on the Edge

January 31, 2011 by tornado

The Norman area sits on the edge of a possible heavy snow and/or sleet and/or freezing rain event for Tuesday. Which is it and how much, categorically? My answer as of midnight Sunday night/Monday morning: Still too soon to say! Anyone who tries to nail any spot down to a specific amount, or a narrow range (like, say, 5-6 inches) this soon is full of BS, and should be trusted no more than a used-car dealer in Vegas. Neither the human forecasters nor the models are that good yet.

While I have looked at some more recent forecasts, and have a decent grasp of the general scenario, I’ll first post and link to some SREF (short-range ensemble forecast) panels that I had a chance to grab yesterday afternoon from the morning’s run. They illustrate the difficulty faced by winter-weather forecasters really well!

[For the uninitiated, the SREF is a 21-member package of various numerical models. I don’t have room to explain it in detail here; but this site has a good summary of SREF and a big variety of forecast charts.]

The above forecast is the maximum value (basically, at any spot on the map) for the total liquid (melted) equivalent precipitation during 6Z-18Z (midnight-noon) Tuesday. At every point, take the largest precip among all 21 models and plot that value; basically, that’s what this is–a heaviest-case scenario. Notice that most of the heaviest precip forecasts are near (but mainly south and east of) Norman. Similar forecasts ending later in the day are not quite so large around Norman. Again, this is liquid amount–not the equivalent of snow. For snow…
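For the curious, the “heaviest-case” field just described is simple to build once the member grids are in hand: at each grid point, keep the largest value among all members. A minimal sketch in Python, with random numbers standing in for actual SREF grids (the shapes and values are assumptions for illustration):

```python
# Minimal sketch: per-gridpoint ensemble max vs. mean precipitation.
# Random numbers stand in for real SREF member grids.
import numpy as np

n_members, ny, nx = 21, 50, 60            # hypothetical grid dimensions
rng = np.random.default_rng(0)
# precip[m, j, i] = member m's 6Z-18Z liquid-equivalent total (inches)
precip = rng.gamma(shape=2.0, scale=0.15, size=(n_members, ny, nx))

ens_max = precip.max(axis=0)     # heaviest-case scenario at each point
ens_mean = precip.mean(axis=0)   # the smoother "average" field, for contrast

print("domain peak of ensemble max: ", float(ens_max.max()))
print("domain peak of ensemble mean:", float(ens_mean.max()))
```

The same one-liner with `mean(axis=0)` produces the average-snowfall chart discussed next, which is why two very different-looking fields can come from identical members.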

This forecast is for the average snowfall in inches at any given spot on the map for the 12 hours ending later that day (21Z or 3 p.m.). The times are offset some because of an expected change to snow in Norman sometime during that block of time, and as a forecaster, I think this time selection probably will capture most (though not necessarily all) of the snow event. Notice Norman is on the opposite edge of the heaviest snow belt from where it was with respect to the heaviest total precip accumulation ending a few hours before. Hmmm…so if we’re on one edge of melted equivalent and another of snow, where and when is the transition?

This forecast shows the average position of the freezing line southeast of Norman by 12Z (6 a.m.), but quite a bit of spread off that mean in the extreme positions (dotted and dashed lines). That freezing line matters hugely for what kind of precip we’ll have! The most likely precip type (out of all the probabilities) shows rain east, a mix of sleet and freezing rain overhead and nearby, and snow just to the NW. At 12Z, we’re on the edge of a lot of things that only need to be a little bit wrong to trash the hell out of any forecast that’s too specific! Assuming the dominant forecast is accurate, this pattern shifts east over us during the day to yield all snow…
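The “most likely precip type” chart amounts to a member vote at each point. A minimal sketch with fabricated member calls (the counts are invented to mimic the on-the-edge 12Z situation described above):

```python
# Minimal sketch: modal precip type from ensemble member "votes" at a point.
# The member calls below are fabricated for illustration.
from collections import Counter

member_types = (["freezing rain"] * 7 + ["snow"] * 6 +
                ["sleet"] * 5 + ["rain"] * 3)   # 21 hypothetical members

votes = Counter(member_types)
n = len(member_types)
for ptype, count in votes.most_common():
    print(f"{ptype:>13}: {count:2d}/{n} members -> {count / n:.0%}")
```

Note that the “winner” here carries only a 33% plurality; a dominant type on the map can still be far from a sure thing.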

Now, by 21Z (3 p.m.), we’re pretty confident that it’s snow, being deep into the blue area, with all the models’ freezing lines past us, and (if you’ve also looked at forecast soundings, which I’m not showing) an understanding that it’s too cold aloft for sleet or freezing rain. But the critical issue is: when does that freezing line go past us, and how far behind it is the air above the surface still warm enough to yield freezing rain or sleet before the change-over? On a national scale, that timing difference looks puny. Make a spot forecast, though, and just a few hours one way or another makes a huge difference.

Now let’s look at what we call “plume diagrams” — because they often look like plumes from a chimney. They’re actually spot forecasts of accumulated totals for the time period, in this case precip amounts for Norman, generated by the very same set (ensemble) of models. Each line represents one model in the set (colored by numbers at the top, so you can follow your favorite models for the situation, if you have some). The average forecast is black, with dots every 6 hours. The timeline goes from earlier at left to later at right…
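To show how such a plume diagram comes together, here is a minimal sketch using matplotlib, with fabricated accumulations standing in for real SREF output: one line per member, and the ensemble mean in black with dots every 6 hours.

```python
# Minimal sketch: a plume diagram of accumulated precip, one line per member.
# All values are synthetic; no real SREF data are used.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
hours = np.arange(0, 90, 3)        # forecast hours at 3-h steps
n_members = 21

# Fabricated 3-h precip increments, ramped up mid-period, then accumulated.
ramp = np.exp(-((hours - 48.0) / 12.0) ** 2)
increments = ramp * rng.gamma(2.0, 0.04, size=(n_members, hours.size))
plumes = increments.cumsum(axis=1)

for m in range(n_members):
    plt.plot(hours, plumes[m], linewidth=0.8)
mean = plumes.mean(axis=0)
plt.plot(hours, mean, "k-", linewidth=2)
plt.plot(hours[::2], mean[::2], "ko", markersize=4)   # dots every 6 h
plt.xlabel("forecast hour")
plt.ylabel("accumulated precip (inches)")
plt.title("Plume diagram (synthetic 21-member ensemble)")
plt.show()
```

The visual spread among the lines is the whole point: the wider the fan, the less any single number deserves your confidence.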

This is the melted precip total, regardless of the form. We are in a drought, and this is what matters most in a hydrologic sense anyway, so we’ll look at precip totals first. Notice how the models generally agree well on when we’ll get the most precip–the steep ramp-ups between 6Z (midnight) and 21Z (3 p.m.) Tuesday. But they disagree vastly on how much (from less than a quarter inch to almost an inch and a half). Even if this were all rain, we would have one hell of a time forecasting how much…and we haven’t even looked at the precip type yet. Let’s do that!

This is the ensemble of models for accumulated freezing rain (ice). They’re all over the place too, and many of the same models that are heavier with liquid rain are lighter with ice. The green model set isn’t even there; none of those members are forecasting freezing rain. That says there’s a lot of disagreement on when the transition happens, and thus on how much falls as rain versus ice, if there’s any ice at all! Again, this wide variation is just at one spot (Norman)…not even considering the potential for much bigger or smaller amounts just E or W of here. Are you sure you even want to see sleet forecasts? Well, if you don’t, stop now, because that’s what’s next…

Oh, joy. Sleet forecasts are all over the place too. Not only that: a few models have high accumulations of rain, ice and sleet, a few are low on them all, and the rest vary greatly in which type will be dominant, and by how much. Only the green members, which are all fairly dry across the board, seem consistent. It’s enough to make a forecaster with little patience for uncertainty throw his hands up and walk away in abject frustration. But wait! There’s more…namely, the one precip type that it seems everybody demands to know down to the inch: snow…

This model set says Norman will get anywhere from nothing to a foot, but with a low average of just above 2 inches. If those two blue models are onto something that the others are missing, millions of dollars in snow-plowing and salting expenses might be justified. If any of the models are right in forecasting a heavy snow band of, say, 16 inches someplace else, but are wrong on the location, we could get a lot more in Norman than the highest model predicts. Or we hype it up, nothing falls, all that road salt is laid down for naught, and the local governments are quite upset. Lots at stake here for the public, emergency managers, school systems, law enforcement, media, and your credibility as a forecaster…after a wildly uncertain period of rain and/or sleet and/or ice!
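One honest way out of the single-number trap is to read exceedance probabilities straight off the member totals. A minimal sketch, with hypothetical snow totals chosen to mimic the nothing-to-a-foot spread just described:

```python
# Minimal sketch: exceedance probabilities from ensemble snow totals.
# The 21 totals below are hypothetical, chosen to span "nothing to a foot."
import numpy as np

snow = np.array([0.0, 0.0, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6,
                 1.8, 2.0, 2.2, 2.5, 2.8, 3.2, 3.8, 4.5, 9.0, 12.0])

for threshold in (1, 2, 4, 8):
    prob = (snow >= threshold).mean()   # fraction of members at/above it
    print(f"P(snow >= {threshold:2d} in) = {prob:.0%}")
print(f"ensemble mean = {snow.mean():.1f} in")
```

A statement like “roughly a 1-in-10 chance of 8 inches or more” conveys both the low mean and the real possibility those two high members represent.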

We’ve got quite a forecasting dilemma here. Can you see it? It’s tempting to go with the averages as a hedge against a huge forecast error; but what if the extreme-upper model turns out to be right over Norman itself? History tells us that sometimes, especially in the middle of a narrow and badly forecast snow band, even the extreme solution wasn’t extreme enough. Other times, the lowest solution wasn’t low enough, because we got barely a dusting while either Slaughterville or Del City, each 15 miles away, got over a foot. These things have happened before, and the forecaster needs to keep historical similarities in mind too.

Based on all that information, about the best we can say is that cold rain probably will change to snow, with a period of assorted freezing rain and sleet possible (but not for sure) sandwiched in between. How much? Too uncertain to call. That’s the honest answer.

At least the strong consensus is that snow, if any, starts after the rain and sleet or freezing rain. That’s not much consolation if Aunt Matilda is pestering you for an exact snow amount and when it will happen; and you just can’t explain these crazy uncertainties to her without sounding like a waffling, blathering know-nothing whose parents wasted money on that meteorology degree.

Uh oh…what’s this? After all that weeping and gnashing of teeth, let’s pretend a new SREF package has come in and the forecasts of many of the individual models have flip-flopped around like a fish out of water. Trends are up with some, down with others, earlier with some, later with others. As your local teenager might type in a text message, “OMG WTF!!!”.

If you are a forecaster, what do you do now? With all those mixed signals, and a historical precedent for everything from nothing to 15+ inches, this scenario can drive you to the brink of incoherently blubbering lunacy, and beyond.

One way to keep sane is to give up, stop thinking about it, and forecast some average default, which of course a machine can do without your help. Don’t come crying to me, then, when you lose your job to that machine!

Another way to become more certain and confident in a forecast, as well as to maintain sanity, is through strong physical understanding of the situation, which comes from a combination of education (school and self), training, experience, analytic skill, understanding the models, and the continuing motivation to keep up with it all. This “learned path” is not foolproof, but it’s an insurance policy against consistent failure, and one that sometimes pays off big in correctly forecasting an extreme event that lazy, “model-hugging” forecasters will miss. [About five years ago, I discussed the future of human forecasting in detail here.]

In short, the forecaster must first diagnose what’s going on now, both to judge how the models are performing right from the start, and more importantly, to form a 4-dimensional conceptual model of the ongoing atmosphere in his own head. This means analysis–including hand analysis of surface and upper air charts–which takes time to do with due accuracy and attention to detail. A forecaster who tries to predict the future without a thorough understanding of the present is negligent, and prone to spectacular failure.

Then comes assorted model guidance. The SREF package of 21 models has far more ways to display output than I’ve shown here. For a truer appreciation, go to the SREF website and look in detail at everything it contains. The exercise, done carefully, will take an hour or more. Then come the operational models outside the SREF, of which there are several. The variety of guidance available to the forecaster these days is dizzying. Truly I declare, there’s hardly time to look at a substantial fraction of it, much less all of it. It really is information overload.

The ability to sift out the irrelevant and distill the pertinent in weather prediction is an uncommon skill, one gained mainly through experience. Even then, in the face of inflexible deadlines, it’s easy for even the best and sharpest forecasters to overlook a potentially important but small detail somewhere along the way. Make it 4 a.m. for a rotating-shift worker, and the potential for human error rises (unless, like me, you are a bona fide night owl). It’s also possible for some forecasters–such as the model-huggers I mentioned above–to have years and years of experience doing it poorly (in which case 20 years of “experience” is actually 1 year of experience repeated 20 times over). Every forecaster takes at least a slightly different approach from every other.

Given all these factors, no wonder one forecaster can differ so much from the next, and one forecaster’s own predictions can vary from one day to the next.

If you are not a meteorologist, have mercy on your friendly neighborhood forecaster. Don’t get upset with the meteorological prognosticators for being unsure, or changing their minds often, or giving you a wide range of possibilities, or differing a lot from one another. And when your local forecasters get a winter-storm prediction right–or even close–heap praise profusely upon them, for they have accomplished an extraordinarily difficult feat. Remember: uncertainty is part of the deal. It is unavoidable, if a forecaster is being honest with himself and with you. The forecasters who hedge or talk in probabilities do so because it’s the smartest approach, the right approach.

