Weather or Not

Severe Outflow by R. Edwards

Human Weather Forecasting in an Automation Era, Part 3: Garbage In, Garbage Out

August 28, 2022 by tornado

This short series (go to Part 1 or Part 2) arises from the recently published paper, “The Evolving Role of Humans in Weather Prediction and Communication”. Please read the paper first.

Objective verification of forecasts will remain hugely important, and the authors duly note that. But one factor not discussed (perhaps due to space limitations?) is the quality of the verification data. That matters…perhaps not to bureaucrats, who tend to overlook components of the verification sausage that provide context. But flawed verification datasets give you flawed verification numbers, even if the calculations are completely mathematically correct!

As someone who has analyzed and examined U.S. tornado, wind and hail data for most of my career, and published some research rooted in it, I can say two things with confidence:
1. It’s the most complete, precise and detailed data in the world, but
2. Precision is not necessarily accuracy. The data remain suffused with blobs of rottenness and grossly estimated or even completely fudged magnitudes, potentially giving misleading impressions of how “good” a forecast is.

How? Take the convective wind data, for example. More details can be found in my formally published paper on the subject, but suffice it to say, it’s actually rather deeply contaminated, questionably accurate and surprisingly imprecise, and I’m amazed that it has generated as much useful research as it has. For example: trees and limbs can fall down in severe (50 kt, 58 mph by NWS definition) wind, subsevere wind, light wind, or no wind at all. Yet reports of downed trees and tree damage, when used to verify warnings, are bogused to severe numeric wind values by policy (as noted and cited in the paper). A patently unscientific and intellectually dishonest policy!

For another example, estimated winds tend to be overestimates, by a factor of about 1.25 in bulk, based on human wind-tunnel exposure (same paper). Yet four years after that research was published, estimated gusts continue to be treated exactly like measured ones for verification (and now ML-informing) purposes. Why? Either estimated winds should be thrown out, or a pre-verification reduction factor should be applied to account for human overestimation. The secular increase in wind reports over the last few decades since the WSR-88D came online also should be normalized. That is a far more scientifically justifiable approach than using the reports as-is, with no quality control or temporal detrending.
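
For illustration only, here is a rough Python sketch of what such a pre-verification adjustment and detrending might look like. The 1.25 bulk overestimation factor and the 50-kt severe criterion come from the discussion above; the column names and the detrending scheme are hypothetical assumptions of mine, not the paper’s method.

```python
# Illustrative sketch only (not the paper's method). Column names
# ("gust_kt", "is_estimated", "year") are hypothetical.
import pandas as pd

SEVERE_KT = 50.0            # NWS severe-gust criterion (50 kt / 58 mph)
OVERESTIMATE_FACTOR = 1.25  # bulk human overestimation factor noted above

def adjust_estimated_gusts(reports: pd.DataFrame) -> pd.DataFrame:
    """Deflate estimated gusts by the bulk overestimation factor,
    leave measured gusts alone, then re-flag severe reports."""
    out = reports.copy()
    est = out["is_estimated"]
    out.loc[est, "gust_kt"] = out.loc[est, "gust_kt"] / OVERESTIMATE_FACTOR
    out["is_severe"] = out["gust_kt"] >= SEVERE_KT
    return out

def normalize_report_counts(reports: pd.DataFrame, base_year: int) -> pd.Series:
    """Express yearly report counts relative to a base year, one crude way
    to remove the secular, non-meteorological rise in report numbers."""
    counts = reports.groupby("year").size()
    return counts / counts.loc[base_year]
```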

For one more example, which we discussed just a little in the paper, all measured winds are treated the same, even though an increasing proportion come from non-AWOS, non-ASOS, non-mesonet instruments such as school and home weather stations. These are of questionable scientific validity in terms of proper exposure and calibration. The same can be said for storm-chaser and -spotter instrumentation, which may not be well-calibrated at a base level, and which may be either handheld at unknown height and exposure, or recording the slipstream if mounted on a vehicle.

Yet all of those reports collectively populate the “severe” gust verification datasets that also are used for training machine-learning algorithms, to the extent that actual, measured winds from scientific-grade, calibrated, verifiably properly sited instruments are a tiny minority of reports. With regard to wind reports, national outlooks, local warnings, and machine-learning training datasets all use excess, non-severe wind data for verification; because they all do, comparisons among them still may be useful, even if misleading.

Several of us severe-storms forecasters have noticed operationally that some ML-informed algorithms for generating calibrated wind probabilities put bull’s-eyes over CWAs and small parts of the country (mainly east) known to heavily use “trees down” to verify warnings, and that have much less actual severe thunderstorm wind (based on peer-reviewed studies of measured gusts, such as mine and this one by Bryan Smith) than the central and west. This has little to do with meteorology, and much to do with inconsistent and unscientific verification practices.

To improve the training data, the report-gathering and verification practices that inform it must improve, and/or the employers of the training data must apply objective filters. Will they?
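
As one hedged illustration of what such an objective filter could look like, here is a short Python sketch that keeps only instrument-measured severe gusts from a set of trusted, calibrated networks and drops damage-inferred estimates. The field names, source labels, and trusted-network list are my own illustrative assumptions, not anyone’s operational practice.

```python
# Hypothetical sketch of an objective pre-training filter; field names,
# source labels, and the trusted-network list are illustrative assumptions.
from dataclasses import dataclass

TRUSTED_SOURCES = {"ASOS", "AWOS", "mesonet"}  # assumed scientific-grade networks
SEVERE_KT = 50.0                               # NWS severe-gust criterion

@dataclass
class WindReport:
    gust_kt: float   # reported gust, knots
    measured: bool   # True if instrument-measured rather than estimated
    source: str      # e.g. "ASOS", "home_station", "damage_survey"

def keep_for_training(report: WindReport) -> bool:
    """Keep only measured severe gusts from trusted, calibrated networks."""
    return (report.measured
            and report.source in TRUSTED_SOURCES
            and report.gust_kt >= SEVERE_KT)

# Example: a "trees down" report bogused to 50 kt would be filtered out.
reports = [
    WindReport(52.0, True, "ASOS"),            # kept
    WindReport(50.0, False, "damage_survey"),  # dropped: estimated, not measured
    WindReport(61.0, True, "home_station"),    # dropped: unverified home station
]
training_set = [r for r in reports if keep_for_training(r)]
```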

This concludes the three-part series stimulated by Neil’s excellent paper. Great gratitude goes to Neil and his coauthors, and the handful who ever will read this far.

Filed Under: Weather Tagged With: data, data quality, education, forecast verification, forecasting, meteorology, numerical models, operational meteorology, quality control, science, severe storms, severe weather, storm observing, thunderstorm winds, understanding, verification, weather, wind, wind damage

Reflections on a Quarter Century of Storm Forecasting

April 30, 2018 by tornado

As of last week, I have been forecasting and researching severe storms (in SELS-Kansas City and its Norman successor) for 25 years, not counting prior time at NHC and NSSL. That’s 1/4 century of living the dream of a tornado-obsessed kid. Much has transpired professionally and personally in that time span, most of it decidedly for the better. The only negative is that I’m a quarter-century older. Given how little I knew then compared to now, and how little I knew about how little I knew, maybe the geezers of my youth were right, in that youth is wasted on the young.

The science of severe-weather prediction has advanced markedly. More is understood about the development and maintenance of severe storms than ever before. Numerical models also are better than ever, yet still riddled with flaws known to forecasters that belie their hype as panaceas. Most weather media, social media weather pundits outside front-line forecasters, and far too many Twitter-active pure researchers and grad students exhibit naivete and ignorance about both the flaws of models in applied use, and the still-urgent need for humans in forecasting (yes, forecasting, not just so-called “decision support services” a.k.a. DSS).

Fortunately, most of those who actually do the job — the experienced severe-storms-prediction specialists who are my colleagues — know better, and incorporate both the science and art (yes, art!) of meteorology into forecasting, to varying extents. Yet pitfalls lie in our path in forms of several interrelated ideas:

    * Automation: Even if the human forecast is better at a certain time scale, at what point does the bureaucracy (beholden to budget, not excellence) decide the cost-benefit ratio is worth losing some forecast quality to replace humans with bots that don’t take sick leave, join unions, nor collect night differential? I wrote in much more detail about this two years ago, and that discussion touches upon some of what I am re-emphasizing below. Please go back and read that if you haven’t already.

    * Duty creep with loss of diagnostic-understanding time: Cram more nickel-and-dime, non-meteorological side duties into the same time frames with the same staffing levels, a social-media nickel this year, a video-briefing dime the next, and something must give. In my experience, that something is analysis and understanding, which, in an ironically self-fulfilling way, stagnates human forecast skill (and, more importantly, sacrifices concentration and situational understanding) whilst allowing models to catch up. Knowing how bureaucracy works, I suspect this is by design.

    * Mission sidetracking – “DSS” including customized media and social-media services: I don’t deny the importance of DSS; in fact I support it! Outreach is good! Yet DSS should not be done by the full-time, front-line forecasters, who ideally need to be laser-focused on meteorological understanding when on duty, and on making the most excellent forecasts possible. DSS should be a separate and parallel staff of social-science-trained outreach specialists everywhere DSS is required. Otherwise, quality above what the models can provide (which still is possible, especially on Day 1 and Day 2, and in complex phenomena like severe and winter storms) will be lost prematurely and unnecessarily.

    * Loss of focus — see the last two bullets: A growing body of psychological literature resoundingly debunks the notion of “multitasking”. We lose focus and delay or dilute accomplishment when concentration is broken and interruptions occur. Management should be focusing on reducing, not increasing, distractions and interruptions on the forecast desk. Forecast quality and human lives are at stake.

    * De-emphasis of science in service: Physical and conceptual understanding matter in the preparation of consistently high-quality forecasts — especially in the complicated, multivariate arena of severe local storms. These are not day-5 dewpoint grids, and this is why my workplace has published more scientific research than any other publicly funded forecasting office, by far. Tornadoes, severe hail and thunderstorm winds are highly dependent on time and space overlaps of multiple kinds of forcings that models still often do not handle well, partly because of the “garbage in, garbage out” phenomenon (input observations are not dense enough), and partly due to imperfect model physics and assimilation methods. Severe-storms specialists must have both self-motivation and continued support from above to understand the science — not only by getting training and reading papers, but by writing papers and performing research!

    * Model-driven temptation to complacency: This is a form of Snellman’s meteorological cancer. I wrote about some of these topics here 13 years ago in far more detail, under the umbrella of ensemble forecasting. Please read that discussion! I see no need so far to amend any of it, except to add thoughts about focus and concentration (above). If forecasters don’t think they can improve on a model, even if they really can, or just don’t feel like making effort to do so amidst other demands for time, they’ll just regurgitate the output, at which point their jobs can (and probably should!) be automated.

    * Meddling in the mission by distant, detached bureaucratic ignoramuses: Schism between upper-management assumptions and real front-line knowledge is a common theme across all governmental and corporate bureaucracies, and is nothing new across generations. In my arena, it manifests as a lack of understanding and appreciation for the difficulty and complexity of the work, and in a failure to respect the absolutely urgent need for direct, devoted, focused human involvement. The very first people with whom policy-makers should discuss severe-storms-prediction issues are the front-line severe-storms forecasters — that is, if knowledge and understanding matter at all in making policy.

At this stage of my career, I’m neither an embittered old cynic nor a tail-wagging puppy panting with naive glee. I never was the latter and I intend not to turn into the former. Instead I observe and study developments in a level-headed way, as both an idealist and a realist, assess them with reason and logic, and report about them with brutal honesty. In doing so, I’ll say that there is cause for both optimism and pessimism at this critical juncture. I’ve covered the pitfalls (pessimism) already.

How can optimism be realized? It’s straightforward, though not easy. We must continue to grow the science, emphasize the human element of physical and conceptual understanding (including the still-important role of human understanding and the art of meteorology) in such complex and multivariate phenomena, use ever-improving (but still highly imperfect!) models as tools and not crutches, study and learn every single day, minimize distractions/disruptions, and most of all, focus on and fight for excellence!

I’m now decidedly closer to retirement than to the start of my career. Yet you can count on this: you won’t see me coast, nor go FIGMO, nor be merely “good enough for government work”! Such behavior is absolutely unacceptable, pathologically lazy, morally wrong, and completely counter to my nature. The passion for atmospheric violence still burns hot as ever.

Excellence is not synonymous with perfection, and the latter is impossible anyway. I will issue occasional bad forecasts, and I hope, far more great ones. Regardless of the fickle vagaries of individual events, I must start each new day for what it is — a different challenge ready to be tackled, compartmentalized unto itself, not the same as the great or crappy forecast of the previous shift. I must settle for nothing less than consistency of excellence in performance, lead the next generation by example in effort, and advance the science further. I’ll be pouring the best reasoning I know into each forecast, even if that is necessarily imperfect and incomplete. I’ll be doing research and writing more papers. I’ll be educating and speaking and writing and raising awareness on severe-storms topics, trying to pass understanding on to both users of the forecasts and forecasters of the future.

I’m paid well enough, and the taxpayer deserves no less than excellence in return for his/her investment in me. That is my driven purpose in the years remaining in full-time severe-weather forecasting.

Filed Under: Weather Tagged With: bureaucracy, complacency, concentration, decision-support services, excellence, focus, forecasting, hail, meteorological cancer, meteorology, models, numerical models, research, science, severe storms, Snellman, storm forecasting, tornadoes, wind
