Forecasting the Ensemble Outlier

Several days back, I posted some thoughts about a pretty good short-range ensemble (SREF) spot forecast of a recent snow event in the Norman area. Now let's look at even more recent ensemble forecasts that were absolutely wretched, so much so that even the lowest outlier in the later of the two forecasts was overdone by a factor of nearly 2.5!

When is forecasting at or even beyond the outlier the best choice? When it’s the closest to correct! Granted, that’s easy to say in hindsight; but the cold truth is that the atmosphere does hold the answer key. Forecasts are judged in an objective sense by their closeness to the verifying quantity. In this case, it’s accumulated snowfall.
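The verification idea above can be sketched in a few lines of Python: rank each ensemble member by its absolute error against the verifying snowfall, and see which one was closest. The plume values and observed total below are hypothetical stand-ins, not the actual SREF output or my backyard measurement.

```python
# A minimal sketch of verifying ensemble members against an observation.
# All numbers here are hypothetical, for illustration only.

def best_member(member_forecasts, observed):
    """Return (index, forecast, abs_error) of the member closest to the observation."""
    errors = [abs(f - observed) for f in member_forecasts]
    i = min(range(len(errors)), key=errors.__getitem__)
    return i, member_forecasts[i], errors[i]

# Hypothetical snowfall plume (inches) and a verifying total near the low end:
plume = [7.5, 6.0, 8.2, 5.5, 9.1, 4.8, 6.7]
mean = sum(plume) / len(plume)
idx, fcst, err = best_member(plume, observed=2.0)
print(f"ensemble mean: {mean:.1f} in; closest member: #{idx} at {fcst} in (error {err:.1f} in)")
```

With a verifying total well below every member, as happened here, the "best" member is the low outlier, and even it busts high.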

Below please witness two different SREF runs’ predictions for Norman snow, in the form of plume diagrams (explained in the previous link above). The answer key, in the form of the average snow depth measured at my house, is in magenta. [NOTE: The vertical scales are not exactly the same. Look at the numbers at left, in inches.]

Model initial hour 21Z Monday:

Model initial hour 15Z Tuesday:

The best spot forecast here clearly would have been for the greatest probabilities and/or snowfall ranges skewed strongly toward, or even beyond, the low end of the distribution. Following the cliff-leaping lemmings of ensemble consensus here would have been a miserable forecast failure. There probably are many more lessons here than my fading, sleep-deprived mind can muster, but I can offer two for now:

  1. The outlier sometimes is the best solution! Forecasters get paid to produce the best possible prediction; and sometimes that means forecasting way above or below the average or "consensus" model forecast. Clearly, leaning heavily toward the low extreme would have paid off here. I don't pretend it's easy; but since when did anyone promise that forecasting is supposed to be? For more discussion on this issue, see this five-year-old post on ensemble forecasting; specifically, scroll down to the "Consensus Forecasting vs. Extreme Event 'Outliers'" section.
  2. Spot forecasting for something like a banded snow event, which can vary wildly based on subtleties that observations or models cannot resolve well, probably is a foolhardy endeavor. Parts of northeastern Oklahoma have had over 20 inches of snow from the same event! Norman stayed outside the southern fringes of a persistent, mostly W-E aligned snow belt that belted parts of northern Oklahoma, southern Kansas and northwestern Arkansas.
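One way to act on lesson 1 is to deliberately skew a spot forecast toward the low tail of the plume instead of its mean. Here is a hedged sketch of picking a value near a low quantile of a sorted plume; the plume values and the 10th-percentile choice are hypothetical assumptions, not a recommended operational rule.

```python
# Sketch: issuing a spot forecast skewed toward the low tail of an
# ensemble plume rather than the consensus mean. Plume values are hypothetical.
import statistics

def low_skewed_forecast(plume, quantile=0.1):
    """Pick the member nearest the given low quantile of the sorted plume."""
    s = sorted(plume)
    k = max(0, min(len(s) - 1, round(quantile * (len(s) - 1))))
    return s[k]

plume = [7.5, 6.0, 8.2, 5.5, 9.1, 4.8, 6.7]
print("ensemble mean:", round(statistics.fmean(plume), 1))
print("low-skewed pick:", low_skewed_forecast(plume))
```

In a case like this one, where the verifying total fell below the entire plume, even the low-skewed pick verifies better than the mean, though both still bust high.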


Comments

One Response to “Forecasting the Ensemble Outlier”

  1. tornado on February 10th, 2011 9:14 am

    Chuck Doswell has posted a fine commentary on issues raised by this event:
    Living and Dying with the Models
    …including over-reliance on model solutions, and cost/loss arguments. Check it out! I’ve also received some comments about this BLOG entry on another forum that I hope to re-post here soon, after I make sure there are no objections.
