Joaquin’s Forecasting Lessons

What happened?
Much consternation has erupted this tumultuous October over the superior forecasts of the operational European (ECMWF) model compared to the U.S. “Global Forecast System” (GFS) for Hurricane Joaquin. Never mind that the big story ended up (rightly so) being the South Carolina flooding. Here’s one example (you may need to power past Forbes’ stupid and annoying jump page).

First of all, hurricane forecasting is hard–very hard. I know this from direct experience, having worked three formative years of my career at the National Hurricane Center during a time when human and model forecasts overall performed much worse for track prediction than today. Anyone who loves to claim otherwise, or play Monday-morning quarterback against the best in the business, needs to sit their butts down in that seat, work on those same deadlines and under the same rules as those guys for fair comparison, and try to do better. [Not happening…]

And yet we all sometimes get forecasts wrong. That includes NHC, every other forecast office of every kind, and me personally (now in severe thunderstorm and tornado prediction). Forecasting imperfection not only does happen, and often, but I’ve argued that it should happen! This is the nature of operational meteorology.

As such, it’s easy to find flaws in any prediction–whether by human or numerical model or some combination of both. Forecast trolls, the insecure, petty and contemptible critters that they are, seem to specialize in offering up others’ flaws without consistently putting their own (again, made under similar conditions) to the public, in advance, to be verified mano a mano in an objective way.

Yet, there’s nothing wrong with taking a look at a forecast for the sake of improvement and learning. That is my aim, regardless of whether the forecast was mine or somebody else’s. I hold no one to a higher standard than I hold myself (per the Golden Rule).

Here is an NHC forecast for Joaquin during an especially difficult period when the GFS, most of its ensemble siblings and a few hurricane models said “go left!” and the ECMWF and its ensembles mostly said, “go right!”. The forecast path doesn’t do much of either.

In side discussions, several fellow meteorologists (none of them current or former NHC forecasters, naturally) have suggested that NHC should have just picked one swing or the other and stuck with it until it became apparent that the choice wouldn’t work. I do not subscribe to that point of view. The all-in approach has a certain appeal, but by choosing the middle track instead of patriotically sticking with the U.S. guidance, NHC reduced its error relative to the GFS. [To state what I personally would have forecast is irrelevant since I am not a hurricane specialist. Once again, that would be Monday-morning quarterbacking, and unfair to them. I only can offer clues potentially useful in the future.]

Synoptically, via pattern recognition and diagnostics, the middle track between the two extremes of guidance seemed to be an improbable solution (more below). Yet some ensemble members actually *did* split the difference and show solutions not far from the official track; in the GFS ensemble, at times, more members did so than offered the correct (eastward) solution.

Computer models offered a fork in the road.
Examine this wonderful little graphical animation retrospective of the models, recently hoisted by Brian Tang at SUNYA. To toggle between ensembles, “Up Var” means GFS, “Down Var” is ECMWF. This is a very insightful site for illustrating the synoptic-model conundrum that forecasters faced, and it doesn’t even include the hurricane models such as HWRF, GFDL and others (most of which hooked the system leftward into somewhere along the U.S. coast, adding to the conundrum).

For when that link eventually breaks, here are some still captures:

  1. 29 Sep 15, 12Z, GFS ensemble
  2. 29 Sep 15, 12Z, ECMWF ensemble
  3. 1 Oct 15, 00Z, GFS ensemble
  4. 1 Oct 15, 00Z, ECMWF ensemble

Notice that many ECMWF ensemble members also did the “left hook” into the coast early, even as the operational run and other members more correctly predicted the seaward track. So the issue is not as simple as, “ECMWF declareth northeast, henceforth thou must boweth in worship to thine holy Euro!” There are numerous simultaneous ECMWFs; some of those ECMWFs disagreed with their siblings and fell inside the GFS ensemble envelope.
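To make that point concrete, the “left hook vs. out to sea” split within an ensemble can be tallied by where each member’s track ends up relative to the coast. Here is a minimal sketch, not any operational tool; the member longitudes and the coastline value below are invented purely for illustration:

```python
# Hypothetical final-position longitudes for ensemble members, in degrees west.
# These values and the coastline longitude are invented for illustration.
COAST_LON_W = 75.0  # rough stand-in for the U.S. East Coast

def classify_members(final_lons_w, coast_lon_w=COAST_LON_W):
    """Count members ending west of the coast ('left hook') vs. out to sea."""
    left_hook = sum(1 for lon in final_lons_w if lon > coast_lon_w)
    return {"left_hook": left_hook, "seaward": len(final_lons_w) - left_hook}

members = [80.2, 77.5, 74.0, 71.3, 68.9]  # made-up member endpoints
print(classify_members(members))  # prints {'left_hook': 2, 'seaward': 3}
```

Run the same tally on both ensembles and you see the real story: each one contained members on both sides of the fork, so neither modeling center offered a unanimous answer.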

NHC noted in this tweet that the GFS and ECMWF have been close to even in overall, bulk verification of TC tracks the past few years. This is despite the fact that the ECMWF spectacularly outperformed the GFS in this case and in Sandy. This means that the GFS outperformed the ECMWF in other, less well-publicized scenarios not called Sandy. [Of course, fore-knowledge of model performance was unavailable to forecasters here, and this case certainly will skew the bulk stats for operational models back toward ECMWF for some time.]
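Bulk track verification of the sort cited above boils down to averaging great-circle distances between each model’s forecast positions and the verified best-track positions at matching lead times. A minimal sketch of that idea (the positions below are made up, not real Joaquin data; real verification uses best-track files and homogeneous samples):

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance (km) between two lat/lon points in degrees."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2.0 * r * math.asin(math.sqrt(a))

def mean_track_error_km(forecast, best_track):
    """Average position error over matching lead times; tracks are (lat, lon) lists."""
    errors = [great_circle_km(fl, fo, bl, bo)
              for (fl, fo), (bl, bo) in zip(forecast, best_track)]
    return sum(errors) / len(errors)

# Invented illustrative positions -- NOT real Joaquin data.
forecast = [(31.0, -75.0), (33.0, -73.0)]
best     = [(31.0, -75.0), (33.5, -72.5)]
print(round(mean_track_error_km(forecast, best), 1))  # one number summarizing track skill
```

Averaged over a few seasons of storms, that single number is what makes the two models look “close to even” even after a case like Joaquin, where one of them was dramatically better.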

So what can we learn? Much! Other atmospheric scientists will think of many more lessons than I, but here are two: flexibility to forecast according to the atmosphere and not rigid procedure, and attention to diagnostic analysis.

Rules sometimes should be breakable.
First, this event calls into question the unyielding rigidity of forecast-format requirements. In NHC’s defense, that is the biggest problem here. Procedural rules force NHC to put out that same sort of forecast image for every single TC out to 5 days, regardless of the situation and regardless of how valid making a deterministic forecast is for a unique situation (and every scenario is unique). That mandate is scientifically unjustifiable in *some* scenarios, but probably okay in most. This is one where it couldn’t work. Some situations are not amenable to expressing deterministic certainty out more than even a couple days, such as Joaquin before it parked itself in the Bahamas.

The bureaucratic directives that handcuff operational meteorologists to rigid procedure are very often stupid, one-size-fits-all, and mindless. They force forecasters to offer specifics that they (and we as a science) sometimes simply cannot provide: issue a day-5 path. Sometimes it’s possible. In those cases, fine…do so. We should forecast to the limits of predictability! The problem is, those limits shift around from storm to storm and over time. The limits of predictability also can vary from one forecast cycle to the next for the same storm. [This also means that we should forecast out to days 7 or 8 on those uncommon occasions when certainty is high that far out!]

I understand and respect audience demands for consistent guidance. The problem is: consistency is not the same as accuracy! Which is more important? Reality check: not all situations are identical. Rules should have exceptions. Flexibility is so important in operational meteorology. And yes, that also applies to the type of forecasting I do (something I’m also pushing for, as an aside).

At times, hurricane forecasters should have the freedom to NOT issue deterministic tracks once uncertainty becomes too high. There’s nothing wrong with saying, “we don’t know yet”, when it’s the truth! That applies graphically as well as buried deep in a forecast discussion. As with the PREDICTABILITY TOO LOW tag on day-4-8 convective outlooks elsewhere, NHC forecasters should be allowed the flexibility to say that a potential event isn’t predictable yet. In that case, it would mean stopping a forecast line at a point where uncertainty becomes too high, and saying so, graphically and textually.

And the audience would just have to suck it up and accept that! Remember my reality check: not all situations are identical. Any “customer” insisting on specific TC forecasts 5 days out is making an unjustifiable demand in scenarios like Joaquin and Sandy and a few others. It’s no different than if I walked into a restaurant and demanded to be served a medium porterhouse steak cooked from scratch in five minutes. Maybe some chefs could pull that off somehow, but it is a highly unreasonable request!

Do not pull loose from our diagnostic, analytic roots.
In strongly baroclinically influenced scenarios with wild, dichotomous model spread, and the potential for extratropical transition near the coast *if* the left-hook solution played out, the steak just can’t be ready in time. We are just not fully there yet as a science. Joaquin and Sandy prove that. Yet unreasonable demands have been made of forecasters to produce something that, in some cases, is not justifiable. They are forced to put out *something*–anything, to satisfy a procedural mandate. How is that scientific?

Now, with benefit of hindsight, it would be very prudent to dig into the ECMWF and find out what it “knew” (in terms of physics, assimilation, etc.) that the others didn’t, but also, and more importantly, to revisit the meteorology of these cases to learn what clues the actual atmosphere offered observationally at day-minus-five. Forecasting is not just about models, models, models! It’s also (and ideally should be even more) about meteorological understanding. I hope any meteorological studies that arise on Joaquin perform some meteorology.

One such clue is the presence of an intense (1040+ mb) high over SE Canada and pronounced cool ridging SWward across the eastern CONUS, manifesting a wedge of continentally derived, low-level static stability fronted by a strong baroclinic zone along and just off the SE coast. Pattern recognition (which one of my colleagues facetiously calls “dumb-man meteorology”) has elements of validity for a reason, and says fundamentally that this surface map, and the upper-air conditions that caused it, are not suitable for inland penetration of a TC. I reckon the ECMWF latched onto the valid physical reasons for that pattern-recognition clue (along with its forecast of the cold-core southeastern cyclone aloft), calculated and integrated those out into the future, and thus made the call that was closest to correct.

Finally, some testimony from decades of experience: In general, regardless of any specific case, regardless of tropical, midlatitude or polar meteorology, it behooves all forecasters to stay thoroughly infused with minus-hours (recent past) and time zero (the present). Deeply steep yourself in diagnostics, and not just in a small area near the feature of interest. Yes, that still necessarily includes hand analysis of surface and upper-air charts in order to stay rooted in the past evolution and present state, before ever looking at a single model of any sort.

In complementary fusion with skilled sounding examination, satellite interpretation, radar-detected features, and other diagnostics, map analysis forces the forecaster to swim in the present foundationally — before diving into the future. I practiced this at NHC, I practice it where I am now, and I can attest first-hand that it works. Perfectly? No! But keen attention to diagnosis is far better than just plunging straight into model fields. Everybody (no matter how talented or experienced) busts forecasts sometimes; but consistently good forecasting over time requires more than models: it requires understanding…and making the time for high-quality analysis fosters understanding.

