“Snowmageddon” was predicted – three feet of snow, blizzards whipped up by high winds, a freeze of the whole transport system. What New York got was “snowperbole”. Yes, it snowed, but not as badly as predicted, and many people have been left wondering why the city was effectively shut down for what was, in New York terms, a light dusting. So what happened?
The blame game began immediately. Some meteorologists have already put their hands up. “My deepest apologies to many key decision makers and so many members of the general public,” said the National Weather Service’s Gary Szatkowski on Twitter. The politicians have defended their actions, with New York mayor Bill de Blasio saying the city shutdown was the sensible choice, given the potential for damage and loss of life. He said: “We made the decision; better safe than sorry.”
Is this a “forecast bust” or a case of expectations exceeding capabilities? Clearly people are not satisfied with the system. But is there anything that can be done to reduce such problems in the future?
The answer, of course, is complicated. If you want to study how society could better respond to extreme weather, you need the help of political scientists, behavioural psychologists, media analysts, communication designers, town planners, engineers – and perhaps even some meteorologists.
We will need time and in-depth research to assess precisely what happened in this event, but, on the face of it, the National Weather Service forecast was actually pretty good. Yes, they were predicting a chance of very high snowfall, which so conspicuously failed to materialise over New York City. But Long Island and other parts of the north-east were badly hit, as predicted.
Snow depth is one of the hardest aspects of weather to forecast, and the old saying of “too cold to snow” has some truth behind it. There can be a very fine line between conditions warm and moist enough to bring lots of snow, those which bring only some, and those which mean rain.
It’s also very hard to see in advance just how extreme the level of precipitation might be. We can very often say that the conditions will be right to cause a lot of rain or snow, but knowing whether a storm system will bring three inches of rain or five – or a foot of snow or three – is much more difficult.
But this is critical in terms of the planned response. Five inches of rain might cause dramatic flooding, whereas two inches is just a normal shower. In New York, three feet of snow might cause snowmageddon. Less than a foot? Well, that’s just snow.
Forecasts are probabilities
Weather forecasting has made enormous advances. A forecast of the quality and accuracy provided by the National Weather Service this week would have been unthinkable 20 years ago. But as we have advanced we have realised more and more that we can only think of forecasts in terms of probabilities.
The essence of extreme events is that, by definition, they are rare at a given location; conditions have to combine in just the right way to give us the worst-case scenario. Otherwise, we just get “severe but typical”, for which we’re usually quite well prepared. Advance predictions of the most extreme winds or rain will always carry significant uncertainty.
The paradox of extreme events is that it is impossible to judge the accuracy of a forecast from a one-off; we need enormous amounts of computer power not just to make the forecast, but to evaluate and improve the reliability of the forecast by studying past events. Some have called for New York to be given a special high-powered supercomputer to help make more accurate predictions in future.
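One standard way to evaluate probabilistic forecasts against past events is the Brier score: the average squared gap between the forecast probability and what actually happened. Here is a minimal sketch with purely illustrative numbers (the forecasts and outcomes below are invented for the example, not real data):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error of probability forecasts; 0 is perfect, 1 is worst."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Ten hypothetical heavy-snow forecasts (as probabilities) and what
# actually happened (1 = the event occurred, 0 = it did not).
forecasts = [0.9, 0.1, 0.8, 0.2, 0.7, 0.1, 0.6, 0.3, 0.9, 0.1]
outcomes  = [1,   0,   1,   0,   0,   0,   1,   0,   1,   0]

print(round(brier_score(forecasts, outcomes), 3))  # prints 0.087
```

The point is that a single forecast, right or wrong, tells you almost nothing; only a long record of forecasts scored this way reveals whether the probabilities were trustworthy.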
More processing power might well have helped in this case, but computers are not the answer to everything. Improved observations are just as important – larger numbers of measurements, provided with greater accuracy, from weather stations, radars, satellites, aircraft and elsewhere. But we also need a greater understanding of how best to use these observations and, perhaps first and foremost, a deeper understanding of the weather processes themselves. This is the “science behind the weather” that keeps me busy.
We don’t know everything
It is important to realise that there is still lots about how weather works that we just don’t understand. Even where we do have a good understanding, we need to simplify things so that our computers are capable of processing all the information we have in time to make a useful forecast.
Some computer programs are so complex they take days or even weeks to model a single cloud. That’s not much use to forecasters who have to make a prediction now, about tomorrow’s weather, for a whole city, state or country.
Extreme weather warnings are crucial and can save lives, but it’s important that neither scientists nor politicians promise things they can’t deliver. In a way, forecasters are victims of their own success, by raising people’s expectations of their abilities.
Any ambitious forecast system will have false alarms. Imagine, for example, an extreme storm that might occur only once in a hundred years – about a 0.0027% chance on any given day, or 0.01% if we restrict ourselves to winter months. Forecasting a 10% probability of this storm would be an incredibly impressive achievement. However, nine times out of ten such a forecast will be perceived as “wrong” (though it may be, on its own terms, a perfect forecast).
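The arithmetic behind those percentages is worth making explicit. This back-of-envelope sketch (assuming a winter of roughly 90 days) reproduces the figures above:

```python
# Once-in-a-hundred-years storm: chance on any given day of the year.
days_per_century = 100 * 365.25
daily_chance = 1 / days_per_century       # ~0.0000274

# Restricting attention to winter months (~90 days a year).
winter_chance = 1 / (100 * 90)            # ~0.000111

print(f"{daily_chance:.4%}")              # prints 0.0027%
print(f"{winter_chance:.2%}")             # prints 0.01%

# A well-calibrated 10% forecast multiplies the baseline odds roughly
# 3,650-fold -- and the storm still fails to appear 9 times out of 10.
print(f"{1 - 0.10:.0%}")                  # prints 90%
```

So even a forecast that raises the odds of the storm by three orders of magnitude will usually look like a false alarm to the public.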
While false alarms might be expensive, and lead to the danger that people don’t pay heed to the next one (“crying wolf”), it’s better to be safe than sorry. Response must be weighed against both the severity of the event and its likelihood.
Authorities must learn how to make the most objective and rational decisions (which they may well have done in this case) while the public need to understand that “we acted on the best information available” does not mean “we got it wrong”.