Everyone talks about the weatherman… January 29, 2015 at 6:54 pm by Glenn McGillivray
Once again a big storm was forecast; once again it failed to materialize (at least for many New Yorkers and New Jerseyans); and once again meteorologists are being criticized for dropping the ball. What’s more, weather models are also being blamed for at least part of the failure, and people are, of course, again making the statement “If I were wrong as often as the weatherman, I’d be out of a job.”
To recap, a historic nor’easter was set to strike New York, New Jersey and the New England states, as well as Atlantic Canada, beginning the evening of Monday, January 26 and continuing through Tuesday the 27th. Cities like Boston and New York were set to get hammered with snow. But when the white stuff settled, NYC got only about seven inches, not the two or three feet that were forecast for much of the region.
To be fair, forecasts rang true for much of the area in question, particularly New England, where high winds, large snow drifts, storm surge, a seawall failure, flooding and power outages were rife.
But that was of little comfort in NYC, where costly and disruptive measures were taken, including a shutdown of the NYC subway system and implementation of a travel ban.
So what happened with the forecast for NYC?
According to NOAA, there was a very narrow gradient, measuring just 50 to 150 miles across, that demarcated a ‘western wall of snow’. On one side of the gradient there would be little snow; on the other, a lot. This gradient didn’t set up where the model indicated it might, missing NYC by about 50 miles to the east.
The tool used for this particular forecast was a European model that, by all reports, has proven to be more accurate on average than the model built by the U.S. weather service. It was the same model that called Superstorm Sandy almost perfectly.
There is no doubt that tools and methods for forecasting weather can (and will) improve, particularly as the two GOES satellites that currently provide weather forecast data for North America are replaced next year with more advanced GOES-R satellites, which will take higher-resolution images and be able to sense more bands of the electromagnetic spectrum.
But this latest forecasting blip has put a spotlight on a pair of other issues that must be looked at in tandem with the technical side of weather forecasting.
The first is sensationalism. We live in a world of hyperbole, where people choose words like ‘unbelievable’, ‘amazing’ and ‘incredible’ in the normal course when lesser descriptors would do just fine. It’s a world where small flubs are called ‘epic fails’ and where ‘literally’ is often used figuratively. A place with a non-stop news cycle that seems to subscribe very much to philosopher Walter Benjamin’s idea of the ‘permanent emergency’. It’s a world where labels (and, these days, hashtags like ‘Snowmageddon’ and ‘Snowpocalypse’) do nothing but add fuel to the fire. People, I believe, can easily spot exaggeration (though they may be guilty of it themselves) and have learned to tune it out. It’s unclear to me how society can address this problem, which now appears to be deeply embedded in our popular culture. But perhaps a good start is if social media users out there refused to take part in the labelling and hashtagging of average events as ‘exceptional’. Maybe, in time, we can move this mountain.
The second issue – more tangible and much more likely to be addressed institutionally – is the matter of communicating forecast uncertainty.
Essentially, weather is non-linear, and regardless of model capability, computer processing power and forecaster experience, there will always be uncertainty. Forecasters are, at bottom, trying to make an educated guess about something they do not control. As one warning coordination meteorologist for NOAA put it, “There is a limit to predictability when it comes to forecasting and it’s always going to be there. Human behavior behaves like that, the stock market behaves like that and weather behaves like that.”
Weather forecast output from models includes confidence levels, and these must somehow be used when communicating to the general public and policymakers.
Whether we use the very same lingo as the forecasting models (e.g., low confidence, high confidence, very high confidence), or whether some other system is devised (such as a colour-coded warning system or a scale of one to ten), the level of confidence in a forecast must be made clear to end-users. Indeed, brand new (and very timely) research from the University of Washington released January 27 indicates that the public may respond best to severe weather warnings that include a probability estimate.
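To make the idea concrete, here is a minimal sketch of how a forecast probability could be translated into a plain-language confidence label for a public warning. The thresholds, labels and example warning below are entirely hypothetical illustrations, not the categories of NOAA or any actual forecasting system.

```python
# Hypothetical mapping from a forecast probability to a plain-language
# confidence label. Thresholds and wording are illustrative only.

def confidence_label(probability: float) -> str:
    """Map a forecast probability (0.0 to 1.0) to a confidence label."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    if probability >= 0.9:
        return "Very high confidence"
    if probability >= 0.7:
        return "High confidence"
    if probability >= 0.4:
        return "Moderate confidence"
    return "Low confidence"

# A warning could then pair the event with both the label and the number.
print(f"Snowfall of 24+ inches: {confidence_label(0.55)} (55% chance)")
```

Whatever the exact scheme, the point is that the probability the models already produce travels with the warning, rather than being stripped out before it reaches the public.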
Seeing as it is not up to the weatherman to decide to shut down public transit, implement a travel ban or order an evacuation, for instance, it is critical that those responsible for such calls be given every piece of information they need to make good decisions and set the overall tone for an impending event, without going overboard.
Better and/or greater use of certainty statements will aid in this endeavour.
In the meantime, there will be more missed calls. Get used to them, and don’t shoot the messenger.