Tuesday, January 27, 2015

January Blizzard of 2015

(Disclosure: I am in no way, shape, or form a meteorologist or forecaster)

Snowfall totals from January 26th/27th as of 5:08pm (via @NWS)
Large storms that impact high-population areas of the world (especially the US) always draw some sort of response from the general public. Whether that response notes that the forecast was right on or that it missed, it is always heard. Yesterday's storm that impacted the northeast US coast was no different - it was expected to bring over a foot of snow to areas north and east of Philadelphia, all of New Jersey, New York City, and all the way up to Boston and Maine.

Forecast snowfall by the NWS New York on January 26th - ~20 hours out
Just a day ahead of the storm, there were multiple forecasts of 18-24" for the region covered by the National Weather Service (NWS) New York office, which did not seem incredibly outlandish. The two main global models - the GFS and the Euro - were consistently showing significant amounts of snow for the entire region, although the two differed by more than a few inches for the greater New York City area. These same models also showed 2-3 feet of snow for the Boston metropolitan area and points north, which those areas did indeed receive.

During and after the storm, the system tracked slightly farther east, and New York was left with a maximum of about 11-12 inches of snow, significantly under the "official" forecast amounts. While that is still a very sizable amount of snow, for which the city appeared to prepare sufficiently, it left many disappointed as the "potentially historic" storm did not happen. This happened for a multitude of reasons, both technical and human, but the core issue is one of human communication: how to accurately portray forecasts and the inevitable uncertainty that arises in trying to predict the future. A forecast presented by anybody - the NWS or otherwise - does not capture the entire bell curve of possible outcomes.

No forecast is going to be 100% correct, but it may be 100% wrong, depending on how events turn out. The NWS was one of the institutions whose snow estimates for this storm leaned toward the high side; it appears that one of the models was favored over the others due to its past reliability. This gets us to the underlying issue: forecasts are educated, scientific guesses. They are analyzed thoroughly, but there is no getting around the fact that we cannot completely predict future weather. Try as we might, there is an inevitable amount of uncertainty and unknown potential. One of the key takeaways from this storm is that this known unknown needs to be presented along with the forecast, so that people have full knowledge of the situation.

Several different stations had their own takes on what the chances of snow in the NYC area would be. (@brianstelter)

All in all, the forecast was pretty decent. The total numbers were spot-on from around Long Island (30 inches!) to Connecticut and Boston, but fell short on the western side. As the image above shows, everybody has their own take on the information available to them. If uncertainty is expressed to forecasters, then that information needs to be conveyed to the public as well. Why do we assume - or at least make it seem - that a forecast will fall into a single category or gradient, when we know those are only the most statistically likely outcomes?

Capital Weather Gang forecasts, including "boom" and "bust" percentages
One of the groups doing weather forecasting in the DC region is the Capital Weather Gang, who have mastered displaying a forecast along with the potential for totals to come in above or below what is predicted. Especially when presented in a highly readable format, this gives the general reader a range of snow/precipitation that might be expected, as well as the explicit knowledge that the final result may exceed or fall below what is written. Some system like this needs to be adopted on a larger scale by others to help convey the challenges of forecasting. While we wish that forecasts could be completely accurate all the time, that is simply not the case.
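To make the idea concrete, here is a minimal sketch - my own illustration, not the Capital Weather Gang's actual method - of how a forecast range with "boom" and "bust" percentages could be summarized from a set of ensemble model totals. The numbers and the 12-24" range below are made up for the example:

```python
# Illustrative only: a hypothetical set of ensemble snowfall totals (inches)
# for one location. Real forecasts draw on many models, runs, and human judgment.
ensemble_totals = [8, 10, 11, 12, 14, 15, 16, 18, 20, 22, 24, 26, 28, 30, 33, 36]

def summarize(totals, low=12, high=24):
    """Fraction of ensemble members below, inside, and above the forecast range."""
    n = len(totals)
    bust = sum(t < low for t in totals) / n    # came in under the range
    boom = sum(t > high for t in totals) / n   # came in over the range
    on_target = 1 - bust - boom                # landed inside the range
    return bust, on_target, boom

bust, on_target, boom = summarize(ensemble_totals)
print(f'Forecast: 12-24" | bust (<12"): {bust:.0%} | '
      f'on target: {on_target:.0%} | boom (>24"): {boom:.0%}')
```

Even a simple readout like this communicates the same message as the graphic above: the headline range is only the most likely slice of a wider distribution, and the reader is told up front how likely the over and under scenarios are.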

Forecasting, like any science, is challenging. There will be good days, and there will be bad days. But at the end of each of those days, we need to evaluate what went right and/or wrong, and feed those lessons into future forecasts. Most forecasters do not do what they do for the money - they are in the job, just like any other, because it is what they like: the work, the challenge, the reward, whatever it may be. A forecast that busts because an area only received a foot of snow instead of two or three feet? There's some sort of humor in that. They forecast that it would snow a significant amount, and it did! Were there wording mistakes by people all around when calling for the "potentially historic" storm to topple the charts of largest storms? Yes, that's for certain. Communicating the risk in a forecast is part of what needs to be done to give people the information they need to go about their day - just as we say there might be a 75% chance of rain on a given day - so that it can become part of their decision process about how to proceed.

In the end, the forecasts for this system were not bad. Areas that were forecast to receive snow received snow, and those that were not generally did not. The global and mesoscale models did what they were supposed to and provided information to guide forecasts, and forecasters did what they thought was best. Mother Nature did what she does best and threw a couple of curveballs. No entity is infallible, but every entity has the opportunity to see where improvements may be made and execute on those. Presenting forecasts to the public is no different.

Additional reading material