David Rosenberg, former chief economist at Merrill Lynch, is credited with saying that God invented economists to make weathermen feel good about themselves. Yet despite how common it is for people to point out the failures of these two groups, weather forecasters have actually gotten quite a bit better in recent years.
Take this past January. As a polar vortex brought frigid Arctic air to large sections of the United States, meteorologists predicted the unusual arrival of balmy, spring-like weather within a week. Sure enough, the cold snap gave way to an unlikely warm spell.
The rising accuracy of weather prediction models has led some weather sources to publish ten-day forecasts. But how much further into the future can scientific weather forecasting go?
Answering that question is the subject of a new study led by Fuqing Zhang, a meteorologist at Pennsylvania State University in State College. The work, entitled "What is the Predictability Limit of Midlatitude Weather?", has been accepted for publication in the Journal of the Atmospheric Sciences and suggests that accurate forecasts can be made for longer than 10 days - just not by much.
At least in midlatitude regions - where most of the world's population lives - Zhang says the forecast limit is around two weeks. "It's as close to be the ultimate limit as we can demonstrate," Zhang told Science Magazine.
The reason relates to a fairly well-known scientific concept: the butterfly effect. First described by Massachusetts Institute of Technology mathematician and meteorologist Edward Lorenz in a 1969 paper, the theorised effect shows up when two atmospheric models yield wildly different results after about two weeks due to tiny differences in the models' initial conditions. That difference in starting conditions could be the result of any small disturbance - including a butterfly flapping its wings.
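The effect is easy to reproduce in miniature. The sketch below integrates Lorenz's classic three-variable system (his 1963 model, with the standard parameter choices sigma=10, rho=28, beta=8/3 - an illustrative toy, not the global models used in the study) from two starting points that differ by one part in a billion, and watches the trajectories drift apart.

```python
# Toy demonstration of the butterfly effect using the Lorenz (1963) system.
# Illustrative sketch only; parameters are Lorenz's classic choices.
import math

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz equations."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, dt):
    """One fourth-order Runge-Kutta step."""
    def shift(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = lorenz(state)
    k2 = lorenz(shift(state, k1, dt / 2))
    k3 = lorenz(shift(state, k2, dt / 2))
    k4 = lorenz(shift(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def separation(p, q):
    """Euclidean distance between two states."""
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

# Two runs whose starting points differ by one part in a billion.
a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)
dt, steps = 0.01, 3000
for _ in range(steps):
    a, b = rk4_step(a, dt), rk4_step(b, dt)

print(f"final separation: {separation(a, b):.3f}")
```

By the end of the run the two trajectories no longer resemble each other at all - the initial billionth-part difference has been amplified by many orders of magnitude, which is the same loss of predictability the forecasters face, just on a far smaller system.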
That snowballing of tiny differences in initial conditions into completely unrelated forecasts is precisely what the researchers found. They were granted the extraordinary opportunity to run global weather models at exceptionally high resolution on the supercomputer-powered modelling systems of the US National Weather Service (NWS) and the European Centre for Medium-Range Weather Forecasts.
Since the two modelling systems are different, the researchers used them to run separate tests on similar or identical data. For the starting point, the scientists assumed a level of uncertainty about current atmospheric conditions ten times smaller than forecasters work with today. The assumption was deliberate: it let them see how much longer accurate forecasting might extend if weather observations - currently collected by satellites, weather balloons, and other instruments - were to improve dramatically in coming decades.
The answer the researchers found was that, with much more accurate knowledge of current weather conditions, reasonably trustworthy day-to-day forecasts could be extended by up to five days beyond the current 10-day limit. Past that point, however, the US and European models diverged so completely as to destroy all confidence in any weather prediction.
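A back-of-the-envelope calculation shows why a tenfold better starting point buys so little extra time. If forecast error grows exponentially with some fixed doubling time, then shrinking the initial error by a factor of ten only delays the moment the error returns to its old starting size by the doubling time multiplied by log2(10), about 3.3 doublings. The ~1.5-day doubling time below is an illustrative assumption, not a figure from the study:

```python
# Why ten-times-smaller initial uncertainty buys only ~5 extra days,
# assuming exponential error growth. The 1.5-day doubling time is an
# illustrative assumption, not a number taken from the study.
import math

doubling_time_days = 1.5          # assumed error-doubling time
initial_error_reduction = 10.0    # ten times smaller starting uncertainty

# Time for the shrunken error to grow back to its former starting size:
extra_lead_time = doubling_time_days * math.log2(initial_error_reduction)
print(f"extra forecast lead time: {extra_lead_time:.1f} days")  # ≈ 5.0 days
```

Because the payoff scales with the logarithm of the improvement, even enormous gains in observational accuracy translate into only a handful of additional forecast days - consistent with the roughly five extra days the researchers report.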
The researchers acknowledge that better modelling could extend forecasts beyond two weeks, but the prospects of that are uncertain. For now, the two-week limit appears to validate in practice what Edward Lorenz argued in theory fifty years ago. "It's a very credible result," said Eugenia Kalnay, a meteorologist at the University of Maryland in College Park who previously led the NWS's modelling arm, according to Science Magazine. "It's nice because it's simple."