How accurate are climate change forecasts?


Climate is what you expect, weather is what you get
Attributed to Mark Twain amongst others

Climate refers to the average conditions of the atmosphere, oceans, and land surfaces over a period of time.  Weather is the state of the atmosphere and ocean at a particular time.

Climate changes over millennia, as exhibited by ice ages and interglacial periods.  It also changes over centuries: for example, the Little Ice Age in 17th-century Europe, which has been associated with the Maunder Minimum from 1645 to 1715, when the approximately 11-year sunspot cycle disappeared and sunspots were rare.  Over the past 100 years, the climate appears to have changed due to human activity.

The greenhouse effect was first mooted in the 1770s.  In the 1820s, the French physicist Joseph Fourier suggested that the Earth was warmer than it was calculated to be because light from the sun penetrated the atmosphere easily and heated the Earth, while the heat radiated by the Earth could not pass back through the atmosphere to the same degree.

In 1859, the physicist John Tyndall, who was familiar with the theory, decided that experimental proof was needed, and he set about developing the sensitive apparatus required.  He found that the main gases in the atmosphere, nitrogen and oxygen, were virtually transparent to heat radiation, while more complex compound molecules absorbed heat more effectively.  In particular, he found that carbon dioxide and water vapour were especially effective at blocking heat radiation.  These gases were clearly responsible for absorbing most of the heat trapped by the atmosphere.  On 10 June that year, Tyndall demonstrated these experiments before a meeting of the Royal Society with Albert, the Prince Consort, in the chair.

In 1894, the Swedish chemist Svante Arrhenius set out to calculate the amount of warming for a given increase in atmospheric carbon dioxide.  It took him more than a year, but he calculated that a doubling of the carbon dioxide concentration would cause warming of around 5 to 6 degrees C.

In 1958, daily atmospheric carbon dioxide measurements commenced at Mauna Loa, Hawaii, under a program set up by C. D. Keeling.  In 1975, the first three-dimensional global model of carbon-dioxide-induced climate change was constructed by Suki Manabe.

Like weather models, climate models are based on physical laws such as conservation of energy, mass, and momentum, as well as thermodynamic and radiation laws.
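The conservation-of-energy principle underlying these models can be sketched with a textbook zero-dimensional energy balance (this is a toy illustration, not any actual climate model; the solar constant, albedo, and emissivity figures are standard round values, not taken from this text):

```python
# Zero-dimensional energy balance: at equilibrium, absorbed solar
# radiation equals emitted infrared radiation:
#   S0/4 * (1 - albedo) = emissivity * sigma * T^4
# Values are standard textbook figures, not from any specific model.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.3       # planetary albedo (fraction of sunlight reflected)

def equilibrium_temperature(emissivity):
    """Surface temperature (K) that balances absorbed and emitted radiation."""
    absorbed = S0 / 4 * (1 - ALBEDO)
    return (absorbed / (emissivity * SIGMA)) ** 0.25

# With emissivity 1 (no greenhouse effect) the Earth would sit near 255 K;
# an effective emissivity near 0.61 reproduces the observed ~288 K average.
print(round(equilibrium_temperature(1.0)))   # 255
print(round(equilibrium_temperature(0.61)))  # 288
```

The roughly 33-degree gap between the two answers is the greenhouse effect that Fourier inferred and Tyndall demonstrated.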

Some of the important drivers of a global climate model relate to:

  • The response to variability in solar irradiance on a range of time scales.
  • Changes to the Earth’s energy balance, at the surface and at the top of the atmosphere, from volcanic eruptions.
  • How radiation is absorbed and reflected on its way through the atmosphere and at the surface.
  • Atmosphere and ocean dynamics, and how energy and momentum are transported through the different media.
  • How greenhouse gases and aerosols affect the Earth’s climate and climate variability.
  • Sea ice and polar ice sheets.
  • Various climate ‘feedbacks’, such as the interaction of clouds and water vapour with the warming climate, and the changing absorption or emission of CO2 by the ocean and land surface.

Climate models simulate historical variations between 1850 and 2005 and then project forward to 2100.  The historical simulations are not designed (or expected) to reproduce the observed sequence of weather and climate events during the 20th century, but they are designed to reproduce observed multi-decadal climate statistics, such as averages.  The 21st century simulations are run from 2006 to 2100, driven by prescribed anthropogenic forcings.  Owing to uncertainties in the model formulation and the initial state, any individual simulation represents only one of the possible pathways the climate system might follow.  To allow some evaluation of these uncertainties, it is necessary to carry out a number of simulations, either with several models or by using an ensemble of simulations with a single model.
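The ensemble idea can be sketched with synthetic numbers (the trend, noise level, run count, and the `one_run` helper are all illustrative assumptions, not output from any real model):

```python
# Sketch of why ensembles are used: runs that share the same forcing but
# differ in their (random) initial state follow different individual
# pathways; the spread across runs gives a handle on that uncertainty.
# Synthetic numbers only, not from any real climate model.
import random
import statistics

random.seed(7)

def one_run(n_years=30, trend=0.02, noise=0.1):
    """One simulated pathway: shared forced trend plus run-specific noise."""
    return [trend * t + random.gauss(0, noise) for t in range(n_years)]

ensemble = [one_run() for _ in range(10)]

# Multi-decadal statistics: per-run 30-year mean, then ensemble mean/spread.
run_means = [statistics.mean(run) for run in ensemble]
print(round(statistics.mean(run_means), 2))   # forced signal, ~0.29
print(round(statistics.stdev(run_means), 2))  # spread from initial-state noise
```

The ensemble mean recovers the forced signal, while the across-run spread measures the uncertainty that no single simulation can reveal.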

Questions about the accuracy of climate change forecasting surfaced in 2006, when climate change deniers started pointing out that the global temperature had not increased since 1998.  This claim was echoed for several years by a number of commentators.

There are many natural factors which influence temperature over a range of timescales, in addition to the increase in greenhouse gases in the atmosphere.  Still, the claim went largely unanswered by climate scientists for several years.  This contributed to a decline in public belief that human activity was causing the climate to change, and it provided conservative politicians with a reason to refuse to take any action on emission reduction.

In turn, this was probably a significant reason for the Senates in Australia and the USA refusing to pass legislation to introduce emissions trading schemes in 2009.

The claim that the temperature had not increased since 1998 was a fallacious argument, but it was not countered effectively.  There was an extreme El Niño event in 1997-98, which raised the global temperature in 1998 to a then record high.  Temperatures in subsequent years were warmer than 1997 or any earlier year, by a significant margin.  The denialist claim cherry-picked the single very hot year of 1998.

The subsequent extreme El Niño event, in 2015-16, meant that 2015 was significantly warmer than 1998, so the claim that there has been no warming since 1998 now has even less credibility.  As the next extreme El Niño event may not occur until the early 2030s, however, there is significant potential for deniers to claim before long that there has been no significant warming since 2016!

Such a claim would again tend to reduce the level of belief in climate change and provide conservative politicians with a further excuse for inaction.

In 2007, Scott Armstrong, a business professor at the Wharton School and compiler of the book “Principles of Forecasting”, challenged Al Gore to a bet on how global average temperature would change.  Gore, the former US Vice President, who made the movie “An Inconvenient Truth” and wrote the book “The Assault on Reason”, did not take the bet.  In early 2018, Armstrong said that he had enough data to claim he would have won the bet.  He concluded that global temperature deviations since 2007 had fallen easily within the natural level of variation, and that “no change” was the most accurate way to describe global weather patterns over the past decade.  This was reported in The Australian newspaper on 1 February 2018.  “When you lack scientific evidence, the primary way to keep ‘global warming’ alive is to avoid having a testable hypothesis,” Professor Armstrong said.

In fact, there was a statistically significant upward trend in annual global temperature between 2007 and 2016, but it was heavily influenced by the heat generated by the extreme El Niño event of 2015-16.  It will take several years before we can determine whether Armstrong would have won or lost his proposed bet.

As with the earlier claim that there had been no global warming since 1998, ten years is too short a period to separate the long-term signal from the short-term noise (or natural variation).  Also as with the earlier claim, the ENSO cycle had a role to play: there was a strong La Niña in 2010-11, which cooled the globe significantly.
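The signal-versus-noise point can be illustrated with synthetic data (the trend and noise figures below are illustrative assumptions, with the trend chosen to be broadly consistent with the 1.8 degrees C per century figure quoted later in this piece):

```python
# Illustration, with synthetic data rather than real observations, of why
# a ten-year window is too short to separate a long-term warming trend
# from year-to-year natural variability (ENSO, volcanoes, and so on).
import random

random.seed(42)
TRUE_TREND = 0.018   # assumed warming, deg C per year (~1.8 C per century)
NOISE_SD = 0.12      # assumed year-to-year natural variability, deg C

def slope(values):
    """Ordinary least-squares slope of values against year index."""
    n = len(values)
    xbar = (n - 1) / 2
    ybar = sum(values) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(values))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

years = 60
temps = [TRUE_TREND * t + random.gauss(0, NOISE_SD) for t in range(years)]

# Ten-year trends scatter widely around the true value...
ten_year = [slope(temps[s:s + 10]) for s in range(years - 10)]
print(min(ten_year), max(ten_year))

# ...while the full 60-year trend recovers it closely.
print(slope(temps))   # close to 0.018
```

A cherry-picked ten-year window can show almost any trend, including cooling, even when the underlying warming never stopped.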

Also like the earlier claim, the apparent inability of climate models to accurately predict the near-term climate has the potential to damage their credibility.

Nate Silver, in his 2012 book “The Signal and the Noise” evaluated the 1990 and 1995 IPCC temperature forecasts to 2010.  He found that their 1995 prediction of warming of 1.8 degrees C per century for the business-as-usual scenario was about right over the period 1995 to 2010.

Climate scientists have, belatedly, started to question the accuracy of their models.

Fyfe, Gillett and Zwiers, writing in Nature Climate Change in 2013, found that observed global warming over the period 1993 to 2012 was significantly lower than that simulated by the climate models, specifically those of Phase 5 of the Coupled Model Intercomparison Project (CMIP5).  These models generally simulate natural variability, including that associated with ENSO and explosive volcanic eruptions, as well as estimating the combined response of climate to changes in greenhouse gas concentrations, land use, solar variability, and several other factors.

They examined 117 simulations of the climate by 37 participating models.  The evidence indicated that the then current generation of climate models did not reproduce the observed global warming over the 20-year period, nor did they predict the slowdown over the period 1998 to 2012.  They found that such an inconsistency, if the models were correct, would be expected to occur by chance only once in 500 years.

This finding does not confirm the view of deniers that global warming had ceased or that the greenhouse effect is non-existent.  Rather, it means that the models need improving.  That would start with more accurately simulating natural variations, some of which may not be adequately represented in the models.  As the models improve, we will then have a more accurate estimate of the relationship between atmospheric concentrations of greenhouse gases and temperature.

One of several factors limiting the accuracy of climate forecasting models is clouds.  There are many climate models, and they do not all agree on the effects of clouds.  This matters because some types of cloud enhance the greenhouse effect by trapping heat rising from the Earth, while other types prevent sunlight from reaching the Earth’s surface, so cooling the planet.  Which type of cloud will increase most as the planet warms is at present uncertain.  See “The cloud conundrum” by Kate Marvel in Scientific American, December 2017.

The planetary response to carbon dioxide doubling is called equilibrium climate sensitivity.  The degree of warming the models predict for a doubling of carbon dioxide ranges from 2.0 to 4.4 degrees C – quite a wide range.
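Under the standard logarithmic relation between CO2 concentration and equilibrium warming, the quoted sensitivity range translates directly into projected warming for a doubling.  A minimal sketch (the concentration figures are illustrative round numbers):

```python
# Warming from a CO2 increase under the standard logarithmic relation:
#   delta_T = ECS * log2(C / C0)
# where ECS is the equilibrium climate sensitivity (warming per doubling).
# The 2.0-4.4 C sensitivity range is the model spread quoted in the text;
# the concentrations below are illustrative round figures.
import math

def warming(c_now, c_ref, ecs):
    """Equilibrium warming (deg C) for a rise from c_ref to c_now ppm."""
    return ecs * math.log2(c_now / c_ref)

# A rise from ~280 ppm (pre-industrial) to 560 ppm is exactly one doubling,
# so the projected warming simply spans the sensitivity range itself.
for ecs in (2.0, 4.4):
    print(round(warming(560, 280, ecs), 1))  # 2.0 then 4.4
```

Because the relation is logarithmic, each successive doubling adds the same warming, which is why sensitivity is conventionally quoted per doubling rather than per ppm.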

This is an extract from my book “Forecasting, the essential skills” which evaluates the performance of forecasting in a range of fields and which contains numerous case studies.

Charlie Nelson