Uncertainty is an essential part of both measurement and prediction. But how confident can we be in our efforts to measure changes in the climate and also to predict future changes resulting from our geoengineering choices?
Why might it be difficult to detect the effect of geoengineering on global mean temperature in the years immediately following its use?
The global temperature curve shows large year-to-year variability, which can mask the underlying trend over the short term. A lot of data are needed to detect a change in climate: ideally 30 years or more.
Put another way: recall that climate is a distribution (Session 1), so detecting climate change means detecting a shift in the distribution. This may not be obvious over short time periods.
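The point about short records can be illustrated with a simple simulation. The sketch below (illustrative only: the trend size, the amount of year-to-year variability, and the normal-noise assumption are all chosen for demonstration, not taken from real data) generates synthetic annual temperature anomalies and shows how much the estimated trend varies depending on how many years of data are used.

```python
import random
import statistics

random.seed(42)

TREND = 0.02      # assumed underlying warming trend (deg C per year) - illustrative
NOISE_SD = 0.15   # assumed year-to-year natural variability (deg C) - illustrative

def simulate(years):
    """Simulate annual temperature anomalies: a steady trend plus random noise."""
    return [TREND * t + random.gauss(0, NOISE_SD) for t in range(years)]

def trend_estimate(series):
    """Ordinary least-squares slope of the series against year number."""
    n = len(series)
    xs = list(range(n))
    x_mean, y_mean = statistics.mean(xs), statistics.mean(series)
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, series))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den

# Repeat the experiment many times for each record length and measure how
# much the estimated slope spreads around the true trend.
for years in (5, 10, 30, 100):
    slopes = [trend_estimate(simulate(years)) for _ in range(200)]
    spread = statistics.stdev(slopes)
    print(f"{years:3d} years: spread of slope estimates = {spread:.4f} deg C/yr "
          f"(true trend {TREND})")
```

With only 5 or 10 years of data the slope estimates scatter widely, so the noise can easily produce an apparent cooling or an exaggerated warming; only with decades of data do the estimates settle close to the true trend. This is why 30 years or more is the usual benchmark.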
When detecting climate change, scientists must decide how different two distributions need to be, and how many years of data are sufficient, before they can reliably conclude that the climate has changed. They must also decide which aspects of the Earth system (temperature, rainfall, sea ice, and so on) to base that decision on.
There are no simple answers to these questions, because ‘difference’ is a continuous spectrum, and scientists will be interested in a variety of aspects of climate.
This means it is likely to be extremely difficult to be sure of detecting climate change caused by geoengineering – potentially needing many years of data – and to be confident in attributing those changes to geoengineering rather than other factors.
Not only that, but deciding whether an individual event is unexpected or ‘extreme’ is essentially subjective. You choose a threshold, and there is always some probability this threshold will be exceeded occasionally as part of natural variability. However, the more extreme your threshold, the less likely this will be.
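The trade-off between the threshold you choose and the chance of crossing it by natural variability alone can be made concrete. The sketch below is a simplified illustration, assuming annual extremes follow a normal distribution measured in units of the natural variability (standard deviations); real climate variables need not be normally distributed.

```python
import math

def exceedance_prob(threshold_sd):
    """Probability that a single year exceeds the threshold,
    assuming a standard normal distribution of annual values."""
    return 0.5 * math.erfc(threshold_sd / math.sqrt(2))

def prob_at_least_one(threshold_sd, years):
    """Probability the threshold is exceeded at least once in the given
    number of years, assuming independent years."""
    p = exceedance_prob(threshold_sd)
    return 1 - (1 - p) ** years

# A modest threshold is crossed often by chance; a more extreme one rarely.
for threshold in (1.5, 2.0, 3.0):
    p1 = exceedance_prob(threshold)
    p30 = prob_at_least_one(threshold, 30)
    print(f"threshold {threshold} sd above the mean: "
          f"yearly probability = {p1:.4f}, "
          f"at least once in 30 years = {p30:.3f}")
```

A 2-standard-deviation threshold is exceeded in any one year with probability of roughly 2%, yet over 30 years there is close to an even chance of seeing at least one such event purely from natural variability. Raising the threshold to 3 standard deviations makes a chance exceedance far less likely, but also makes genuine changes harder to detect.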
If a severe heat wave, drought or storm were to occur immediately after you began geoengineering, how would you decide if this was unlikely in an un-engineered climate?