
Grassroots climate prediction

Updated Monday, 13th February 2006
The development of the climateprediction.net climate model was almost as involved and intricate as the climate it sought to map. Bob Spicer recalls the inspirations and frustrations of getting the model created and reveals exactly what the number crunching is for.


[Image: climate computer model]

For years scientists have striven to build models of the climate so that the effect of human activity can be assessed and the future predicted. However, the climate system is highly complex, and all modelling attempts have been frustrated by limitations in computing power.

Nevertheless, through the 1980s and 1990s computer processors improved dramatically, to the extent that today’s desktop has more power than the large mainframe computers of a mere decade ago. As computer power increased, so did the complexity of climate models.

By the end of the 20th century it proved possible to build models that reproduced well the dynamics of the atmosphere, the oceans, ice sheets and even vegetation. The unfortunate thing was that doing this, even on supercomputers, required some short cuts or approximations – so-called parameterizations.

Now, once you introduce such short cuts, you are never quite sure whether the result you get is a real result or an artifact of the parameterization scheme. To test this, you need to run the model millions of times with slightly different parameter values to see how susceptible the model is to the short cuts and what the consensus result is.

Running climate models on supercomputers is very expensive; after all, big, powerful computers are in short supply. But what if you could run the models on ordinary machines? Would that make it possible to explore the effect of different parameterizations? Would it even be possible at all?

In the late 1990s Myles Allen was looking in wonder at the SETI@home project. The Search for Extra-Terrestrial Intelligence (SETI) had run into problems – they just didn’t have the computing power available to search all their radio telescope data for the tell-tale signals that might indicate radio transmissions from other worlds. But rather than just admit defeat, the SETI team had made an imaginative leap and created a computer program that almost anyone connected to the Internet could download. This program would process a tiny part of the huge volume of data the SETI team had, but only when that computer wasn’t doing much.

Even more cleverly, the program would automatically send back its results when it had finished and then request another bit of the SETI data. SETI@home proved to be hugely popular – here was a way that people could lend their computer time to help a major science project, but only when they could spare it. Myles realised that this ‘distributed computing’ offered the potential of dramatically increasing the scope, scale and speed of any experiment normally limited by the amount of computing power available. Suddenly a key obstacle to climate prediction looked like it could be overcome: Myles wondered whether he could set up the equivalent of SETI@home for climate prediction.
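To make the distributed-computing idea concrete, a volunteer client of this kind essentially runs a fetch, compute and return loop in the background. The Python sketch below is purely illustrative: the server URL, the work-unit format and the idle check are hypothetical stand-ins, not the actual SETI@home or climateprediction.net protocol.

```python
import json
import time
import urllib.request

SERVER = "https://example.org/api"  # hypothetical project server, for illustration only


def fetch_work_unit():
    """Ask the central server for a small chunk of the overall problem."""
    with urllib.request.urlopen(f"{SERVER}/work") as resp:
        return json.loads(resp.read())


def machine_is_idle():
    """Placeholder idle check; a real client would query the operating system."""
    return True


def process(work_unit):
    """Crunch the numbers for this chunk (the actual science would live here)."""
    return {"id": work_unit["id"], "result": sum(work_unit["data"])}


def upload(result):
    """Send the finished result back to the central server."""
    req = urllib.request.Request(
        f"{SERVER}/result",
        data=json.dumps(result).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)


while True:
    if machine_is_idle():
        unit = fetch_work_unit()   # 1. download a small piece of the problem
        result = process(unit)     # 2. compute while the PC has spare cycles
        upload(result)             # 3. return the answer, then ask for more
    else:
        time.sleep(60)             # otherwise stay out of the user's way
```

The important design point is that the server never depends on any one client: work units are small and independent, so a PC that goes offline simply never returns its chunk and the work can be handed to someone else.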

It was still a fairly mad idea: the kind of computer model used to predict weather and climate was dramatically more complex than anything needed to look for potential signals from ET amongst the random background noise of space. Nobody knew whether it was even possible to use home computers to run these highly complicated models. In fact, an editor of Nature magazine even bet Myles an ice cream that the project would never produce any results!

The task that confronted Myles was too large for any one person to undertake, so he set about recruiting others to join him and seeking funding for the project. In April 1999 the first proposal for funding the project was rejected as being utterly unrealistic. However, just a few months later one of Myles’ colleagues at Reading University, Andy Heaps, managed to get the Met Office climate model to run on a personal computer, and in September 1999 the concept was presented at the World Climate Modelling conference in Hamburg. A month later Nature published a short article called “Do-it-yourself Climate Prediction”. This was an idea whose time had come: when a website was set up so that people could register their interest in taking part in the experiment, thousands did.

By the start of 2000 Myles’ team had made enough progress to convince the Natural Environment Research Council (NERC) to help fund a pilot study. By the summer the team were working on the central computer systems that would “talk” to the participants’ home PCs, as well as the design of the experiment, and the first full-length ‘run’ of the Met Office’s model took place on a secretarial PC at the Rutherford Appleton Laboratory in Oxfordshire.

While all this was going on, funding was being sought (unsuccessfully) from dying dotcoms. A bid to NERC’s eScience Programme in December 2001 finally secured funding for a public launch, with more money coming from the DTI just months later. The team was then able to grow and to improve the software’s reliability following “friends and family” testing.

2002 also saw the Open University join the project to help develop the educational benefits of participants effectively having a virtual planet, unique to them, on their PCs – leading to the idea that the software should include a visualization package that would allow participants to actually see their virtual world.

After years of work Myles’ vision was finally coming to life. Extensive ‘beta testing’ ensured that the model gave the same results on a PC as it did on a supercomputer and couldn’t be tampered with to give different results; it also checked that the software didn’t interfere with other programs running on PCs and didn’t inadvertently allow the spread of computer viruses. At last the project was released to the public on 12 September 2003, with BBC Weather's Carol Kirkwood launching the screensaver at the National Museum of Science and Technology.

 


The aim of climateprediction.net is to investigate the approximations that have to be made in the Hadley Centre model used by the Met Office. By running the model thousands of times (a 'large ensemble'), it will be possible to find out how the model responds to slight tweaks to these parameterizations – slight enough not to make the approximations any less realistic.

 

What do we mean by parameterization?
To describe all the dynamics of the atmosphere completely we would have to simulate the movements of every individual air molecule. Clearly this is impossible, so larger parcels of air have to be modelled instead. In reality these parcels have to be quite large – larger, in fact, than single clouds or even storms. As a consequence, the equations used describe the physics of the atmosphere at these large scales and not at the scale of individual molecules. The physics is, in other words, an approximation and not a precise description of the way the molecules behave.
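As a toy illustration of what such a parameterization looks like in practice, the sketch below estimates the cloud cover of a whole grid box from its average relative humidity, controlled by a single tunable parameter, rh_crit. Both the scheme and the parameter name are made up for illustration; this is not the Hadley Centre model's actual cloud scheme.

```python
def cloud_fraction(relative_humidity, rh_crit=0.8):
    """Toy parameterization: estimate the cloud cover of a large grid box
    from its average relative humidity instead of resolving individual
    clouds. rh_crit is the tunable parameter: below it the box is treated
    as clear; above it, cloud cover ramps up linearly to fully overcast."""
    if relative_humidity <= rh_crit:
        return 0.0
    return min(1.0, (relative_humidity - rh_crit) / (1.0 - rh_crit))


# The same grid-box humidity gives different cloud cover depending on the
# (legitimately uncertain) choice of rh_crit:
for rh_crit in (0.7, 0.8, 0.9):
    print(rh_crit, cloud_fraction(0.85, rh_crit))
```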

Now, the numerical values used in these equations can quite legitimately vary within certain limits, and the final results of the climate model will vary depending on the exact values chosen. The problem is that when there are several such approximations, each with equations whose values span a range, the combined effect of these differences is impossible to predict as they cascade through a climate model run. Therefore the only way we can explore the prediction “envelope” is to run the model over and over again with different values in each approximation and see what effect these differences have on the climate predictions.
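A minimal sketch of that run-it-over-and-over strategy, often called a perturbed-parameter ensemble, might look like the following. The run_climate_model function and the parameter ranges are hypothetical stand-ins for the real model and its parameterization settings.

```python
import random

# Plausible ranges for two hypothetical tunable parameters.
PARAM_RANGES = {
    "rh_crit": (0.6, 0.9),         # e.g. critical humidity for cloud formation
    "ice_fall_speed": (0.5, 2.0),  # e.g. how fast ice crystals fall (m/s)
}


def sample_params():
    """Draw one parameter combination uniformly from its plausible ranges."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}


def run_climate_model(params):
    """Stand-in for a full climate model run; returns a made-up 'warming' figure."""
    return 2.0 + 3.0 * (params["rh_crit"] - 0.75) - 0.4 * (params["ice_fall_speed"] - 1.0)


# Each ensemble member is the same model with slightly different parameter
# values; the spread of the results maps out the prediction "envelope".
ensemble = [run_climate_model(sample_params()) for _ in range(10_000)]
print("warming envelope:", min(ensemble), "to", max(ensemble))
```

Running thousands of such members is exactly what is impractical on a single supercomputer, but straightforward when each participant's PC handles a few of them.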

In the past, estimates of climate change have had to be made using one or, at best, a very small ensemble (tens rather than thousands!) of model runs. By using participants’ computers, we are able to improve our understanding of climate change predictions far more than would ever be possible using even the supercomputers currently available to scientists. In this way we also improve our confidence in the predictions, because we understand the inherent uncertainties better.

The climateprediction.net experiments will help to "improve methods to quantify uncertainties of climate projections and scenarios, including long-term ensemble simulations using complex models", a need identified by the Intergovernmental Panel on Climate Change (IPCC) in 2001 as a high priority. Hopefully, the experiments will give decision makers a better scientific basis for addressing one of the biggest potential global problems of the 21st century.

Within just 24 hours of launching, the project exceeded the computing capacity of the largest supercomputer dedicated to climate change issues and became the world’s largest climate modelling facility. Within three months the model had simulated one million years of atmospheric processes (not, though, one million years of climate change, which is another issue entirely!). The concept of distributed computing for exploring climate change had well and truly been demonstrated, opening the way to more sophisticated experiments of the kind now being run in conjunction with the BBC.

And the ice cream? Needless to say, Myles won the bet and the Nature editor duly paid up.

Further Reading:

Discover the current state of the project
or follow the full history of the project

The BBC and the Open University are not responsible for the content of external websites

 
