Christopher Booker, writing in Britain’s The Sunday Telegraph of 26 December 2010, reported:
The Met Office’s forecasts of warmer-than-average summers and winters have been so consistently at 180 degrees to the truth that, earlier this year, it conceded that it was dropping seasonal forecasting. Hence, last week, the Met Office issued a categorical denial to the Global Warming Policy Foundation that it had made any forecast for this winter. Immediately, however, several blogs, led by Autonomous Mind, produced evidence from the Met Office website that in October it did indeed publish a forecast for December, January and February. This indicated that they would be significantly warmer than last year, and that there was only “a very much smaller chance of average or below-average temperatures”. So the Met Office has not only been caught out yet again getting it horribly wrong (always in the same direction), it was even prepared to deny it had said such a thing at all.
Three days later, Barry Hunt, honorary research fellow at the CSIRO’s Marine and Atmospheric Research Unit, appeared in a widely reported interview on ABC Radio National. The interview title was “Climate change is real despite cold snap”, presumably meaning the hypothesis of anthropogenic global warming (AGW) was not invalidated by recent unpredicted weather in both hemispheres.
Hunt warned listeners not to interpret the snowstorms over Europe and North America or the cool wet summer in Australia as indicating an end to either global warming or the AGW hypothesis:
Over the last century, the global mean temperature has gone up by 0.8 degrees [Celsius], and that’s the extent of the global warming. But at the same time, we also have natural climatic variation, and you don’t get one or the other, you get them both. They interact.
I analysed one of our climatic experiments where we ran it out to 2100 with carbon dioxide increasing. I found that even up to 2040 and 2050, you can still get cold snaps under greenhouse warming.
These last two or three months, you get them over Eurasia, or North America, and you can get temperatures 10 to 15 degrees below present temperatures. And that’s just natural variability, pulsing back briefly, overwhelming the greenhouse warming.
… the climate deniers think that unless you’ve got constant warming, every year, that greenhouse warming has gone away. And they forget about the natural variability.
What is going on here?
Both statements appear to be scientific statements dealing with real events in the physical world. But are they a reasonable appraisal of recent extreme weather events and our ability to predict them (or otherwise)?
Christopher Booker, the journalist, takes a sceptical, evidence-based approach, whereas Barry Hunt, the scientist, assumes the CSIRO’s global warming model is correct and treats any departures from its predicted trend as merely temporary aberrations, attributable to “natural climate variation” or “natural variability”.
Natural variability can be modelled and predicted only on a time scale of several days; such predictions are called “weather forecasts”. However, it is impossible – mathematically and practically – to model natural variability over timescales of months to decades, as the UK Met Office fiasco clearly shows.
Barry Hunt nevertheless harbours no doubts about the validity of his long-term modelling, or about the assumed inevitability of “dangerous” global warming consequent upon continued burning of fossil fuels, despite the strange hiatus at intermediate time scales, where modelling and prediction do not appear to work at all well.
Yet if climate modelling at intermediate time scales fails miserably, how can the CSIRO claim its long-term models are so good? Furthermore, how can the IPCC’s Fourth Assessment Report confidently predict the climate of Greenland and the behaviour of the Greenland Icecap a millennium ahead, while its Third Assessment Report (Chapter 14, Section 14.2.2.2, Working Group 1, The Scientific Basis) noted:
In sum, a strategy must recognize what is possible. In climate research and modeling, we should recognize that we are dealing with a coupled nonlinear chaotic system, and therefore that long-term prediction of future climate states is not possible.
Could it be that the real difference between intermediate-scale and long-term models is that the former have been tested against observation whereas the latter have not? In fact, Barry Hunt’s long-term climate model can never be tested against observation, because any departure from reality can always be dismissed by him as “natural variability”. Such an approach to numerical modelling is scientifically dubious because there is no way such a model can ever be disproved; the model becomes unfalsifiable.
Which brings us to the scientific method. Set out by Sir Francis Bacon in the early 17th century, it was further refined by Newton and others and culminated last century in the work of Popper, Kuhn, Feyerabend and Lakatos.
As Popper pointed out:
A theory which is not refutable by any conceivable event is non-scientific. Irrefutability is not a virtue of a theory (as people often think) but a vice. Furthermore, some genuinely testable theories, when found to be false, are still upheld by their admirers — for example by introducing ad hoc some auxiliary assumption, or by reinterpreting the theory ad hoc in such a way that it escapes refutation. Such a procedure is always possible, but it rescues the theory from refutation only at the price of destroying, or at least lowering, its scientific status.
This is precisely the function of Barry Hunt’s “natural climatic variation”; it is an excellent example of a Popperian ad hoc auxiliary assumption.
The scientific method was cast into mathematical form via the methods of statistical inference developed by R.A. Fisher (1925) and others in the early part of the 20th century. These methods formalised and quantified the idea of falsifiability by introducing the concept of the “Null Hypothesis”, that is, an hypothesis which has been set up to be deliberately falsified using the methods of mathematical statistics.
The null hypothesis usually takes the form of an assumption that a sample comes from a particular population; the probability that the observed result is due to chance alone is then calculated. If this probability falls below some accepted threshold (typically 0.05), the null hypothesis is rejected. The technique is known as “hypothesis testing”.
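The mechanics can be shown in a minimal sketch (the data here are synthetic; the sample size, random seed and 0.05 threshold are illustrative assumptions, not anything drawn from Fisher or from the article):

```python
import math
import random

random.seed(1)

# Null hypothesis H0: the sample is drawn from a population with
# mean 0 and standard deviation 1. The data are deliberately generated
# with mean 0.5, so H0 is false and ought to be rejected.
sample = [random.gauss(0.5, 1.0) for _ in range(100)]

n = len(sample)
sample_mean = sum(sample) / n
z = sample_mean / (1.0 / math.sqrt(n))      # z-statistic under H0
p = math.erfc(abs(z) / math.sqrt(2.0))      # two-sided p-value (normal approximation)

print(f"z = {z:.2f}, p = {p:.4f}")
reject = p < 0.05
print("Reject H0" if reject else "Fail to reject H0")
```

The essential point is the direction of the logic: the hypothesis is set up so that the data can overthrow it, which is exactly the property the article argues climate models lack.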
Note that there is no inherent difficulty involved in testing climate models in this way. Climate models typically comprise an “ensemble” of runs of a particular numerical model from random starting conditions. The ensemble averages are then used to make predictions. Such an ensemble method lends itself to statistical testing because statistics such as sample variance can be readily calculated for each cell in the model grid and compared with observations to test the null hypothesis that the real world resembles the ensemble.
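As a sketch of what such a test could look like for a single grid cell (entirely synthetic numbers stand in for the ensemble and the observations; the ensemble size, series length and inflated observed variance are assumptions made purely for illustration):

```python
import random
import statistics

random.seed(0)

# Synthetic stand-ins: 20 ensemble "runs" of a de-trended temperature
# series, plus one "observed" series whose true variance (2.0) is twice
# the model's (1.0). None of these numbers are real climate data.
length, runs = 200, 20
ensemble = [[random.gauss(0.0, 1.0) for _ in range(length)] for _ in range(runs)]
observed = [random.gauss(0.0, 2.0 ** 0.5) for _ in range(length)]

ens_vars = [statistics.variance(run) for run in ensemble]
obs_var = statistics.variance(observed)

# Null hypothesis H0: the observations are statistically indistinguishable
# from one more ensemble member. Under H0, the rank of obs_var among the
# run variances is uniform, giving an exact exchangeability p-value.
exceed = sum(1 for v in ens_vars if v >= obs_var)
p = (exceed + 1) / (runs + 1)

print(f"observed variance = {obs_var:.2f}, "
      f"mean ensemble variance = {statistics.mean(ens_vars):.2f}, p = {p:.3f}")
reject = p < 0.05
print("Reject H0: observations do not resemble the ensemble" if reject
      else "Fail to reject H0")
```

A real test would repeat this cell by cell over the model grid; the ratio of observed to ensemble variance can equivalently be referred to an F distribution, as in a classical variance ratio test.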
Yet such tests are never performed by climate modellers such as Barry Hunt. A possible reason is stated in the IPCC’s Third Assessment Report:
We recognize that, unlike the classic concept of Popper … our evaluation process is not as clear cut as a simple search for “falsification”.
Perhaps there is another reason? When assessed in this way, climate models don’t actually work. They cannot successfully hindcast observations, the first requirement of any numerical model. When climate models are tested against real world data they fail at a very high level of significance.
I compared the global sea surface temperature output of the Hadley Centre’s HadCM3 model with the monthly SST measurements of the HadSST2 dataset, using a variance ratio test. I submitted this work to Geophysical Research Letters; it was rejected without peer review. Was it rejected because it threatened to undermine climate modelling and the AGW paradigm?
The remarkable advances in science and technology are largely the result of meticulous application of this hypothesis testing approach. When a model fails the test, new insights into the underlying science are gained, whereas clinging desperately to supposedly correct theories leads only to sterility. This is the fundamental difference between science on the one hand, and pseudo-science, superstition and religion on the other.
Scientists use the methods developed by Fisher; pseudo-scientists do not. Climate modellers are pseudo-scientists. Fisher worked out how to defeat randomness and encapsulated the scientific method in his null hypothesis methodology. Pseudo-scientists use randomness to obfuscate the relationship between theory and observation. They often give it a pompous, scientific-sounding name like “natural climatic variation”. Real scientists call it “model error”.
Christopher Booker, although a journalist, is behaving more like a scientist than CSIRO’s scientists.
John Reid is the editor of the online magazine Science Heresy.
R.A. Fisher, Statistical Methods for Research Workers (Edinburgh, 1925)
K. Popper, Conjectures and Refutations (London, 1963)