False Precision in Climate Predictions

On March 14, the Commonwealth Scientific and Industrial Research Organisation (CSIRO) and the Australian Bureau of Meteorology (BoM) released a jointly produced short report titled State of the Climate. The six-page online document was intended to provide “a snapshot of the state of the climate to update Australians about how their climate has changed and what it means”. Most of the report summarises and discusses recent Australian climatological data, but the final section (“What It Means”) goes beyond description and makes a number of statements about the future. It says that Australia will become hotter and drier, and states, “There is greater than 90% certainty that increases in greenhouse gas emissions have caused most of the global warming since the mid-20th century.”

Unsurprisingly, many media outlets reported or commented on the document. The Australian quoted the “90% certainty” figure, while Leigh Sales, presenter of ABC’s Lateline, went one step further and proclaimed, “The CSIRO and the Bureau of Meteorology say the evidence is unquestionable, climate change is real and the link with human activity is beyond doubt.” Coincidentally, two days later the British Energy and Climate Change Secretary, Ed Miliband, used the 90 per cent figure in a slightly different context. As reported in the Australian, he said, “The science tells us that it is more than 90 per cent likely that there will be more extreme weather events if we don’t act [on global warming].”

The confidence with which the 90 per cent figure is used in these statements might reasonably be taken to mean that it reflects “objective” statistical results, such as the 90 per cent confidence interval for a parameter estimate in standard statistical method. However, this is not the case. Rather than being an objectively derived, quantitative estimate, the 90 per cent figure is a pseudo-precise summary of a loosely specified and highly subjective judgment. Contrary to standard statistical practice, where terms such as “highly confident” are used as convenient proxies for precise probabilities, in the climate change context the convenient verbal proxy invariably precedes the precise probability figure. It is the judgment of the Intergovernmental Panel on Climate Change (IPCC) that the anthropogenic global warming hypothesis and a hotter and drier climate future are “very likely”. This judgment is made ultimately on the basis of climate change research carried out by practitioners in many different disciplines, using many different techniques; but however good the research is and however well-founded the judgment, it is still a judgment and not a statement that can or should be dignified with a false impression of numerical exactitude.

It is very difficult to survey and summarise adequately the results of research on a particular topic, whether in the sciences, social sciences or humanities, even when all the researchers come from the same disciplinary background and a common methodology is used. Details of method, technique and data invariably differ, which complicates the task further. Judgment is needed at every step, and however well-grounded the original research might be, any conclusions reached from the survey cannot themselves be regarded as “scientific”.

As an example, consider the type of statistical research typically carried out in the social sciences, but also used in many areas of applied science. Suppose a set of data relevant to a particular issue is available to all researchers, and all agree that it is appropriate to use the technique of multiple regression analysis to determine whether a particular variable, X, has a statistically significant effect on another variable, Y, as suggested by a relevant piece of theory. Theory is also likely to suggest that many other variables, A, B, C, and so on, will have an impact on Y. Multiple regression is a technique that allows the statistical “control” of these other variables, so that the true impact of X on Y, uncontaminated by concurrent changes in A, B, C, and so on, may be estimated. Unfortunately, even with this level of agreement among researchers over method, technique and data, it is highly unlikely that results will be identical, and unlikely even that results can be compared in any precise way; judgment will still be needed, and only limited and essentially trivial quantitative support can be given to this judgment.
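
To see what such an exercise involves, consider a minimal sketch in Python. Everything here is invented for illustration: the data are simulated, and the true effect of X on Y is fixed at 3.0 by construction. Regressing Y on X alone gives a distorted answer, because A and B influence both; including them in the regression “controls” for that influence:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated data: A and B influence both X and Y, so they will
# confound a naive estimate. The true effect of X on Y is 3.0.
A = rng.normal(size=n)
B = rng.normal(size=n)
X = 0.5 * A - 0.3 * B + rng.normal(size=n)
Y = 3.0 * X + 2.0 * A + 1.0 * B + rng.normal(size=n)

# Naive estimate: regress Y on X alone, ignoring A and B.
design_naive = np.column_stack([np.ones(n), X])
coef_naive, *_ = np.linalg.lstsq(design_naive, Y, rcond=None)

# Multiple regression: include A and B in the design matrix so that
# their influence is "controlled" when estimating the effect of X.
design_full = np.column_stack([np.ones(n), X, A, B])
coef_full, *_ = np.linalg.lstsq(design_full, Y, rcond=None)

print(f"effect of X, ignoring A and B:     {coef_naive[1]:.2f}")  # about 3.5
print(f"effect of X, controlling for them: {coef_full[1]:.2f}")   # about 3.0
```

Even in this artificial setting, where the correct controls are known in advance, the estimate is only approximately right; with real data, the choice of controls is itself a matter of judgment.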

In the type of situation illustrated here, results will differ, among other reasons, because judgment is needed to select a limited number of control variables from the many choices available (including too many control variables reduces the precision of the estimates, as the sketch below shows), and theory is often not of much help here. Researchers may also choose different ways of incorporating the same variable, either using it in its original form or transforming it in some way, and judgment is needed on many other points: choice of functional form, whether to include “outliers” in the data set, whether and how to adjust for apparent biases in the data, and so on.
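
The loss of precision from over-controlling is easy to demonstrate with another simulated, purely illustrative sketch. Sixty irrelevant “control” variables, unrelated to either X or Y, are added to a regression, and the standard error of the coefficient on X widens noticeably:

```python
import numpy as np

rng = np.random.default_rng(1)
n, true_effect = 100, 2.0

X = rng.normal(size=n)
Y = true_effect * X + rng.normal(size=n)

def x_std_error(controls):
    """OLS standard error of X's coefficient, given extra control columns."""
    D = np.column_stack([np.ones(n), X] + controls)
    coef, *_ = np.linalg.lstsq(D, Y, rcond=None)
    resid = Y - D @ coef
    sigma2 = resid @ resid / (n - D.shape[1])   # residual variance
    cov = sigma2 * np.linalg.inv(D.T @ D)       # covariance of coefficients
    return np.sqrt(cov[1, 1])                   # entry for X

# Sixty "control" variables that are pure noise, unrelated to X or Y.
noise = [rng.normal(size=n) for _ in range(60)]

print(f"standard error of X's coefficient, no controls: {x_std_error([]):.3f}")
print(f"standard error of X's coefficient, 60 controls: {x_std_error(noise):.3f}")
```

The estimate itself remains roughly unbiased; it is the uncertainty around it that grows as irrelevant variables soak up degrees of freedom.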

It is a difficult and demanding task to summarise comprehensively yet succinctly the details of the various pieces of research, and it is harder still, even in the favourable circumstances assumed here, to summarise their results. We would expect a range of estimates of the quantitative effect of X on Y, and the best that can often be hoped for is a statement of the form, “There is a general consensus that the value lies in the range [say] +2 to +5.” In rare cases a more specific statement may be made, for example, “Fifteen of the twenty studies estimate the 90 per cent confidence limits of the estimate as +2 to +5.” However, this sort of precision would be unusual, and even it would not warrant the conclusion that we can be 75 per cent certain (that is, a probability of 15 out of 20) that the true value lies in the range +2 to +5: the twenty studies are not independent trials of a single experiment, and the number of them arriving at a given interval is an accident of sampling and specification, not a probability about the true value.
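
A stylised simulation, with every number invented for the purpose, makes the point. The true value below is fixed at 3.5, and each of twenty “studies” reports a 90 per cent confidence interval from its own noisy sample. The count of intervals falling inside +2 to +5 hovers around fifteen, yet it bounces around from one replication of the whole “literature” to the next:

```python
import numpy as np

rng = np.random.default_rng(2)
true_value = 3.5   # fixed by construction; it never varies between studies
z90 = 1.645        # two-sided 90% critical value of the normal distribution

def run_twenty_studies(m=30):
    """Each 'study' estimates the effect from m noisy observations and
    reports a 90% confidence interval; return how many of the twenty
    intervals lie entirely inside the range +2 to +5."""
    inside = 0
    for _ in range(20):
        sample = true_value + 3.0 * rng.normal(size=m)
        est = sample.mean()
        se = sample.std(ddof=1) / np.sqrt(m)
        lo, hi = est - z90 * se, est + z90 * se
        inside += (lo >= 2.0) and (hi <= 5.0)
    return inside

# Re-run the whole "literature" ten times: the count is itself a random
# quantity, so "15 out of 20" says nothing about a 75 per cent
# probability that the fixed true value lies in the range.
print([run_twenty_studies() for _ in range(10)])
```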

The task of the IPCC was much more difficult than suggested in this example. The contributions to climate change science come from a wide range of disciplinary backgrounds, use often vastly different methodologies and techniques, and employ a wide range of disparate data of varying reliability. Under these circumstances it is impossible to summarise results in any sort of quantitative fashion or to objectively report probabilities of certain relationships being true. Even the final judgment of the IPCC must be at best a consensus, because it is a committee, not an individual, making this judgment; the IPCC does not explain how any consensus is arrived at.

The IPCC online document “Guidance Notes for Lead Authors of the IPCC Fourth Assessment Report on Addressing Uncertainties” gives instructions on how uncertainty is to be treated by the lead authors. The notes identify three broad categories of uncertainty, and give guidance on how to summarise the degree of uncertainty in each case.

The first category is the type of uncertainty that is “assessed qualitatively”. This type of uncertainty “is characterised by providing a relative sense of the amount and quality of evidence … and the degree of agreement”. The degree of uncertainty is characterised “through a series of self-explanatory terms such as: high agreement, much evidence; high agreement, medium evidence; medium agreement, medium evidence …”

The second category is the type of uncertainty that is “assessed more quantitatively using expert judgment of the correctness of underlying data, models or analyses” and uses a simple numerical scale “to express the assessed chance of a finding being correct: very high confidence at least 9 out of 10; high confidence about 8 out of 10; medium confidence about 5 out of 10; low confidence about 2 out of 10; and very low confidence less than 1 out of 10.”

The third category relates to where “uncertainty in specific outcomes is assessed using expert judgment and statistical analysis of a body of evidence” and uses quantitative “likelihood ranges … to express the assessed probability of occurrence”. The chosen ranges are: “virtually certain >99%; extremely likely >95%; very likely >90%,” and so on.

All that is clear about this classification is that the distinctions between the types of uncertainty are quite fuzzy. For example, it is difficult to see how “qualitative assessment” (category 1) differs from assessment “using expert judgment and statistical analysis of a body of evidence” (category 3), given that a wide range of research using different methodologies and techniques and coming from a variety of disciplines cannot be assessed in any way other than qualitatively. The “assessed probabilities of occurrence” in the category 3 statements do not relate to any objective measurements; rather, they are simply shorthand numerical values given to necessarily imprecise judgments that are better summed up verbally in terms such as “likely” or “very likely”. Moreover, these judgments are made, in the final analysis, by the lead authors of the IPCC report, who presumably take varying notice of the judgments of other contributors, as well as those of scientists who have provided no direct input into the report. In these circumstances there is no justification for assigning numerical probability values to these subjective judgments.

So why, given the inherent impossibility of summarising the results of any area of research in a simple confidence or probability percentage, does the IPCC present its conclusions in this way?

The IPCC and others in the climate change fraternity have, over the years, presented many long and detailed arguments to justify the use of seemingly “objective” probabilities to represent judgments that are loose and subjective. The argument, presented in detail in a report of the US Climate Change Science Program and the Subcommittee on Global Change Research entitled Best Practice Approaches for Characterizing, Communicating, and Incorporating Scientific Uncertainty in Climate Decision Making, is that if people, even experts, are asked to assign a subjective probability number to terms such as “likely” or “very likely”, the numbers they choose will (unsurprisingly) differ quite widely. Therefore, when the results of climate change research are presented to politicians and the public, rather than risk non-scientist readers accurately interpreting the conclusions as fairly loose, imprecise and adequately summarised by the “likely” and “very likely” terminology, the judgments are instead falsely presented as numerical probabilities that appear to brook no challenge.

In certain circumstances the assignment of probability numbers to subjective judgments can be justified: gamblers, for instance, implicitly make their own probability estimates of the likelihood of a particular horse winning a race, or of a particular politician winning an election, before deciding whether or not to take a bet at the odds offered. However, it would be improper to present the subjective estimate of any one gambler, or a summary of the estimates of any number of gamblers, however expert, as an “objective” probability to guide the decisions of others. Nor is there any excuse for presenting conclusions about climate change as precise probabilities that appear designed to forestall debate on what remains a contentious and important issue.
