Is the World Scrutable?
Constructing the World
by David J. Chalmers
Oxford University Press, 2012, 494 pages, $47.95
David Chalmers is one of Australia’s most distinguished philosophers. His latest book is based on his John Locke Lectures at Oxford in 2010. The invitation to deliver the John Locke Lectures constitutes one of the most coveted awards in Anglophone philosophy. The lectures are often used to address issues that the lecturer considers of more than academic interest. In this case the topic is whether the world is scrutable.
Scrutable is not a word in everyday use, not even in the jargon of philosophers. We have a vague idea of what inscrutable means, usually in the context of talking about people’s intentions and mental processes, where we have difficulty in understanding what is going on. Assigning a precise meaning to that notion depends on what would satisfy our need for understanding, and that is what Chalmers is attempting to do from the widest and most fundamental perspective.
The story of how the approach that he takes to this task has come about is central to the intellectual and cultural history of the twentieth century.
Chalmers presents his book as a revaluation of Rudolf Carnap’s 1928 classic Der logische Aufbau der Welt—“the logical structure of the world”, or possibly “the logical construction of the world”. Hence his own title Constructing the World. To situate the enterprise it is best to go back six years to another Viennese work, Ludwig Wittgenstein’s Tractatus Logico-Philosophicus, which made the startling claim to have solved all philosophical questions by adopting a radically new approach to them.
Philosophers were split between those who thought that metaphysical issues about what sorts of entities really exist are fundamental and those who prioritised epistemological questions about what we can know. Debates between them seemed to go around in circles. Wittgenstein said there were prior questions about what kinds of sentences mean anything. Before we start proposing answers to any question we need to be assured that it is a genuine question. We need to know what would constitute an answer to it. He believed he could characterise precisely all meaningful sentences on purely logical grounds, by analysing the procedures for settling their truth-value. The analysis he offered excluded all of traditional philosophy and theology and much else as meaningless. The moral was that we should give up making truth claims about such matters. Science is the judge of all truth claims. But Wittgenstein was not a Positivist. Beyond science was “the mystical”, which embraced everything of importance to us: morals, religion, art, personal relationships and things that “showed themselves to us” but could not be discussed in terms of truth and falsity. In this respect he resembled the German philosopher who was to dominate the divide between analytic and phenomenological philosophy, Martin Heidegger. Heidegger went on to develop forms of description of experience that could not be reduced to matters of fact, but could “uncover” certain realities, allowing them to “show themselves”. Analytical philosophers were not impressed. But Wittgenstein presented a challenge.
The core of Wittgenstein’s position was that all meaningful relations between sentences come down ultimately to logical or mathematical relations, and all logical or mathematical relations come down to basic truth-function logic. There must be simple statements that are clearly either true or false and relate to each other only in virtue of their being true or false in some determinate pattern.
So “If it rains the grass gets wet” can only mean “There is in fact no state of affairs such that ‘it rains’ is true and ‘the grass is wet’ is false”. There is ultimately no distinction between a coincidence and a causal connection. Causality is a superstition. A full analysis of what we can know would come down to a set of atomic sentences, each capable of being true or false quite independently of the others, and the patterns of coincidence among them that are mathematically describable and calculable. Bertrand Russell, Wittgenstein’s mentor, was credited with having shown that all mathematics was deducible from basic logic.
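The analysis can be set out in a line or two. As a minimal sketch (my notation, not the reviewer’s), write R for “it rains” and W for “the grass is wet”; on Wittgenstein’s reading the conditional excludes exactly one combination of truth-values:

\[
R \rightarrow W \;\equiv\; \neg(R \wedge \neg W)
\qquad
\begin{array}{cc|c}
R & W & R \rightarrow W \\
\hline
T & T & T \\
T & F & F \\
F & T & T \\
F & F & T
\end{array}
\]

Every other logical connective between basic sentences is likewise defined purely by such a pattern of truth-values, which is what is meant here by “basic truth-function logic”.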
At first sight this looks like dogmatic philosophy at its worst, setting up a grid of arbitrary abstractions and excluding everything that could not squeeze through its narrow openings. Surely it could appeal only to those who were blinded by empiricist prejudice. It cannot provide an account of how we think or of most of what we mean by what we say.
Wittgenstein’s response can be explained by a familiar analogy. A batsman hitting a ball for six has to succeed in getting the timing exactly right. What must be the case in order to do so can in principle be analysed precisely in terms of the physics involved. Of course the batsman succeeds not because, in a split second, he makes all those calculations that might take a competent physicist months to work out. He recognises a certain pattern in the flight of the ball and has trained himself to respond appropriately. There are interesting psychological and physiological questions about how he can do that, but what he is doing with the ball is explained by physics. Similarly, ordinary speech works on our identifying certain complex patterns in our experience, not in referring explicitly to basic sentences. But what we are in fact achieving in ordinary speech is to be understood in terms of the correct logical analysis of that achievement.
Wittgenstein’s view has sometimes been described as a picture theory of meaning. Another familiar analogy may help understand the point of that claim. The display on any television or computer screen is capable of portraying any conceivable visible state of affairs. It can do this because it exhibits an array of pixels, each capable of instantiating any colour quite independently of all other pixels in the array. The different combinations and sequences of colours that the individual pixels take on can therefore exhibit any possible sequence of patterns of colours and shapes. Of course, when we look at the screen we do not see a lot of pixels, but the bad guy shooting the good guy, or the girl flirting with the boy. But what makes it possible for us to see such things is the extraordinary versatility of the way the pixels can exhibit patterns, precisely because their independence from each other allows variety without limits other than those of the mathematical possibilities of combinations of different colour-values in the array. Similarly, far from imposing arbitrary limits on what can be meaningful, the analysis of all meanings into patterns of distribution of truth-values among basic sentences explains the enormous variety of sentences that can have a determinate sense.
Carnap, following Wittgenstein, set out to develop his position and solve some of the problems it posed, in particular by proposing that the content of all the postulated basic sentences could be reduced to names of sensory qualities, and by allowing some ways in which sentences that had no determinate truth-value might still have some significance. He tended to suggest that the appropriate attitude to metaphysical and epistemological sentences might be agnosticism about their meaningfulness. It sounded more plausible and congenial than Wittgenstein’s rigorism.
Carnap and others of “the Vienna Circle” fled to America in the 1930s and soon achieved a dominant position as the avant-garde of scientifically-minded philosophy. Wittgenstein meanwhile had in 1929 migrated to Cambridge, where he abandoned the claim that there was any general criterion of meaning. Basic truth-function logic is just a form of mathematics without any special claim to privilege. The meaning of any expression is revealed in the uses to which it can be put, depending on the underlying rules of the particular activity or “language-game” in which it has a role. Such rules and the roles they govern can be very various. Vagueness has its place in some contexts where looking for an idealised precision is illusory. There is no reason to think there can be a set of basic atomic sentences such as is postulated in the Tractatus.
Under Wittgenstein’s influence many British philosophers devoted attention to ordinary language, emphasising relations among meaningful expressions that were implicit in their meaning, but could not be reduced to truth-functional patterns. The point was no longer so much to exclude the meaningless as to elucidate meanings of all sorts.
Meanwhile the Logical Positivist program was running into difficulties. One was that it seemed to make it impossible for us ever to know the truth of any general proposition if what it says is that a certain perfectly possible coincidence never in fact happens. Such statements can never be verified, only falsified. So scientific inquiry is a matter of groundless conjectures, none of which has any claim to prior consideration over any other. That suggests that the central, perhaps unique, prescription for scientific progress is that hypotheses should be so framed as to make them as vulnerable as possible to falsification. This rule was widely popularised as the acid test of a theory’s being worth considering.
As the Sydney philosopher David Stove insistently pointed out to unreceptive audiences, this falsificationism is wrong. In both practical and theoretical contexts we have good grounds for thinking that some statements are, on a given set of evidence, more probably true than others or are better explanations of certain data than others. On the other hand statements of probability cannot be definitively verified or falsified until all the relevant evidence is in. Nevertheless, a great deal of scientific work depends on such assessments, and a great deal of work needs to be done to elucidate them. To reject them out of hand is sheer dogmatism.
In America the great Harvard mathematical logician W.V. Quine insisted that scientific theories are complex, involving many interconnected assumptions. If their predictions fail, it is generally impossible to identify conclusively just where the fault lies. Theories face the evidence as complexes and have to be assessed pragmatically. One can always tinker with a theory to eliminate some unwelcome consequence and so avoid falsification.
Such objections represented a challenge that the Positivists met with determination and constructive ingenuity. Carnap produced both a more flexible analysis of meaning and a very thoroughly developed theory of probability that involved minimal departures from the Aufbau position. They were admired but not generally accepted and Logical Positivism languished. The strict regimentation it involved was increasingly seen as arbitrary.
Nevertheless, few followed the permissiveness of the later Wittgenstein. Most retained a strong preference for accounts of meaning in terms of the true/false dichotomy, even in the face of such developments as many-valued logics that analysed truth and falsity as the extremes between which lay a continuum of probability values. Other systems allowed three values—true, false and indeterminate—thus allowing for the inherent element of vagueness in most language, even scientific language. Most analytic philosophers were not impressed by such logics. For them a desirable analysis of a concept specified a set of necessary and sufficient conditions for determining whether that concept applied to any particular case so as to produce a true sentence. The analysis of a concept, say of knowledge, had to produce the right answer in any conceivable instance to the question, “Is this an instance of knowledge?” How do we determine whether the answer is right? By consulting our intuitions about what a competent speaker of the language would say, unprompted by any theory about the matter.
Surprisingly, contrary to popular assumptions that people generally mean quite different things by the same words, this test often works well, and a plausible logical explanation of why it works was supplied by the central figure in logical theory, Gottlob Frege. Frege distinguished the sense of an expression from its reference. So “the morning star” and “the evening star” have the same reference, to wit the planet Venus, but different senses. The sense of an expression determines its reference. It is only the reference of an expression that matters to the truth of a sentence that contains it, and to what can be deduced from it in truth-function logic. It is only the fact of being true or false, as the case may be, that determines its logical relations to all other sentences. In the jargon, identity of reference is ultimately a matter of the extensions of concepts, the set of particular instances or objects to which they apply. Differences of sense (differences of intension) are irrelevant if they do not cash out as differences of extension.
That is too peremptory. We want more than a list of conditions for being a man. We look for some account of the connections between the various conditions that makes sense of their senses. So if humans are featherless bipeds, but also intelligent, we would like to know whether that is just a coincidence or whether there is some way in which these characteristics are connected. One explanation is that all land vertebrates have four appendages, usually legs, but capable of being developed differently. In the feathered bipeds the forward appendages are developed into wings, allowing them to fly, a distinct advantage in certain circumstances. In humans those appendages develop hands, which are capable of manipulating objects. That gives scope for the development of intelligence, which is of great importance in developing tools and weapons that enable humans to dominate their environment. It is, of course, possible for theories to be very plausible, but quite wrong. They must be checked against the facts. We are, unfortunately, too inclined to ignore facts when they get in the way of a good story. Extensional considerations are indispensable, but so are the intensional connections that make sense of the facts and make it possible for us to characterise facts in ways that make them scrutable.
Not many philosophers now accept the “extensionalist” program without reservation. In seeking to find as much relevance as possible to our contemporary situation for Carnap’s vision, Chalmers is inclined to push extensionalism as far as he can, and expends much technically interesting ingenuity in the attempt. He still claims that all truths about the world are deducible by truth-functional logic from a limited set of basic truths, but he takes most of the bite out of Carnap’s position by including in that set not only many intensional sentences, but also some counterfactual truths, for which there is no accepted extensional analysis. Where Carnap sought to establish that there was a closed, clearly delineated set of sentences from which all meaningful sentences could be constructed, Chalmers’s basic set of sentences comprises whatever sentences turn out to be needed to generate all true sentences. That makes his position much less interesting.
However, although it may appear at first sight as a rather limp theoretical speculation, in the age of Google it could well have a practical application. Google is close to storing and retrieving on demand any item of information that is anywhere recorded in published form. It is now said to be possible to store the entire contents of the 35 million volumes in the Library of Congress for about $1200. So the idea of making accessible all known significant truths and their consequences on the basis of a relatively compact set of truths no longer looks utopian. Certainly in so far as it is a matter of factual truths involving concepts at present in use, that ambition seems to present only minor difficulties.
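A rough arithmetic check of that figure is easy to sketch. Assuming (my assumptions, not figures from the book or the review) that an average volume amounts to roughly one megabyte of plain text and that bulk storage costs on the order of $30 to $40 per terabyte:

\[
35{,}000{,}000 \text{ volumes} \times 1\,\mathrm{MB} \approx 35\,\mathrm{TB},
\qquad
35\,\mathrm{TB} \times \$30\text{–}40/\mathrm{TB} \approx \$1{,}050\text{–}1{,}400.
\]

The order of magnitude, at least, comes out right, provided one stores text rather than page images.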
Nevertheless, its accomplishment would render the world scrutable only in a limited sense, for two reasons. First, we are not satisfied to accept the fact that a certain set of sentences is true unless we have an account of why they are true (or false, as the case may be). Assessing the worth of an explanation seems to involve much more complex logical moves than those allowed by truth-function logic. And second, when we are unsatisfied by explanations in terms of current concepts, we need to search for concepts that offer a better explanation. In that case it is important to be able to identify precisely in what respect the existing concepts fall short and to be open to considering new concepts.
To illustrate. In Newtonian physics it is assumed that inertial mass, resistance to change of direction or pace of motion, and gravitational mass, attraction between bodies in virtue of their mass, come to the same thing. Both give the same measure of mass. But they are clearly conceptually different. From the point of view of extensional analysis that does not matter. But from the point of view of theory, it cries out for explanation, and contemporary physicists have devoted a great deal of attention to the problem, especially since General Relativity opened up the possibility that in certain circumstances they need not be equivalent, for interesting theoretical or conceptual reasons.
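The point can be made explicit with the two standard textbook equations (a conventional illustration, not drawn from Chalmers’s text). Write m_i for inertial mass and m_g for gravitational mass:

\[
F = m_i a, \qquad F = \frac{G M m_g}{r^{2}}
\quad\Longrightarrow\quad
a = \frac{G M}{r^{2}}\cdot\frac{m_g}{m_i}.
\]

Only if m_g = m_i does every body near the same attracting mass fall with the same acceleration a = GM/r²; that these two conceptually distinct quantities always coincide in measurement is exactly the fact that cries out for explanation.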
Again, take a basic law of Newtonian physics: to every action there is an equal and contrary reaction. Taken extensionally, as stating what happens in every instance, it is not just wrong in some instances, but in every instance. Bounce a ball against a wall, and it will inevitably come back with less force than that with which it hit the wall. Something is always lost to heat in the bounce, and heat has no direction. It is a scalar, not a vector quantity. It does not help to say that Newton’s law is true as an idealisation of what happens in fact, because idealisations have little explanatory value and Newton’s law has very strong explanatory force. Like most important theoretical equations in physics, it supports strong counterfactual conclusions, which are the key element both in giving precise causal explanations of physical processes and in identifying clearly where some other explanatory factor must be introduced to account for a result that does not match a theoretical prediction. We argue from the fact that a certain force fails by a clear margin to produce the predicted result that there must be some other contrary force producing that precise shortfall. The prediction is right about the force it accounts for. There must be some contrary force at work. It sometimes turns out that we need to invoke an irreducibly different kind of force to explain the discrepancy. At the level of our experience electrical phenomena appear as occasional oddities. It turns out that electro-magnetic forces are ubiquitous and fundamental.
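The bouncing ball can be treated quantitatively in a line. As an illustrative sketch in my own notation, not the reviewer’s: if the ball strikes the wall at speed v and rebounds at speed ev, where e < 1 is the coefficient of restitution, the kinetic energy converted to heat in the bounce is

\[
\Delta E = \tfrac{1}{2} m v^{2} - \tfrac{1}{2} m (e v)^{2} = \tfrac{1}{2} m v^{2}\,(1 - e^{2}) > 0 \quad \text{for all } e < 1,
\]

so the rebound is always weaker than the impact, however carefully the experiment is arranged.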
One important answer to how certain generalisations sustain such robust counterfactuals is that they do not refer to a set of particular instances that we lump together under the same label, but to factors that are strictly identical in every relevant respect in every instance. The instances differ only in quantity and location, not in either their intrinsic or dispositional attributes. In scientific practice one very careful experiment is often taken as decisive evidence for a general statement. The point of repeating the experiment is not to provide additional evidence, but simply to check that no mistakes were made in conducting it. Just as we do not need further tests or more evidence to be sure about the formula for calculating the area of a circle from its radius, we believe that the fundamental properties to which physical explanations refer are identical in every instance of their occurrence. Chalmers does not discuss universals, but the tenor of his discussion of related matters makes it clear that he would like to do without them.
Empiricists point out that this view sits uneasily with the fact that fundamental physical theory has seen some very significant conceptual changes. Statements that were believed to be true without qualification are downgraded to being true only within certain limits. Quantum theory gave rise to doubts about the importance of the sort of deterministic analyses with which universals were associated. The same phenomena of radiation can be described, equally precisely, as waves or particles. What appear to be continuous processes can be described and analysed mathematically as sequences of distinct events. In axiomatising a set of interconnected truths it is always possible to arrange them such that what appears in one arrangement as an axiom appears in another as a theorem. Such systems are like a net that can be suspended from different points in a variety of arrangements, each of which is capable of supporting the whole net.
One conclusion to which such considerations inexorably lead is that the fact that a particular form of description can and often does provide accurate information about a wide range of states of affairs does not justify us in concluding that the structure of the form of description tells us anything important about the structure of those states of affairs. This point is easily grasped by analogy with a television screen. The screen represents anything visible through an array of distinct pixels. But it would be ridiculous to conclude that the objects in the real world that it portrays are all made up of bits the size of pixels. In order to describe the world we have to distinguish different components to which we can refer. Some of those components are really separable from each other, like trees and animals, while some are just artefacts of a form of description, like lines of latitude and longitude on a map. Not all artefacts of a form of description are as purely arbitrary as the conventions that assign their meanings to the sounds we use to make a language. Lines of latitude capture in a systematic way objective facts about relations between different components of the world. On the other hand precisely where to draw the line between plants and animals is not obvious. The distinctions we make in ordinary language are often discarded as unhelpful in science. Corals are animals, whales are not fish, but only when one classifies by physiology rather than habitat or appearances. It is not that the pre-scientific classification is wrong, just not as useful in the pursuit of a scientific understanding of these and related things. In the long run other forms of description tend to give way to the more scientifically warranted forms. We still say the sun rises, but we interpret it as metaphor.
There is a story that somebody once said to Wittgenstein, “You have to admit that it looks as though the sun goes around the earth.” To which he replied, “And how would it look if it were the other way round?” What seems to be the natural or right form of description is often just a matter of the form of description we are used to or of our conferring a privileged status on one form of description in spite of there being many others that can be usefully applied in the same domain. But the choice of forms of description is not an arbitrary matter. Practical considerations such as simplicity can be important, but in scientific matters what is crucial is that a form of description have explanatory force. It needs to lead to an understanding of causes.
Kepler devised a way of describing accurately and systematically the orbits of the planets in terms of the geometrical properties of ellipses. But he had no explanation of why the planets continued to revolve around the sun in that pattern. Newton got the idea of explaining those orbits in terms of an attraction between the mass of a planet and the mass of the sun in the context of his law of inertia. Scientific orthodoxy was shocked by this talk of attraction. There was only one way a body could act on another, and that was by immediate contact with it. Talk about attraction was reminiscent of anthropomorphism, a form of description and explanation that was outlawed in science.
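For the simplest case of a circular orbit, the explanatory step from Newton’s attraction to Kepler’s pattern can be sketched in two lines (a standard derivation, not an argument from the book under review). Balancing the gravitational attraction against the centripetal force needed to keep a planet of mass m circling the sun of mass M at radius r:

\[
\frac{G M m}{r^{2}} = \frac{m v^{2}}{r}, \qquad v = \frac{2\pi r}{T}
\quad\Longrightarrow\quad
T^{2} = \frac{4\pi^{2}}{G M}\, r^{3},
\]

which is Kepler’s third law: the square of the orbital period grows as the cube of the orbital radius. The orbits are thereby explained by the attraction rather than merely described.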
Which brings me to why I regard the sort of enterprise on which Chalmers is engaged as wrong-headed. It perpetuates the empiricist search for a way of describing the world as economically as possible, minimising assumptions about the bits it is composed of and about the sort of connections that may hold between them. But we know very well that behind any surface description of the world there are often factors that are very important for understanding more fully what appears on the surface. The thrust of modern science and its technological applications is to reveal and harness entities and processes that cannot be directly observed, but are accessible only to certain kinds of apparatus based on theoretical analysis. These factors are unequivocally real, and understanding them turns out to involve increasingly surprising concepts that do not fit in the simpler forms of description to which we are habituated.
The significance of reduction in physics is regularly misunderstood. There is a tendency to conclude from the fact that icebergs, snowflakes, liquid water, clouds and steam are just different complexes of the same molecules that the differences between them are superficial or even just a matter of arbitrary classifications. Tell that to the captain of the Titanic! The important thing is that each of the different forms of H2O has very different physical properties from all the others. The differences between the structures into which the molecules are arranged are what make those differences. And the differences are not just of practical classification. Water floats the Titanic, but ice rips a hole in it. The differences between the forms of water are very important theoretically, far beyond the context of practical technology. Understanding ice opens up the whole field of crystallography and the understanding of all solids in terms of their structures. Steam leads us into thermodynamics, which leads even to considerations about the heat death of the universe. What matters is not just what a structure is made of, but what that structure makes of what it is made of. The kinetic theory of heat reduces heat to mechanical forces only to the extent that it shows that there is no special sort of force involved, but it goes on to explain why heat has effects that are utterly different from those of a mere aggregate of mechanical forces.
The original thrust of empiricism was to show that everything could be explained in terms of patterns of sensory data. That ambition was not only unachievable, but wrong-headed. The success of modern science points us in the opposite direction. The problem now is to understand ourselves and our experiences in terms of physical theory. Unlike the empiricist program, this is not inherently a reductionist enterprise. The point is not to show that human beings and their achievements are nothing special, but to understand more deeply and precisely what is special about ourselves. That may well lead, as David Chalmers himself has argued elsewhere, to new concepts and even new ways of understanding our world.
As a dogmatic theory of meaning the Aufbau program deserves to be buried, not rescued. Nevertheless, it offers Chalmers the opportunity to discuss many important questions in his attempt to pursue the ambition of providing a framework in terms of which what we know about the world can be organised. He makes an interesting claim and faces up to the objections that might be raised against it. My disappointment is that he does not go beyond organising existing knowledge to look at the problem of how we use it to go beyond it.
John Burnheim’s autobiography, To Reason Why: From Religion to Philosophy and Beyond, was published in 2011 by Darlington Press.