Learning to See the Gorilla
Several years ago, a senior businessman asked to meet me for a coffee, because he wanted to learn a little more about what my company, Austhink Consulting, does. For five or six years we had been running a three-day workshop on cognitive biases, reasoning and the Kennedy assassination debate. I took with me an A1-size diagram, what we at Austhink call an argument map, which showed the whole structure of that then forty-four-year-old controversy. When the appropriate moment came, I laid it out in front of him and invited him to take a look at it. He gave it a cursory glance, then commented: “The Kennedy case? Yeah, well, of course, everyone knows there was a conspiracy.”
“Really? Why do you say that?”
“The evidence shows it.”
“Ah!” I replied. “What evidence are you referring to, and how precisely do you think it shows that there was a conspiracy?” To this he had no answer, never in fact having given the matter serious thought.
Nor did it appear to concern him much that he could not account for his acceptance of one of the most persistent urban legends of modern times. I didn’t press the point. I wasn’t there to corner him, much less to debate the complex subject matter of the workshop with him. But I learned a lot about him right there. His easy acceptance of conventional popular opinion on a controversial subject; his lack of curiosity about that popular opinion; his indifference to so much as examining a clinical dissection of the matter prepared by professionals in analytic reasoning, all spoke volumes about his cast of mind.
Of course, his behaviour was perfectly normal. In just over a decade of working as a trainer and consultant in applied cognitive science and argument analysis, I have been ever more fascinated by the ways in which human minds work—the curious amalgam they represent of perceptiveness, quickness, intuition and shrewdness with prejudice, muddle, complacency, confusion and proneness to error, particularly to cognitive bias. It’s a very interesting field in which to work, and it is not one which makes me feel scornful or superior. Rather, it is a true education in human nature and in why the world works and malfunctions as it does.
Cognitive biases have only started to be discerned over the past century or even half-century, as cognitive science itself has developed. They are likely to be associated in most people’s minds with experimental psychology of the Stanley Milgram and Daniel Kahneman kind, with neuroscience and neuro-imaging and with our recent deeper understanding of animal minds and human evolution.
Some years ago I was on my way back via Hong Kong from work in Taiwan when I got a request from the ABC to come to a studio in Brisbane for a recorded interview about the Bali bombings. By the sheerest happenstance, I was flying to Brisbane to attend a workshop, so I agreed. When I got to the studio, I found that the interviewer wanted me to agree that ASIO had stuffed up in the lead-up to the Bali bombings; it had failed to do its job competently.
I said that while this was perfectly possible, I lacked the information necessary to make a judgment on the subject. But consider, I said, before we even attempt to go down that path, what else might have happened. I described to him a little exercise that my company had already by then run with many groups of intelligent people, including intelligence analysts, bankers and lawyers, which might enable him to put the apparent dereliction of duty by ASIO in a different perspective.
What this exercise involves, I explained, is showing people a thirty-second video clip of half a dozen US college kids moving around in a confined space, three of them in black T-shirts and three in white. They have two basketballs between them and are either throwing or bounce-passing the balls to one another as they move around. The objective, people are told, is to count the number of times a basketball is passed, whether by bouncing or throwing, between any two figures in white T-shirts. Concentrate, they are urged, because there is never unanimous agreement on the count and it varies substantially within any given group.
So they count, and sure enough there is no unanimous agreement; the tallies vary substantially. Then they ask: what is the correct answer? Well, we say, the correct answer is sixteen, which almost no one gets, but actually that’s not the point of the exercise. What we really want to know is how many of you saw the gorilla walk into the middle of the basketball throwers, stop in front of the camera, beat its chest and then walk off the other side of the screen. Every time the exercise is run, the great majority of participants are completely incredulous at the question and gobsmacked when they watch the video clip again. That’s because between 60 and 85 per cent of people do not see the gorilla the first time, but everyone sees it the second time.
The interviewer was astonished at the very idea. So, I pointed out to him, consider that what may have been happening in the lead-up to the Bali bombings was that ASIO’s analysts were counting basketballs for all they were worth, believing that that was what they needed to do and for that very reason they failed to see the gorilla. And here’s the thing to get, I added: the harder they were working at counting basketballs, the less likely they were to see the gorilla.
Moreover, as Daniel Simons, who designed the exercise with Christopher Chabris, has shown, even when people know the exercise and see the gorilla the first time, if you vary something else—for example, the colour of the curtains at the back of the space—they’ll almost all fail to see that. The problem is one rooted in the way our perceptual and attentional systems work.
The ABC chap was so fascinated by this that he ended up framing the whole program around it. When it went to air you could hear basketballs bouncing throughout much of the program and he asked more than once, “So did we fail to see the gorilla?” A week later, I was in Canberra and a senior intelligence analyst approached me at Parliament House, saying that he and his colleagues at ONA had all taped the broadcast and listened to it several times. “You’re right,” he confessed. “In the case of Bali, we failed to see the gorilla.”
When I am running the JFK workshop, which has been run for lawyers, intelligence analysts, military officers, bankers and all manner of other people, their understanding of the case develops through three phases. The first, when they are immersed in the mass of evidence as it is commonly presented, is a powerful sense that there must have been a conspiracy. The second, as they are coached through a patient argument mapping of the case, is a gradual realisation of exactly why the case for conspiracy has convinced so many people for so long. The third phase, as they are led through a step-by-step analytical evaluation of the case, is the slowly dawning realisation that in fact there are countless illusions, errors and misinferences involved in the understanding that had seemed so compelling; that the case for conspiracy does not stand up to relentless critical scrutiny. I open the final phase by saying to them, having shown them the gorilla video, that the famous Zapruder film and the case they have just laid out graphically are crawling with gorillas and that they have yet to see any of them.
Cognitively speaking, that is a pretty fair description of how we operate in the world and there is no way once and for all to overcome that state of affairs. All we can do is make ourselves a bit more aware of the kinds of cognitive traps that our brains and the complex nature of reality set for us and try to devise measures for coping better with these challenges. In doing so, one of our most important virtues is humility—the willingness to acknowledge that getting things right, seeing what is actually going on, avoiding astonishing oversights and taking others along with you in the process is demanding work, and it is work that always has to be done over, because of the way things are.
The gorilla exercise is one of a number which enable people to experience, as distinct from merely being told, the fact that their cognitive systems are not under their conscious control and can blind them, fool them, confuse or ensnare them in a number of ways without them even realising it. It is such phenomena that cognitive psychologists, broadly speaking, refer to as cognitive biases.
They should not be confused with what we often think of as mere stupidity or ignorance or everyday prejudice. They are hardwired features of our perceptual and deliberative systems and they affect all of us, even when we are aware that they are there. It requires not simply self-awareness, but disciplined and iterative thinking processes to counteract such biases. Needless to say, most people most of the time are neither aware of the biases nor equipped—even when they might be disposed—to submit their perceptions and beliefs to disciplined corrective processes. The consequences are in evidence all around us.
APES
We can use the gorilla as a vivid anchor for an acronym: APES. We are APES, which is to say Apprehensive, Pattern-seeking, Emotional Story-tellers. Long before any disciplined cognitive processes were invented, this is how we evolved, and by and large this is how we remain, beneath any hard-won and fastidiously maintained scientific discipline that we may have acquired through education. And most of us never do get a systematic education in scientific method or critical thinking. We rely on experience and what is sometimes called horse sense.
As neuroscientist William Calvin puts it, our brains are susceptible to colourful rhetoric, to being swept along by group dynamics that overwhelm our emotional autonomy and critical faculties, to finding hidden patterns where none exist. They are highly susceptible for these reasons to myths, stories, superstitions and mass emotions. Our memories are selective and unreliable, our decision-making easily swayed by the last thing to make a vivid impression on us; our intuitions about logic, probability and causation are powerful but flawed in a number of ways and these flaws are actually magnified rather than diminished by our creation of complex, increasingly data-dependent social orders.
In short, we are, as individuals and as groups, riddled with cognitive biases that go with being human. This is not, let me emphasise, confined to the poor and uneducated. Experts remain susceptible and can be very resistant to accepting their fallibility. In a wonderful book titled Expert Political Judgment: How Good Is It? How Can We Know?, Philip Tetlock remarks: “Human performance suffers because we are deep down deterministic thinkers with an aversion to probabilistic strategies that accept the inevitability of error. We insist on looking for order in random sequences.” The book explains that political and geopolitical experts perform abysmally in making predictions in their own fields of expertise, but are overwhelmingly disinclined to acknowledge this fact.
In reality, Tetlock discovered in longitudinal studies, even the best experts he could find performed not much better than chimps throwing darts at a board. None, even the most sceptical and thoughtful, could hold a candle to simple statistical models. But confronted by the evidence of their failure, experts would have recourse to all manner of excuses or defences more readily than to humility. His findings showed that even advanced education, considerable erudition and long-established traditions of academic controversy and peer review did not make experts very good at either predicting developments in their own fields of expertise or at seeing and acknowledging the flaws in their predictive methods.
The sobering lesson here is that cognitive biases are not merely the province of the noble or ignoble savage. They are part and parcel of our humanity and they are very difficult to counteract. I suggest a corrective, however, to Tetlock’s phrasing. I think he was making an important point, but it needs to be qualified by the observation that of course human beings do allow for uncertainty in all manner of cases. The problem is that we are not especially good at estimating or correcting degrees of uncertainty. As the inimitable Dubya Bush might have expressed it, we are rather prone to misunderestimating probabilities, only partly due to deterministic thinking; more generally because we are only equipped by nature to make rough and intuitive guesses and we often get them wrong.
Yet, you might counter, elementary Darwinian reasoning would suggest that human beings cannot have been too bad at getting things right on the whole, since otherwise our ancestors would have been culled by natural selection some time back. So what’s going on? The answer would appear to be that human social orders seem able to compensate to a considerable extent for the cognitive biases of their members through variations on what James Surowiecki calls The Wisdom of Crowds. Various kinds of informal deliberative polling in the form of markets, voting mechanisms, extended debates, scientific method and institutional resilience have helped us to navigate the seas of reality and achieve many astonishing things.
But the human world remains cognitively very messy, and the more socially complex and technologically sophisticated our societies become, paradoxically, the greater this messiness. The overwhelming majority of human beings are carried along by the tides of change almost entirely unable to grasp the cognitive foundations of the science, technology, philosophy and economics that move those tides. They often see them in confused symbolic terms derived from archaic religious beliefs, new-fangled urban legends, tropes from popular culture and even psychopathological obsessions—such as global conspiracy theories or monstrous myths.
It can be exceptionally difficult, as we all know, to achieve rational consensus on major issues, to induce parties to a dispute to appreciate the merits of the opposing view and compromise, to change their cherished beliefs, to undertake certain kinds of risk, to avoid others, to live with uncertainty or to cope with it intelligently. Just watch Q&A on the ABC or follow the climate debate. All these things, I suggest, are due to the chaotic effects of various kinds of cognitive bias on human thinking.
What is remarkable, let me add, is not that we should have these problems. After all, we evolved over millions of years in small hunter-gatherer societies and then were launched on an accelerating trajectory of technological and cultural innovation and population growth a mere 6000 to 10,000 years ago—the blink of an eye in human evolutionary terms. What is truly remarkable is that we have accomplished collectively the many things we have. It is to be hoped that our newly developing knowledge of cognitive biases can contribute to improving the way we conduct ourselves in future, so that those accomplishments do not prove our undoing but rather prove to be the foundation for truly transcendent achievements in what one hopes will be the millennia to come.
Cognitive humanism
Know yourself, urged both the Greek and Chinese sages back in what Karl Jaspers called the axial age. But none of them was able to know with any precision or clear evidence what we can now know, if we tap into the new sciences of humanity. The knowledge of cognitive biases and of their evolutionary roots is integral to this new self-knowledge that has become increasingly possible, though very far from universally shared, in recent decades.
Hubert Dreyfus, famous for his work on the differences between human and artificial intelligence, wrote a couple of decades ago that “we embody an understanding of being that no one has [explicitly] in mind. We have an ontology without knowing it.” Cognitive biases, as I see them, belong within this framework of understanding. They are so ingrained in what we are as beings that we don’t see them, and even when they are pointed out to us they don’t go away. Indeed, they are highly likely to shape our thinking without our being aware of them, the moment we cease being watchful for them—and even when we are.
The ontology that we embody, in Hubert Dreyfus’s sense, though it has many cultural and even individual variants across time and space, is that of a species of evolved being, whose cognitive development can now be traced back with growing precision through millions of years of neurological and biological evolution. Because we are the product of biological evolution, our cognitive abilities are species-specific and imperfect. Yet the general capabilities that we have inherited have enabled us, after millennia of philosophical and scientific effort, to begin to see this and work to overcome our limitations. I call this scientific approach to humanity universal cognitive humanism. On a day-to-day basis, though, the challenge is to find ways to apply it to practical problems of business and government.
Somata, Gestalta, Cognitiva
When the Swedish naturalist Carl Linnaeus started, in the eighteenth century, to put together the first taxonomies of living beings, he used Greek and Latin names to classify things. He also used a hierarchical schema. I propose a similar framework for seeing what kinds of cognitive biases we have to deal with. We might think of three high-level classes of cognitive biases in human beings: Somata, Gestalta and Cognitiva.
Somata (from the Greek word soma, meaning body; singular Somatum) are those bodily characteristics, habits, tastes, reflexes, instinctual fears and congenital capabilities which universally characterise us as a species of mammalian biped of the genus Homo and which can often be traced back, when they have been traced at all, into the ancient biological past.
Gestalta (from the German word gestalt, meaning form or shape; singular Gestaltum) are those characteristics of our perceptual systems which are involuntary and automatic and which literally shape the way we see the world or fail to see things in it. They include, for instance, the physiological blind spot we all have and the way our binocular vision compensates for it; as well as our limited perception of the spectrum of light, which only optical science since William Herschel about two hundred years ago has enabled us to appreciate. They include the kinds of patterns our brains are prone to look for and generate in the world around us.
They include the deeply ingrained tendencies to attribute intentional agency to natural phenomena arising out of the experience of being an embodied agent in the world with an intentional stance. They include hundreds of deep cultural universals across the fields of number, colour, territoriality, facial expressions, musical creativity, games, legal principles and what have you. Stanislas Dehaene (Director of the Cognitive Neuro-imaging Unit at Saclay, France) has pointed out in his recent book Reading in the Brain that all of the world’s writing systems, despite their apparent wide diversity, spring from a small inventory of intuitive visual shapes shared throughout the species which are based on universal neurological constraints in the human brain.
These findings of cognitive neuroscience, by the way, have clear implications for education, especially the teaching of literacy. While Dehaene cautions that neuroscience is still far from being prescriptive in many areas, he is unambiguous at least in this respect: that neuroscience directly refutes any notion of teaching literacy via the “whole language” approach. There must be an early, extended and explicit teaching of the systematic correspondence between graphemes (written symbols) and phonemes (units of sound) in order to wire the brain up for reading. The brain needs to be wired to create efficient neuronal mechanisms for reading, because it does not develop these spontaneously.
You might ask at this point whether what Dehaene is pointing to here should be categorised as a cognitive bias. As that term is often understood, it probably should not be. However, it seems to me that we need to reclassify cognitive biases as a subset of the broader class of phenomena that cognitive science has been discovering: deeply ingrained tendencies, preferences, reflexes or dispositions of our kind of mind. They are such that, unless we are aware of them, we can fall into error and fail to accomplish tasks efficiently or even at all.
The third class of cognitive biases, which I have dubbed Cognitiva (from the Latin cognitio, meaning thought or judgment; singular Cognitivum), are those biases in judgment, prediction, belief, subjective probability and causation to which our brains are prone simply because of the natural way in which they work, and which are overcome only through unusually disciplined and insightful thinking.
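For readers who think in code, the schema can be set down as a minimal sketch, here in Python. The three class names are the essay’s own; the assignment of each example bias to a class is my illustrative reading, not a scientific classification.

```python
from enum import Enum

class BiasClass(Enum):
    """The proposed high-level taxonomy of cognitive biases."""
    SOMATUM = "bodily traits, reflexes and instincts of the species"
    GESTALTUM = "involuntary perceptual shaping of what we see or miss"
    COGNITIVUM = "errors of judgment, probability and causation"

# Illustrative assignments only, following the essay's examples.
examples = {
    "instinctual fear responses": BiasClass.SOMATUM,
    "blind-spot compensation in binocular vision": BiasClass.GESTALTUM,
    "failing to see the gorilla (inattentional blindness)": BiasClass.GESTALTUM,
    "misjudging risk expressed as percentages": BiasClass.COGNITIVUM,
}

for bias, cls in examples.items():
    print(f"{cls.name:<10}  {bias}")
```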
In fact it is mostly Cognitiva that are drawn to our attention as “cognitive biases”, but to see them in isolation from their deeper background is to miss the underlying reasons for why they are there at all. One of the leading specialists on these kinds of phenomena is Gerd Gigerenzer of the Max Planck Institute for Human Development in Germany. In a recent paper, he explained “three of the factors producing the phenomena labelled cognitive illusions”. “Understanding these factors,” he wrote, “allows us to gain theoretical insight into the processes underlying judgment and decision-making and to design effective tools to help people reason under uncertainty.”
A simple and fascinating example of Gigerenzer’s approach to efficient representations, set out at length in his book Reckoning with Risk (2002), is his demonstration that thinking about uncertainty in terms of probability or percentages confuses almost all people and results in frequent and remarkable errors. If, instead, risk is expressed in terms of natural frequencies and simple diagrams are used, the errors tend to be largely eliminated.
This is a very abstract way to express the matter. Let me tell you a story which throws the matter into clearer relief. It’s taken from Gigerenzer, but I’ve compressed it to the bare minimum.
You are a prominent medical professional and are presented with the information that, in a sample of asymptomatic women aged forty to fifty, the probability that one of them has breast cancer is 0.8 per cent. If a woman has breast cancer, the probability is 90 per cent that she will have a positive mammogram. If a woman does not have breast cancer, the probability is 7 per cent that she will have a positive mammogram. Imagine a woman who has a positive mammogram. What is the probability that she actually has breast cancer?
The specialist quizzed by Gigerenzer took ten minutes confusedly trying to work out his answer. He then estimated that the probability would be 90 per cent. Then he commented, “Oh, what nonsense! I can’t do this.”
Unless you are very practised at this kind of reasoning and quick at doing it in your head, you will probably feel flummoxed by the question, or will feel confident but get it wrong. Is that because you are stupid? No. Is it because you are uneducated? No. Is it because the problem is just intrinsically difficult for the human brain to do without external representation? That’s closer, but it’s not quite the right answer. Gigerenzer suggests that we reframe the question as follows:
Eight out of every 1,000 women in the sample population have breast cancer. Of these eight women with breast cancer, seven will have a positive mammogram. Of the remaining 992 women who don’t have breast cancer, seventy will still have a positive mammogram. Imagine a sample of women who have positive mammograms in screening. How many of these women actually have breast cancer?
You can now probably solve the problem in your head. Only seven of the seventy-seven women who tested positive actually have breast cancer, which is one in eleven, which is 9 per cent—not 90 per cent.
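For readers who want to check the arithmetic, the probability format demands Bayes’s theorem. Here is a worked verification using the figures given above (the notation is mine, not Gigerenzer’s):

\[
P(\text{cancer} \mid \text{positive}) = \frac{0.9 \times 0.008}{0.9 \times 0.008 + 0.07 \times 0.992} = \frac{0.0072}{0.07664} \approx 0.094
\]

The natural-frequency format collapses the same computation into a single fraction: 7/(7 + 70) = 1/11, or about 9 per cent.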
Just pause for a moment and consider that every day of the week risk estimates are attempted using probabilities rather than natural frequencies—by doctors, lawyers, policemen and various other people in different contexts—and an alarming number of such estimates are wrong. Gigerenzer’s point is that just by representing the problem in terms of natural frequencies you enable people to solve the problem easily. He explains that this is due to the way our evolved brains work. They didn’t evolve to remember and apply formal rules of probability. But they did evolve to reckon reliably with natural frequencies. You might say that they have a cognitive bias in favour of natural frequencies.
As Gigerenzer puts it:
Natural frequencies result from natural sampling, the process by which humans and animals have encountered information about risks during most of their evolution. In contrast, probabilities, percentages and other normalized forms of representing risk are comparatively recent. Animals can monitor the frequencies of important events fairly accurately. Rats, for instance, are able to “count” up to about 16 as evidenced by their ability to learn to press a lever a fixed number of times to get a dose of food … The human mind records the frequencies of events, like the spatial and temporal locations of objects, with little effort, awareness, or interference from other processes …
We can say that the human brain has an innate preference for the representation of risk in terms of natural frequencies. I would classify this as a Cognitivum.
Visualising information
Our brains also have a preference for being able to see things in front of us, rather than having to reconfigure them inside the head. This is why the use of drawings, simple diagrams and so on is a great aid to cognitive processing—provided that the drawings or diagrams are well conceived for the purpose. I would call this a Gestaltum—a human cognitive bias in favour of the visual. There is a good deal of recent evidence, in the books of Edward Tufte, for example, and more recently in Katy Börner’s beautiful book Atlas of Science: Visualizing What We Know, that good diagramming enormously assists comprehension.
Good visualisations of information greatly enhance the capacity of human beings to grasp what is going on. My own company’s work with the visualisation of deliberative processes—of inferences, options, evidence and objections or rebuttals—pivots on this very basic insight. And if we ascend the taxonomic scale, we can see that fundamental Somata over-determine our preferences and our practices in many ways. They find unwitting expression in the frequency with which we use metaphors of seeing, pointing, touching and grasping to describe our cognitive processes. Do you see what I mean? Do you get the point? Do you have a feel for what I am driving at here? Do you grasp the importance of all this?
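Stripped of its graphical dress, an argument map of the kind mentioned above is simply a tree of claims, each supported or opposed by further claims. The following is a minimal sketch of that underlying structure, again in Python; the node contents are invented for illustration and nothing here describes Austhink’s actual software.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """One node in an argument map: a claim, with reasons for and against."""
    text: str
    supports: list["Claim"] = field(default_factory=list)
    objections: list["Claim"] = field(default_factory=list)

def render(claim: Claim, tag: str = "", depth: int = 0) -> None:
    """Print the map as an indented outline, marking each branch."""
    print("    " * depth + tag + claim.text)
    for s in claim.supports:
        render(s, "[support] ", depth + 1)
    for o in claim.objections:
        render(o, "[objection] ", depth + 1)

# A toy map, invented purely for illustration.
root = Claim(
    "There was a conspiracy to kill JFK",
    supports=[Claim("Some witnesses reported shots from a second location")],
    objections=[Claim("The physical evidence is consistent with a lone gunman")],
)
render(root)
```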
These insights have taken an enormous amount of patient research and careful cross-examination by many scientists to put together, and they are only beginning to impinge on things like public policy, economic analysis, forecasting and education. As Stanislas Dehaene points out, towards the end of Reading in the Brain, our cognitive sciences are just beginning to pin down underlying patterns of cognitive process, pattern perception or creation, and even aesthetic perception and so forth in the human brain.
We are on the verge of breakthroughs in understanding a whole cluster of innate human biases. These breakthroughs will, in turn, give us far greater insight than we have had up until now into the sources of error, prejudice, and misperception; irrational decision-making, retarded learning, cognitive dysfunction, organisational psychology and so forth. These insights, in turn, should enable us to make education and training more scientific, as we have started to make medicine over the past century or so, and hopefully with similarly substantial gains.
Conclusion
Let me, having skimmed over a number of topics, conclude by pulling together a few of the threads here.
I’ve suggested that we might classify cognitive biases as belonging within a taxonomic schema—Somata, Gestalta and Cognitiva—which begins with the inbuilt predispositions of our embodied, species-specific nature (Somata); encompasses at the next level a variety of perceptual phenomena to do with our senses and the way our brains generate or prefer patterns, pictures, agency in the world, stories and meaning (Gestalta); and then descends, as it were, to the undergrowth of our often puzzling tendencies to error in matters of inference, probability and causation (Cognitiva).
I don’t wish to suggest that this is a scientific theory. I would suggest, however, that it covers the field in such a way as to put the debates about biases and heuristics, behavioural economics, popular delusions, superstition, ideology and irrationality into a broader context. I like to think of that broader context as what I call a universal cognitive humanism. This would be a worldview for enlightened APES, if you like.
By acknowledging our uncanny and imperfect nature, we can, with patience and humility, come gradually to understand ourselves and the world around us in a way that is scientifically accurate and not merely mythical or deluded. In other words, we can transcend our biases, whether of the cognitive kind or the banal kind. But along the way, we have a great deal of work to do and there is a deep, underlying conundrum we have to deal with: that the cognitive biases in question—whether Somata, Gestalta or Cognitiva—will be reborn in every human being who comes into the world, while the wisdom that is won by those of us who work hard to transcend such biases will die with us.
Or rather, it will survive us only to the extent that it becomes recorded and transmitted and institutionalised through education and training. Something like this difficult and rather disconsolate view of the human world may be said to have underlain many a fatalistic worldview over the millennia. It underlies some of the more pessimistic or even apocalyptic outlooks of the educated even now. But we need not be fatalistic. We can be cautiously and scientifically optimistic, because genuine understanding allows for practical action and innovation in the ways we tackle the age-old problems of being human in the world.
I am something of a temperamental pessimist with a very wry view of human belief systems and the human condition, but I share something of this scientific optimism. I read as much and as widely as I can in the proliferating literature on cognitive science and human evolution. And I share Stanislas Dehaene’s enthusiasm for the idea of an emergent twenty-first-century neuronal culture. Towards the end of Reading in the Brain, he writes:
As we reach the end of our journey into the reader’s brain, it becomes apparent that only a stroke of good fortune allowed us to read. If books and libraries have played a predominant role in the cultural evolution of our species, it is because brain plasticity allowed us to recycle our primate visual system into a language instrument. The invention of reading led to the mutation of our cerebral circuits into a reading device. We gained a new and almost magical ability—the capacity to “listen to the dead with our eyes”. But reading was only possible because we inherited cortical areas that could learn to link visual graffiti to speech sounds and meanings.
Unsurprisingly, perhaps, our brain circuits turn out to be imperfectly adapted to reading. Poor visual resolution, a steep learning curve and an annoying propensity to mirror symmetry bear witness to our past evolution. Unfortunately, evolution did not anticipate that our brain circuitry would one day be recycled for word recognition. But our imperfect hardware did not prevent many generations of scribes, from antique Sumer on, from finding ways to take advantage of those circuits. They designed efficient writing systems whose refinement over ages has allowed the marks on this page to speak to your brain.
Our brains did not evolve in order to be able to read. They did, however, evolve a unique plasticity and a capacity for the recycling of neurons to enable adaptive learning and a frontal lobe which makes possible the conceptual reframing of our ideas or experiences and thus occasional innovation. They also evolved an imaginative capacity unparalleled elsewhere in the animal world and a linguistic capacity which makes possible—let me emphasise that word, possible—thinking and planning of kinds impossible for any other creature on the planet.
All these wonderful capacities, however, are riddled with or constrained by the cognitive biases I have briefly outlined. Overwhelmingly, since the Stone Age, they have been lived far more than understood. They have produced the flowering of human cultures, as well as their nightmares and their fiascos. Technological innovation, art, religion and trade all long antedate the first writing systems. But writing and reading emerged about five millennia ago and science first emerged in the Hellenistic world just under two and a half millennia ago. Now the human world stands on the cusp of unprecedented possibilities—both for flowering and for fiascos.
Still, our basic way of being in the world, the ontology that no one has explicitly in mind, is characterised by Somata, Gestalta and Cognitiva of which overwhelmingly we remain unconscious. Our sciences and the arts of reading well enable us to work with what we are and refashion our lives or our workplaces to make them less prone to error, prejudice, stupidity, confusion, superstition and inefficiency. With wisdom and patience we can undertake such refashioning with reasonable hope of progress.
Wisdom begins with an appreciation of the nature and implications of cognitive biases. Patience rests on awareness that those biases are innate and cannot even be seen by the untutored human mind.
Dr Paul Monk is a director of Austhink Consulting www.austhinkconsulting.com. This article is based on addresses he delivered in Melbourne and Sydney.