Have you ever evaluated how good you are at making big decisions? Have you ever wondered why you went ahead with something which you knew was wrong for you or your business? In an industry where decisions can affect millions of people, it may be a good idea to take a minute to reflect because, according to Koen Lamberts and Nick Chater from the Institute of Applied Cognitive Science at the University of Warwick, UK, people are less than optimal in virtually all aspects of decision-making.
At Warwick University, a vast amount of research has been collected on how people reason and make decisions. But according to Chater, the application of this research to helping businesses and individuals make better decisions has been surprisingly slight.
The university has been exploring links with businesses interested in improving their decision-making based on proven research results, and Chater and Lamberts were happy to discuss their work at a recent Warwick Policy Briefing, boldly entitled ‘Big Decisions that Go Wrong’, held in London, UK on 18 January.
I was curious to find out what drives people to make poor decisions in the face of high levels of risk. And, judging by the briefing’s popularity, a wide range of individuals from a multitude of businesses felt they could learn something too.
The decision-maker
Lamberts opened the discussion by announcing that ‘human judgement is generally frail and irrational’. But we are not worried yet. This may be true of novices, we think, but we leave the big decision-making to ‘experts’ or ‘groups of experts’, who are obviously better equipped to make these decisions because of their specialist knowledge and experience.
Not according to Lamberts and Chater. They claim that ‘expert judgement’ is a myth: in over 100 different studies, experts often performed much worse than probabilistic statistical models which predict an uncertain outcome from predictive variables, or cues.
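To make that concrete, here is a minimal sketch of the kind of cue-based model these studies pit against expert judgement: a fixed, linear weighting of observable cues squashed into a probability. The scenario, cue names and weights are all invented for illustration; in practice the weights would be fitted to historical outcome data.

```python
import math

# Hypothetical cues for judging whether a loan applicant will default.
# Cue names and weights are invented for illustration only.
WEIGHTS = {
    "late_payments": 0.9,    # late payments in the last year (more -> riskier)
    "debt_to_income": 2.0,   # debt-to-income ratio (higher -> riskier)
    "years_employed": -0.3,  # years in employment (longer -> safer)
}
BIAS = -1.5

def default_probability(cues: dict) -> float:
    """Weigh the cues linearly, then squash the score to a probability."""
    score = BIAS + sum(WEIGHTS[name] * value for name, value in cues.items())
    return 1.0 / (1.0 + math.exp(-score))  # logistic function

applicant = {"late_payments": 2, "debt_to_income": 0.4, "years_employed": 5}
print(f"predicted default probability: {default_probability(applicant):.2f}")
```

The striking finding is not that such models are sophisticated (they are not), but that a mechanically consistent weighting of cues repeatedly beat unaided expert intuition.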
The next bombshell Lamberts dropped was that although academic training was found to improve an ‘expert’s’ ability to predict outcomes, additional experience does not help at all: experts with lots of experience in their field were found to be just as bad at making big decisions as novices. Lamberts concluded that the role of experience is often vastly overrated. ‘It has a mythical quality,’ he said, ‘and is not justified by the findings.’
Lamberts also described what he called ‘the pervasiveness of overconfidence’, where people think they know more than they really do. According to Lamberts, two effects typically emerge from this:
• Overconfidence: people are more confident than they are accurate.
• Hard-easy effect: overconfidence increases with the difficulty of the task.
Overconfidence was found to affect the judgement of experts too, although novices are more overconfident in their decisions than experts are. The amount of confidence people have in their decisions is affected by the following:
• Time between prediction and outcome.
• Feedback from the outcome of the decision.
• Degree of control people perceive they have over the outcome of their decisions. People often overestimate how much they can control a situation.
Lamberts said that experts suffer less from overconfidence because they are better ‘calibrated’ than novices. For example, weather forecasters can calibrate their judgements accurately because the time between their predictions and the outcome is short. The quality of feedback they get from their predictions is also good, because it is easy to see why and when a forecast was wrong, and people are quick to respond when predictions are inaccurate.
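Calibration is straightforward to check when feedback arrives quickly: group past predictions by the confidence stated at the time, and compare each group’s stated confidence with the fraction of predictions that actually came true. A minimal sketch, using invented forecast data:

```python
from collections import defaultdict

# Invented forecast history: (stated probability of rain, whether it rained).
forecasts = [
    (0.9, True), (0.9, True), (0.9, False),   # days called "90% chance"
    (0.6, True), (0.6, False), (0.6, False),  # days called "60% chance"
    (0.2, False), (0.2, False), (0.2, True),  # days called "20% chance"
]

buckets = defaultdict(list)
for stated, it_rained in forecasts:
    buckets[stated].append(it_rained)

# A well-calibrated forecaster's "p" predictions come true about p of the time.
for stated in sorted(buckets, reverse=True):
    outcomes = buckets[stated]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {stated:.0%} -> came true {hit_rate:.0%} ({len(outcomes)} forecasts)")
```

In this toy record the ‘90%’ forecasts come true only two thirds of the time, which is the overconfidence pattern Lamberts describes; a fast, visible feedback loop like this is what weather forecasters have and most business decision-makers lack.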
Another factor which contributes to poor decision-making is what Lamberts described as ‘the fundamental attribution error’: we assume other people behave or think in a certain way because they are a particular type of person, a pessimist say, and not because they are driven by cues or the situation. However, we do not apply this belief to ourselves; we attribute our own actions and behaviour to the situation at hand. This has important implications for individual and group decisions because:
• Individuals can maintain high levels of self-confidence by attributing their own errors and failures to the situation.
• A ‘blame’ culture emerges in which others are judged by a single failure attributed to them, not the situation.
• Avoiding blame becomes very important so individuals conform to the group’s decision.
So human judgement is frail and unreliable, and experts fail too, although their academic training and better calibration make them a little more reliable than novices. It is therefore not surprising that we make such poor decisions in the face of adversity, particularly when we are so confident in the ability of experts, and of ourselves, to make good decisions and are spurred on by group confidence.
Chater says that big decisions often start to go wrong during the initial planning stage. An individual’s overconfidence in his or her judgement can be magnified by a group. Sometimes people sense that their own judgement is frail, but in a group it seems stronger: if you think someone else agrees with you, then you must be right; and if you doubt someone else’s decision but another member of the group agrees with them, then maybe they are right after all.
Chater says that everyone seeks comfort from trusting everyone else’s judgement, and this develops into what he calls ‘groupthink’, where people become more confident in a decision than when they first got involved.
What often makes matters worse is when individuals do not voice their concerns because they are intimidated by others with more power than themselves, or they are afraid of being ridiculed or later blamed by members of the group if things start to go wrong.
Taming human irrationality
As the danger signs loom, Chater describes how people develop an illusion of control: they think they have control over a situation or the outcome of a decision because of their power and status, but in reality this is rarely the case. Anyone who questions what is now the group’s decision is deemed a traitor, and as a result the group tends to become stronger and more polarised, particularly if people outside the group suggest that the decision is a mistake. Chater says that this type of behaviour is quite evident in religious cults and other closed groups.
It is clear to people outside the group that the project should stop, but inside the group, Chater says, there is a struggle for consistency as people question why they should stop now if they did not stop before. Again, to defect would be treachery, so members of the group continue to go along with the decision and conform to avoid being blamed for what is now clearly a mistake. The fundamental attribution error also sets in, driving people to avoid blame, blame others, deny the mistakes and follow through with the decision, thinking that at least it is one way out.
At this point in the discussion the audience looks worried. With all of this psychology to overcome, we start to wonder whether we will ever make a good business decision. But all is not lost. Chater has a few suggestions which he believes will help us to make better big decisions in the future.
Firstly, he recommends we take an objective view of the situation by:
• Using objective statistics if possible.
• Collecting statistics on past forecasts and decisions so we can adjust future expectations or predictions.
• Accepting imprecision.
• Being open about uncertainty.
The next step is to break the ‘groupthink’ mentality. Chater suggests that this can be done by:
• Requesting people’s judgement independently and anonymously. A good way to do this is to ask people to vote in secret; a toy sketch of this idea follows the list.
• Setting ground-rules. For example, make sure each discussion starts afresh to prevent intellectual baggage being carried across from earlier talks.
• Creating adversarial discussions. Ask people outside the group who have no direct interest in the project to comment on the decision’s progress at regular intervals. Chater says that this often helps people to face issues they would otherwise have ignored or found difficult.
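As a toy illustration of the secret-vote idea, the sketch below aggregates judgements that were collected independently, so that no loud or powerful voice can anchor the rest. The scenario and figures are invented for illustration.

```python
import statistics

# Invented scenario: cost estimates (in £k) for a project, collected in
# secret before any discussion, so no estimate anchors the later ones.
independent_estimates = [120, 150, 90, 200, 140, 130, 160]

# Report the spread as well as the centre: being open about uncertainty
# was one of Chater's recommendations above.
centre = statistics.median(independent_estimates)
low, high = min(independent_estimates), max(independent_estimates)
print(f"median estimate: £{centre:.0f}k, range £{low}k to £{high}k "
      f"({len(independent_estimates)} secret votes)")
```

The median resists being dragged around by a single extreme vote, and publishing the range alongside it keeps the group honest about how much the estimates actually disagree.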
Finally, Chater recommends that you provide yourself and your colleagues with a ‘get-out’ plan, in case things start to go wrong, by:
• Specifying ‘pull-out’ criteria at the outset.
• Placing ‘pull-out’ decisions in the hands of group outsiders, so someone with an objective view has the power to undo all your hard work if they think the decision is not in the best interests of the business or the individuals it involves. This process may be harsh but it avoids the ‘blame culture’ and intimidation which can arise from group conformity.
• Accepting that, in reality, human decision-making is quite often frail.