Book review of Noise by Daniel Kahneman, Cass R. Sunstein and Olivier Sibony
A Guardian columnist once joked that if a cabal of evil psychologists had to concoct a crisis humanity would be hopelessly ill-equipped to address, they couldn’t have done better than climate change. Humans are wired to respond only to tangible problems, or at least to those that are easy to narrativise. While most people wouldn’t hesitate to jump into a dirty Amsterdam canal and ruin a £300 jacket to save a drowning child, many wouldn’t donate £300 to a faraway charity that could arguably use that money to save a child’s life. The psychological reward from an invisible victory just doesn’t summon our main character energy.
The latest book by Nobel Prize winner Daniel Kahneman, co-written with legal scholar Cass Sunstein and McKinsey consultant Olivier Sibony, explores an enemy whose defeat can only yield an invisible victory: noise. Not the type of noise your flatmate might nod to as you show off your latest cheap vinyl purchase, but a statistical phenomenon. Noise refers to unwanted variability in judgment. It aggregates into system noise when an organisation made up of decision-makers who should, in principle, be interchangeable and come to the same conclusions (such as a legal system or a loan office) inadvertently makes inconsistent judgments.
The authors deserve credit for developing the first comprehensive framework for system noise and advancing the debate on the tradeoffs incurred from accepting noise. However, they spend too much time rehashing mainstream decision-science topics related to heuristics, biases, and decision-making tactics, and simply tying them back to noise. Despite this flaw, the book’s significant academic contribution sets it apart from the Gladwell-esque pop psychology books that increasingly dominate the non-fiction section of your local bookstore.
What is system noise?
The authors’ framework breaks system noise down into two components. First, there is level noise, driven by persistent individual dispositions that generate judgments that appear random at the team level. To illustrate, consider two recruiters each interviewing half of a candidate pool to hire a team. If each recruiter is biased towards a different gender, the resulting team may still be gender-balanced overall. However, each bias leads to bad individual hiring decisions, which appear random at the team level.
The second component, pattern noise, is driven by idiosyncratic reactions to individual cases. It can be further divided into stable pattern noise and occasion noise, the distinction being whether the response to a given situation would be repeated. Stable pattern noise reflects our constant individual susceptibilities, such as when an investor favours a founder because they remind them of a friend. Occasion noise, on the other hand, is caused by moment-to-moment variability in judgment, which may be driven by an external factor, like a caffeine hit, but also by the fact that our brains are inherently transient. Contrary to the idea of a “true self,” our brain’s neural networks never compute exactly the same output.
A great example deployed to illustrate occasion noise involves wine experts at a major competition who were tricked into tasting the same wines twice and scored only 18% of them identically. If pressed for an explanation, these experts would have likely produced confident arguments garlanded with words such as “quaffable” and “barrique.” Like these “experts”, we are overconfident and in denial of our objective ignorance. The authors follow this example with explanations of how we view most outcomes in life as logical and predictable in hindsight. Our brain’s search for explanations is almost always successful because patterns can be drawn from a bottomless reservoir of facts and illusions.
Why are our stories so sadly but understandably disconnected from reality? Noise answers this question with an astute romp through heuristics and biases, explaining how our emotional brains embark on simplifying missions to prevent our infinitely complicated world from exploding our psychic seams. Pair that stuff up with our unsophisticated language, overconfidence and the herd mentality that got us everything from nation states to gorpcore fashion, and you get bad decision systems. It’s interesting to learn how heuristics and biases specifically create noise, but since these topics are mainstream, readers of behavioural psychology books won’t gain many new insights.
Finding noise
Confusingly, the distinctions between the drivers of system bias, level noise and stable pattern noise are fuzzy. We can never attribute the error in a single judgment to one of these components. We can only break down system noise through a statistical analysis, using data from multiple responses to a single judgment opportunity or from responses to a recurring one.
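To make this concrete, here is a minimal sketch of such a noise audit in Python. The underwriting scenario and every figure in it are invented for illustration; the decomposition follows the additive logic described above: level noise is the variance of the judges’ average levels, and pattern noise is what remains once judge and case effects are removed.

```python
import numpy as np

# Hypothetical noise audit: 4 underwriters each price the same 5 cases.
# Rows = judges, columns = cases (values are quoted premiums, in £k).
judgments = np.array([
    [10.0, 14.0,  9.0, 12.0, 11.0],
    [13.0, 16.0, 12.0, 15.0, 14.0],
    [ 9.0, 13.0,  8.0, 11.0, 10.0],
    [12.0, 17.0, 10.0, 14.0, 13.0],
])

grand_mean = judgments.mean()
judge_means = judgments.mean(axis=1, keepdims=True)  # each judge's average level
case_means = judgments.mean(axis=0, keepdims=True)   # each case's average judgment

# Level noise: variance of the judges' average levels.
level_noise_var = ((judge_means - grand_mean) ** 2).mean()

# Pattern noise: what remains after removing judge and case effects.
residuals = judgments - judge_means - case_means + grand_mean
pattern_noise_var = (residuals ** 2).mean()

# System noise (in variance terms) is the sum of the two components.
system_noise_var = level_noise_var + pattern_noise_var
print(level_noise_var, pattern_noise_var, system_noise_var)
```

Note what the audit cannot do: it tells you how much of each kind of noise the system produces, not which component any single quoted premium contributed to.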
Measuring noise in verifiable judgments also requires the use of the Brier score, which rewards both calibration (how well your stated probabilities match observed frequencies) and resolution (how sharply your forecasts distinguish between outcomes). It’s a time-consuming task, and eager beavers inspired to measure system noise at work should not be surprised to find little appetite for the use of probabilistic reasoning, let alone for tracking the accuracy of forecasts.
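The raw score itself is simple: the mean squared difference between each stated probability and the 0/1 outcome, where lower is better. A small sketch, with made-up forecasts:

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who said "90% chance of rain" on two rainy days
# and "20%" on a dry day scores well:
print(brier_score([0.9, 0.9, 0.2], [1, 1, 0]))  # 0.02

# Hedging everything at 50% is consistent but uninformative:
print(brier_score([0.5, 0.5, 0.5], [1, 1, 0]))  # 0.25
```

The hard part is not the arithmetic but the discipline: getting colleagues to state probabilities in the first place, and to keep a record of them.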
Evaluative judgments are non-verifiable, meaning they don’t have a true value, and so can’t be assessed with the Brier score. Instead, they can only be evaluated based on the quality of the thinking that goes into them. But can they be noisy? Absolutely. Good evaluative judgments should come from a consistent set of principles and measures that yield the same output. For example, judges should make consistent decisions; otherwise, they are noisy.
Human-led tactics to reduce noise
More than a quarter of Noise is dedicated to explaining team tactics to reduce noise: breaking problems down into several components, using distinct and independent sources of judgment across these components, aggregating judgments, and aligning on common metrics. It’s interesting to hear how these tactics reduce error, but they’re conventional. Many companies, including Google, have been using them in their hiring processes for years.
The authors also touch on personal tactics to reduce noise: probability theory, statistics, biases, evidence evaluation, introspection, and delayed intuition. They do well not to linger on these topics, given they are also conventional and already well covered in popular decision-science books such as Philip Tetlock’s Superforecasting, Adam Grant’s Think Again, and Tim Harford’s How to Make the World Add Up.
There is nothing new about the decision-making tactics discussed in the book, which take up a big chunk of its pages, and it is disappointing that the authors never directly acknowledge this. However, it is useful to see how these mainstream approaches, generally only discussed as tactics for reducing bias and increasing precision, can also promote consistency.
Relinquishing power
When it comes to reducing noise, human tactics are no match for algorithms. In fact, even simple mechanical rules, such as regression models with integer weights, often outperform experts. This is partly because such models are bound by rules while human experts are not. Algorithms are superior not just because they can perform harder calculations, but because they’re consistent.
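Here is a sketch of what such a mechanical rule can look like, in the spirit of the “improper linear models” literature. The feature names and integer weights below are invented for illustration, not taken from the book:

```python
def score_loan_applicant(income_band, years_employed, missed_payments):
    """Crude integer-weight rule: chosen by judgment, not fitted by regression."""
    return 2 * income_band + 1 * years_employed - 3 * missed_payments

# Three hypothetical applicants: (name, income band, years employed, missed payments)
applicants = [
    ("A", 3, 5, 0),
    ("B", 4, 2, 2),
    ("C", 2, 8, 1),
]
for name, inc, yrs, missed in applicants:
    print(name, score_loan_applicant(inc, yrs, missed))
```

The point is not that the weights are clever. It is that the same inputs always produce the same score: occasion noise is zero by construction, which is where much of the rule’s advantage over the expert comes from.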
The authors argue that humans become more confident in their judgement when inventing and applying complex rules, but this complexity does not necessarily lead to better predictions. Our entangled and irregular thought patterns give us the illusion that we are capturing complexity and subtlety, but we often just produce noise.
It is challenging to convince professionals that a machine can outperform their judgment, as so much is at stake, including pride and livelihood. This difficulty brings to mind Upton Sinclair’s famous adage: “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
Convincing people to accept decisions made by algorithms may prove more difficult than convincing human experts to relinquish decision-making power. The dehumanising impact of a rigid black box making unexplained decisions is satirised in the show Little Britain, where a dead-eyed receptionist consults her computer when asked a question and blankly responds “computer says no.” This clever comedy crystallises the common perception that treatment without individualised human consideration is grossly unfair.
Would people prefer to receive an important decision made by an algorithm that provides some level of explainability through an engaging user experience, rather than from an inaccessible Little Britain character? Perhaps, but a veneer of technical legitimacy would also be needed. Just as it would be socially unacceptable to deploy driverless cars that are only as safe as human drivers, algorithms will need to outperform our judgment before we defer to them.
But would algorithmic outperformance be enough? To answer this question, imagine being recommended for a promotion by your entire team, but then being rejected after interviewing with an AI chatbot. Would you be comforted if presented with conclusive evidence that the AI makes vastly better HR decisions than your human team? No! The obstruction of simple earthly justice would drown out any mention of statistically significant results.
Calibrating the tradeoffs incurred from noise-reducing algorithms is a messy exercise that requires constant fine-tuning. While there are many debatable use cases, there are also overwhelmingly positive ones that violate people’s perception of control and dignity. Unfortunately, it’s people’s illusion of control and dignity rather than the more rational tradeoffs that will largely determine where boundaries are set.
Those who think the use of algorithms involves few rational tradeoffs should read the book’s take on the topic. The tradeoffs discussed include the extent to which algorithms reduce error, the costs of implementation, and the implications of an inflexible system on our ability to evolve our rules and values. Although it’s a short list, it is certain to be expanded as our values evolve and use cases for AI increase.
AI chatbots and noise
Unfortunately, Noise does not discuss AI chatbots. It was published before OpenAI demonstrated that the deep learning revolution could produce powerful language models that can spout advice that’s closer to the truth than we might like. It’s now become clear that AI will not just automate intelligence but also inform everyday human decision-making, and consequently impact how much noise we produce.
AI chatbots could be effective tools for reducing noise while maintaining illusions of human control, especially if they’re fine-tuned for specific use cases and have good explainability. Decision-makers such as doctors, lawyers and hotel receptionists could keep a strong sense of agency and their customers a sense that they’ve received individualised human treatment. It’s funny to consider that AI chatbots could create a world with many “expert decision-makers” who never actually influence any decisions.
I think AI chatbots unfortunately have the potential to amplify noise. They could be untuned, over-personalised, ambiguous, hallucinatory, and lack explainability. As a result, they could generate idiosyncratic human responses at scale. They could do more to melt our shared reality than any algorithmically-driven, hyper-targeted malicious content strategy.
Breaking new ground
The authors’ framework for understanding noise offers an academic foundation that will become increasingly relevant, but also in need of an upgrade. Those eager to see the field advance should be excited to know that Kahneman expects the framework to be replaced within a few years.
The next book on noise should resist the temptation to arrive padded with recycled conventional advice on smart decision-making, while matching Noise’s academic rigour. Unfortunately, keeping out cherry-picked anecdotes and psychobabble is difficult when publishers require books to be comprehensive, self-contained and entertaining. Nonetheless, it’s important that more books follow; we need to continue raising awareness about noise.
We need to create a narrative for noise
The authors argue that creating a narrative for noise is harder than creating one for bias, because noise lacks causal explainability. Correcting an identified bias can give you a sense of achievement, but reducing noise can only yield an invisible victory, visible only through a statistical lens. There’s truth in their argument, but we should remember that history does not lack examples of narrative-driven movements to fight invisible enemies. We can summon the same fighting spirit against noise. Talk of contributions to Mean Squared Error isn’t going to do it. If we want to rouse a feeling of indignation that will inspire droves of people to glue themselves to the Old Bailey wearing t-shirts saying “AI now”, we’re going to need a good story.