'Model Land' and its phantasies threaten us all

Image of global coronavirus cases by Martin Sanchez, Unsplash.

This blog is written by Dr Erica Thompson and Professor Mark Fenton-O'Creevy. Erica is Senior Policy Fellow at the Centre for the Analysis of Time Series at LSE and a Fellow of the London Mathematical Laboratory, where she leads the research programme on Inference from Models. Mark is Professor of Organisational Behaviour in The Open University's Business School. Both are members of the CRUISSE network, set up by the UK Research Councils in 2017 to advise on the research needed to support real-world decision-making by business and government.

Our politicians are awash with scientific and other advice and wish to feel informed and in control. With the current pandemic they face radical uncertainty and must make life-and-death decisions that could well go wrong. They risk being blamed, and to protect themselves insist they are following “the science”.

As a mathematician and a psychologist from a network studying real-world decision-making under radical uncertainty, we work on the dangers of investing scientific models with phantastic (wished-for but unrealistic) power to resolve uncertainty and produce optimal solutions.

Mathematical modelling offers a playground in which to explore a state of affairs without commitment. In “Model Land” we make simplifying assumptions, define clear goals and optimise our strategies against those goals. In the real world, assumptions rest on significant uncertainties, and goals are rarely uncontested, clear or singular.

We not only want to minimise direct deaths and suffering resulting from COVID-19; we also want to keep hospitalisations below a threshold, keep the economy functioning, maintain community cohesion, safeguard civil liberties and protect against unintended and unforeseen consequences we do not yet comprehend.

These additional goals are not minor tweaks to an optimisation performed by a model. They express contested human values that are fundamental to our self-identity. Even when we all agree on an outcome – eradicating the virus – it will often be achievable only at the cost of serious consequences elsewhere.
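To make the point concrete, here is a minimal sketch of what a Model Land optimisation looks like – a toy epidemic model with invented numbers, written for this blog as an illustration, not any model used in government advice. Even the decision to express “economic cost” in units comparable with deaths is an assumption, and exactly the kind of value judgment no model can make for us.

```python
# A toy "Model Land" optimisation (illustrative only; every number is invented).
import numpy as np

def toy_sir_deaths(contact_reduction, beta=0.3, gamma=0.1, ifr=0.01,
                   days=365, n=1_000_000, i0=100):
    """Crude discrete-time SIR model returning total deaths.

    Each argument is a simplifying assumption of the kind Model Land
    demands: a fixed infection-fatality rate, homogeneous mixing, and a
    single number standing in for the whole of policy.
    """
    s, i = float(n - i0), float(i0)
    effective_beta = beta * (1.0 - contact_reduction)
    for _ in range(days):
        new_infections = effective_beta * s * i / n
        recoveries = gamma * i
        s -= new_infections
        i += new_infections - recoveries
    return ifr * (n - s)  # deaths = fatality rate x everyone ever infected

controls = np.linspace(0.0, 0.9, 91)  # candidate levels of contact reduction

# Goal 1 alone: minimise deaths. The "optimum" is simply the harshest policy.
deaths = np.array([toy_sir_deaths(c) for c in controls])
print("Optimal policy, deaths only:", controls[np.argmin(deaths)])

# Add a second goal: an invented economic cost, converted (a value judgment!)
# into death-equivalent units. The "optimum" moves, and where it lands depends
# entirely on a cost curve and a weighting that come from us, not the model.
econ_cost = 5_000 * controls ** 2
print("Optimal policy, deaths plus cost:",
      controls[np.argmin(deaths + econ_cost)])
```

Change the cost curve, the fatality rate or the time horizon and the “optimum” moves again. In Model Land each answer is exact; which answer to trust is a question the model cannot settle.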

What modelling offers in such cases is not an analysis of the best thing to do to achieve the outcomes we want, but insight. The process of building a model and recognising the assumptions it requires can exercise and extend our imaginations – though it offers no guarantee of doing so.

Treating model output as a real prediction of what we will see when deciding about pandemics, climate-change strategies or financial regulation will land us in trouble. Models tend to generate excessive confidence and a dominant story, encouraging inattention to data or to other possible stories that do not fit. Once we are committed, reframing the problem and recognising new data is difficult, not least because it feels frustrating and forces an admission of error – especially if it must be done in public by politicians.

More importantly, over-reliance on models is an abdication of the human responsibility to exercise judgment about facts, values and all the emotional and political context of difficult decision-making. To fetishise model output as objective – as in the fashionable jargon of “big data”, “machine learning” and “AI” – is counter-productive: the appearance of objectivity merely obscures the value judgments and socially mediated assumptions present in all model building and model interpretation.

Deciding what information and which values matter in tough decisions is a task for dialogue between human minds. Uncertainty in high-impact cases necessarily induces anxiety. So, while investing models, or “science”, with magic powers to resolve uncertainty without human intervention may appear to offer a way out, it should in fact make everybody extremely anxious.

The problem with anxiety is the temptation it creates to close off uncertainty with framings that carry unrecognised baggage. We know government risk assessments anticipated a pandemic and tried to prepare for it. But the UK’s recent experience was with influenza – unlike Taiwan or South Korea, whose recent experience was with SARS – so early decisions rested on the framing “COVID-19 is like the flu”, which was unfortunate.

Stable accounts of “what is going on here” are needed to support preparatory action. But in this case they became too stable, closed off from “known neglecteds”, and turned from a support into a hindrance – much as models did before the financial crisis.

In the face of uncertainty, successful anticipatory thinking is not about making good predictions; it is about agility in judging what information to gather to grasp an unfolding landscape, and constant curiosity about what might have been left out. We must not avoid ambivalence by getting locked into a single story, a single class of models or a single perspective. This is why algorithms – and humans masquerading as algorithms, or passing the buck to algorithms – are fragile decision-makers under many real-life conditions.

One hallmark of true human intelligence, and of leadership, is the ability to act despite uncertainty, multiple competing goals, inadequate information and insufficient time, while remaining open to the constant need to revise how circumstances are understood.

Too much decision-making research has been done in separate silos, each devoted to optimising a particular modelling perspective without engaging with the experiences decision-makers face when using it. To develop the competence of policy-making systems, we need to treat models as useful fictions rather than as phantasies or magic devices for calculating the future, take messy lived experience seriously, and return valuable model insights from “Model Land” to the real world. As we navigate an exit from the COVID-19 crisis, we will need adaptive policymaking, and decision-makers who have the courage to tolerate the uncertainties we face rather than becoming trapped in Model Land.

Read the original blog on the Emotional Finance website