A new chat-based artificial intelligence (AI) system is making waves due to its apparent ability to provide plausible answers to questions in a wide range of domains, not least law. This is potentially a huge step for those interested in issues such as legal adjudication by computers. ChatGPT is a chat-based interface to OpenAI's GPT language models. Anybody can sign up and interact with the programme to see for themselves what all the fuss is about.
Others, such as Gary Smith writing in Slate, are more sceptical. Before thinking about the objections to AI in law, try answering the following question:
‘A bat and a ball together cost £1.10. The bat costs £1 more than the ball. How much do the bat and ball each cost?’
We'll come back to the answer shortly.
Smith's key objection is that the attention is unwarranted: the programme is not really intelligent in any meaningful sense. Though ChatGPT can give plausible answers to many questions (and thus often passes the so-called 'Turing Test'), Smith argues that it does not really understand what it is saying. If this is right then, by extension, an AI programme dispensing justice would not really understand concepts like fairness. This cuts to the heart of objections to the AI project. However, the view of proponents of AI may come as something of a surprise, particularly to lawyers.
The debate may be best illustrated by the 'Chinese Room' thought experiment devised by John Searle (Searle, 1980). Imagine a room containing an English-speaking human operator together with huge indexes of books. When a message written in Chinese is passed into the room, the operator looks those characters up in the books and responds with whatever the books tell them to. While those responses may be perfectly intelligible to a Chinese speaker, Searle's argument is that nothing about the room understands Chinese (nor, by the same token, would a similar room understand justice).
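The set-up can be caricatured in a few lines of code. The sketch below is purely illustrative: the lookup table, its entries, and the names are all my own invention, standing in for Searle's huge indexes of books.

```python
# A toy Chinese Room: input symbols are mapped to output symbols by rote
# lookup. Nothing here "understands" the exchange, yet the replies may
# look perfectly sensible to an observer. The entries are invented
# placeholders, not real rule books.
RULE_BOOKS = {
    "你好吗？": "我很好，谢谢。",          # a canned reply to a greeting
    "今天天气怎么样？": "今天天气很好。",  # a canned reply about the weather
}

def operator(message: str) -> str:
    """Look the symbols up in the books and return whatever they dictate."""
    return RULE_BOOKS.get(message, "请再说一遍。")  # fallback: 'please say that again'

print(operator("你好吗？"))  # plausible output, zero comprehension
```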
But philosophers such as Daniel Dennett (2013, pp. 319–329) bite the bullet and argue that the Chinese Room thought experiment is misleading. The 'systems reply' is that it is the whole system that does the understanding, not the human operator. Proponents of 'strong AI' believe that the human mind is the product of numerous much simpler information-processing systems, and that these could be replicated on any sufficiently complicated information-processing device (and GPT-3 is very complicated indeed). Thus, a suitably advanced AI system could dispense justice as reliably as a human judge. Many lawyers will be surprised at this view, but it is one that I am sympathetic to.
Smith, as a sceptic, points to the following exchange to support his case:
‘Human: I have 47 pennies. I throw away 44 of these pennies and divide the remaining pennies into three groups. How many pennies are in each group?
AI: There would be 3 pennies in each group.
Human: Are you sure?
AI: Yes, I am sure. The result of 44 divided by 3 is 14.4, so the remainder after accounting for 44 pennies would be 3. Therefore, each group of pennies will have 3 pennies.’
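The arithmetic the AI should have performed is trivial. A minimal sketch in Python makes the point (the variable names are mine, purely for illustration):

```python
pennies = 47
remaining = pennies - 44      # 3 pennies are left after 44 are thrown away
per_group = remaining // 3    # divided into three groups: 1 penny each
print(remaining, per_group)   # prints: 3 1
```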
As the snippet shows, the correct answer is one penny in each group, so the AI is very wrong. But let's return to the bat and ball problem. As you can tell from the prices, this is quite an old example. What was your answer? Compare it with ChatGPT's answer when I asked it:
‘The bat costs £1 more than the ball, so if x is the cost of the ball, the cost of the bat is x + 1.
The cost of the bat and ball together is £1.10, so x + x + 1 = 1.10.
This simplifies to 2x + 1 = 1.10.
Subtracting 1 from each side gives 2x = 0.10.
Dividing each side by 2 gives x = 0.05.
The cost of the ball is £0.05.
The cost of the bat is £0.05 + £1 = £1.05.’
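ChatGPT's working checks out; here is a minimal sketch in Python confirming the algebra (again, the variable names are my own):

```python
# Solve 2x + 1 = 1.10, where x is the price of the ball in pounds.
ball = (1.10 - 1.00) / 2           # x = 0.05
bat = ball + 1.00                  # exactly £1 more than the ball
print(f"ball £{ball:.2f}, bat £{bat:.2f}, total £{ball + bat:.2f}")
# ball £0.05, bat £1.05, total £1.10

# The intuitive (wrong) answer fails the 'costs £1 more' condition:
print(f"£{1.00 - 0.10:.2f}")       # the bat would cost only £0.90 more
```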
Here, ChatGPT gets the answer right, whereas most people intuitively answer £0.10 for the ball and £1.00 for the bat (which is wrong: that makes the bat only £0.90 more expensive than the ball). So reliably generating right answers cannot be the touchstone for 'real' intelligence; otherwise, many humans would fail the test too.
Ultimately, the mechanisms that allow people to give right (or wrong) answers, or judges to dispense justice, are likely to be a combination of genetically inherited capacities and learnt behaviour.
ChatGPT acquires its mechanisms through a very different process, trawling and synthesising vast numbers of texts, but ultimately achieves a broadly similar level of performance. While dispensing justice may well be some way off (and subject to many challenges, such as avoiding synthesising all-too-human biases and prejudices), the astounding performance of ChatGPT in many domains provides quite convincing evidence that might begin to overcome the scepticism of lawyers.
References:
Dennett, D.C. (2013) Intuition Pumps and Other Tools for Thinking. Penguin UK.
Searle, J.R. (1980) ‘Minds, Brains, and Programs’, Behavioral and Brain Sciences, 3(3), pp. 417–424.
Paul is Convenor of the Brain Sciences & Law (BraSciL) Research Cluster, Lecturer in the Open University Law School, and a practising barrister.
His doctoral thesis focusses on cognitive science and law in the judicial decision-making context.
Mastodon - Paul Troop https://mastodon.green/@paultroop