'Algorithm-assisted decision-making in the public sector: framing the issues using administrative law rules governing discretionary power' by Marion Oswald, in (2018) Philosophical Transactions of the Royal Society A, comments:
This article considers some of the risks and challenges raised by the use of algorithm-
assisted decision-making and predictive tools by the public sector. Alongside, it
reviews a number of long-standing English administrative law rules designed to
regulate the discretionary power of the state. The principles of administrative law
are concerned with human decisions involved in the exercise of state power and
discretion, thus offering a promising avenue for the regulation of the growing number
of algorithm-assisted decisions within the public sector. This article attempts to re-
frame key rules for the new algorithmic environment and argues that ‘old’ law –
interpreted for a new context – can help guide lawyers, scientists and public sector
practitioners alike when considering the development and deployment of new
algorithmic tools.
Introduction
In 1735, in this very journal, one Reverend Barrow published a short piece, hardly a
page in length, in which he surveyed births, deaths and overall population in the
parish of Stoke-Damerell in Devon. He notes that ‘the Number of Persons who
died, is one more than half the Number of Children born; and that about 1 in 54 died’
in a year when the ‘General Fever’ infected almost all the inhabitants. He further
points out that one of the persons buried was ‘a Foreigner brought from on board a
Dutch Ship’ and two more were drowned from on board a Man of War ‘but that the
Ships Companies are not included in the Number of Inhabitants.’ This data, together
with ‘Experience and Observations, both of my self and better Judges’ leads him to
‘reckon the Parish of Stoke-Damerell as healthful an Air as any in England.’
Fifty-four years later, we find William Morgan (communicated by a Reverend Richard
Price) promoting ‘the method of determining, from the real probabilities of life, the
value of a contingent reversion in which three lives are involved in the survivorship.’
In an age when prospects in society – and lines of credit – might be dependent on
one’s ‘great expectations’ of an inheritance, calculating the probability of achieving
that inheritance (known to lawyers as a contingent reversion) becomes of great
interest. For instance, I might transfer a piece of land on the following basis: to my
niece for her lifetime, remainder to my nephew and his heirs, but if my nephew dies
in the lifetime of my niece, then the land reverts to me and my heirs; I have a
‘reversionary interest’ in the land. The question for my eighteenth-century nephew is
how to value the sum that might be payable on the contingency that he will survive
his sister. The method and calculations proposed by Morgan are set out at length
and in considerable detail so as to enable a reader to test and critique them. To this
author’s non-expert eye, two points are striking. First, that the calculations appear to
be based on group data, i.e. on the number of persons living at the age of my nephew,
and at the end of the first year, second year, third year and so on, from the age of my
nephew. Secondly, the article goes on to criticise a rule proposed by a certain ‘Mr
Simpson’ and points to its results as deviating ‘so widely from the truth as to be unfit
for use’ [my emphasis] in some cases producing ‘absurd’ results.
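Morgan’s group-data reasoning can be restated in modern terms. The following is an illustrative sketch only, not Morgan’s actual method: the life table is invented for the example, and the calculation simply estimates, from cohort survivor counts, the probability that the nephew outlives the niece.

```python
# Illustrative sketch only: a greatly simplified, modern restatement of the
# kind of group-data reasoning Morgan describes. The life table below is
# invented for the example; Morgan's actual method is far more elaborate.

# Hypothetical life table: number of persons from a cohort still alive at each age.
living = {age: 1000 - 8 * (age - 20) for age in range(20, 80)}

def survival_prob(living, age, years):
    """P(a person now aged `age` is alive `years` from now), from group counts."""
    return living[age + years] / living[age]

def prob_outlives(living, age_a, age_b, horizon):
    """Approximate P(A outlives B) within `horizon` years: sum, over each year,
    the chance B dies in that year times the chance A is alive at its end."""
    total = 0.0
    for t in range(horizon):
        p_b_dies = (living[age_b + t] - living[age_b + t + 1]) / living[age_b]
        total += p_b_dies * survival_prob(living, age_a, t + 1)
    return total

# Probability a nephew aged 25 survives a niece aged 30, over a 40-year horizon:
p = prob_outlives(living, 25, 30, 40)
```

A Morgan-style valuation would then discount the sum payable on the contingency by this probability (and by an interest rate); that further step is omitted here.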
A modern reader might be tempted to regard these articles as illustrations of a naïve
age or of a context long past, or to highlight the lack of causal evidence for Reverend
Barrow’s conclusion about the ‘healthful’ nature of his parish. Yet both articles tackle
issues with which we remain concerned today: the healthiness (or otherwise) of a
community, the reasons behind it and the life expectancy of an individual when
compared to others. Risk forecasting and predictive techniques to aid decision-
making have become commonplace in our society, not least within public services
such as criminal justice, security, benefit fraud detection, health, child protection and
social care. We should be better at it than our eighteenth-century clergymen. It has
become almost unnecessary to say that we now inhabit an information society.
Information technologies driven by the flow of digital data have become pervasive
and everyday, often leading to the assumption that access to vast banks of (often
individualised) digital data, combined with today’s networked computing power and
complex algorithmic tools, will lead automatically to greater knowledge and insight,
and so to better predictions.
Knowledge, however, is not the same as information (as many before me have
pointed out). Knowledge, Hassan argues, ‘emerges through the open and experiential
and diverse (and often intuitive) working and interpreting of raw data and
information.’ Reverend Barrow’s conclusion as to the healthfulness of his parish,
for instance, was based, not only on the outcome of analysis of raw data, but on
additional ‘experience and observations’ of himself and others. Some criticise such
human ‘intrusion’ on the data as casting further doubt on the conclusion. Grove and
Meehl, leading proponents of the use of statistical, algorithmic methods of data
analysis over clinical methods, argued that ‘To use the less efficient of two prediction procedures in dealing with such matters is not only unscientific and irrational, it is
unethical. To say that the clinical-statistical issue is of little importance is
preposterous.’ It is this often-claimed superiority, together with the potential for
more consistent application of relevant factors often taken from large datasets, that
give algorithmic tools their appeal in many public sector contexts. Although this
article is written from a legal perspective, it draws attention to arguments made in
the ongoing ‘algorithmic predictions versus purely human judgement’ debate and
applies these to the legal principles discussed below. It is particularly concerned with
algorithm-assisted decisions, whereby an algorithmic output, prediction or
recommendation produced by a machine learning technique is incorporated into a
decision-making process requiring a human to approve or apply it. ‘Machine learning
involves presenting the machine with example inputs of the task that we wish it to
accomplish. In this way, humans train the system by providing it with data from
which it will be able to learn. The algorithm makes its own decision regarding the
operation to be performed to accomplish the task in question.’ Machine learning
algorithms are ‘probabilistic...their output is always changing depending on the
learning basis they were given, which itself changes in step with their use.’
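The quoted point – that a learned model’s output shifts as its ‘learning basis’ changes – can be illustrated with a toy example. The scorer below is invented for this purpose, not drawn from the article: it estimates outcome frequencies from whatever examples it has been shown, so the same input can yield a different score once more data arrives.

```python
# Toy illustration (invented, not from the article): a minimal "learned"
# scorer whose output changes as its training data (learning basis) changes.

class FrequencyScorer:
    def __init__(self):
        self.counts = {}  # feature -> [negative_count, positive_count]

    def train(self, examples):
        # "Presenting the machine with example inputs": each example is a
        # (features, outcome) pair, with outcome 0 or 1.
        for features, outcome in examples:
            for f in features:
                tally = self.counts.setdefault(f, [0, 0])
                tally[outcome] += 1

    def score(self, features):
        # Crude probability estimate from observed outcome frequencies.
        pos = sum(self.counts.get(f, [0, 0])[1] for f in features)
        total = sum(sum(self.counts.get(f, [0, 0])) for f in features)
        return pos / total if total else 0.5

model = FrequencyScorer()
model.train([(["prior_incident"], 1), (["prior_incident"], 1), (["no_history"], 0)])
before = model.score(["prior_incident"])   # every observed case was positive: 1.0
model.train([(["prior_incident"], 0)])     # the learning basis changes...
after = model.score(["prior_incident"])    # ...and so does the output: 2/3
```

The same query produces different outputs before and after the extra training example, which is precisely why a decision-maker relying on such a tool needs to know what data the current output rests on.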
Predictive algorithms and administrative law
The growth in the use of intensive computational statistics, machine-learning and
algorithmic methods by the UK public sector shows no sign of abating. What then
should be the role of the human when these tools are planned and then deployed,
particularly when the accuracy of an algorithmic prediction is claimed to be at least
comparable to the accuracy of a human one? I consider this question by reference to
a number of connected English administrative law rules, some of which (such as
natural justice) date back to before the origins of this journal. I have done this
because this body of law governs the exercise of discretionary powers and duties by
state bodies, and thus the humans working within them; discretion must be exercised
within boundaries or the public body is acting unlawfully. As Le Sueur explains, ‘The
assumption made until comparatively recently is that the decision-maker using the
executive power conferred by Parliament is a human being or an institution
composed of humans and that there is a human who will be accountable and
responsible for the decision.’
We see this today in witnesses called to give
evidence to Parliamentary Select Committees. The introduction of an algorithm to
replace, or even only to assist, the human decision-maker represents a challenge to
this assumption and thus to the rule of law, and the power of Parliament to decide
upon the legal basis of decision-making by public bodies. I argue below however that
English administrative law – in particular the duty to give reasons, the rules around
relevant and irrelevant considerations and around fettering discretion – is flexible
enough to respond to many of the challenges raised by the use of predictive machine
learning algorithms, and can signpost key principles for the deployment of algorithms
within public sector settings. These principles, although derived from historic case-law,
have already been applied to, and refined in response to, modern government:
the development of the welfare state, privatisation, the creation of executive
agencies and so on.
I then attempt to re-frame each of these rules in order to suggest how they could
guide future algorithm-assisted decision-making by public bodies affecting rights,
expectations and interests of individuals. In doing so, I do not recommend any
particular method of building or interpreting these systems – as to do so would
require consideration of many different contexts and informational needs – but rather
suggest principles to guide those engaged in future development work. I focus
attention on the requirements of legitimate decision-making from the perspective of
the public sector decision-maker, rather than from the perspective of the subject.
Fair decision-making in accordance with administrative law rules by its very nature
also protects the interests of the human subject of those decisions. I argue that
carefully considering exactly what the algorithm is or is not predicting, and explaining
this to the decision-maker at the point the results are displayed, is key to ensuring
this fairness.