The use of algorithmic systems for public functions comes with its own set of transparency expectations. The history of administrative decisions offers a rich body of jurisprudence that can inform the threshold of transparency for these decisions and how to apply it.

Read Part I of this series here.

The increased use of AI systems has been accompanied by concerns about bias, fairness, and a lack of algorithmic accountability, and by fears that algorithms have the potential to exacerbate entrenched structural inequality and threaten core constitutional values. While these concerns apply to both the private and the public sector, this section focuses only on public functions, as standards of comparative constitutional law dictate that the state must abide by the full scope of fundamental rights articulated in both municipal and international law.

What are administrative decisions?

Classically, administrative decisions were defined as those exercised by public bodies. Over time, the definition has expanded to include the discharge of all public functions. [1]

In the UK, a series of cases has established the duty to give reasons in several contexts, which we will deal with below. In other common law countries, such as South Africa, Canada, New Zealand, and Australia, the duty to give reasons has established itself as a core tenet of the rule of law. In the EU, Article 253 of the EC Treaty provides that “regulations, directives, and decisions ... shall state the reasons on which they are based.”

Transparency in the exercise of public functions

As public authorities begin to adopt AI into decision-making processes for public functions and to determine the ideal form of intervention(s), the extent to which, and the way in which, decision-making capabilities can be and are delegated to AI must be questioned from the perspective of their transformative impact on justice, civil liberties, and human rights. The justifications for transparency in the exercise of public functions draw from standards of due process and accountability evolved in administrative law, where decisions taken by public bodies must be supported by recorded justifications, a consequence of both procedural and substantive fairness. A further extension of this principle is the need for administrative authorities to record reasons in order to exclude or minimize arbitrariness. In some jurisdictions, such as the UK and the US, statutory obligations require administrative authorities to give reasoned orders.

The introduction of an algorithm to replace, or even merely to assist, the human decision-maker challenges these principles, and thus the rule of law and the power of legislatures to decide upon the legal basis of decision-making by public bodies. Marion Oswald argues that “administrative law—in particular, the duty to give reasons, the rules around relevant and irrelevant considerations and around fettering discretion—is flexible enough to respond to many of the challenges raised by the use of predictive machine learning algorithms and can signpost key principles for the deployment of algorithms within public sector settings.” [2]

The duty to give reasons

At the peak of British colonialism, an army officer, appointed as the governor and chancellor of a Caribbean island, famously approached Lord Mansfield for advice on how to decide cases, given his complete lack of training in the law. The famous judge advised him never to provide his reasoning in any matter, “for your judgment will probably be right, but your reasons will certainly be wrong.”

Traditionally, judges in common law countries have been reluctant to recognise a clear duty on judicial or administrative bodies to give reasons. However, the doctrine of fairness in English common law dictated that one must act fairly towards those affected by one's decision. In recent times, this doctrine began manifesting itself as a requirement on the part of an administrative or judicial decision-maker to give reasons. In R v Secretary of State, the House of Lords concluded that in certain circumstances a specific duty to give reasons is implied. The judges tailored the rule narrowly, in line with the fairness doctrine, based on whether the failure to provide reasons was fair. Initially, the law required that the nature of the process itself call for reasons. In the Institute of Dental Surgery case, a further criterion was added whereby peculiar or aberrant decisions would call for reasons to be given.

Over time, two sets of justifications have emerged in case law for imposing on administrators a duty to give reasons. The first set is instrumental in nature: these justifications contribute to other established objectives. One such objective is the ‘accuracy rationale’: a public body will make more accurate decisions when required to think about, and set down reasons for, its decisions in a systematic manner. Another is the ‘review rationale’: courts and organs of review often recognize that an unreasoned decision is very difficult to review. A ‘public confidence rationale’ also features in justifications for the duty to give reasons. The provision of reasons by public authorities is essential for demonstrating that laws are being applied consistently and carefully, an extension of the legal principle that justice must not only be done but also be seen to be done.

Aside from the instrumental justifications, the duty to give reasons also arises from its intrinsic basis in principles of fairness, which are central to administrative accountability. Here the focus is not on what the provision of reasons might help to achieve but on treating the subject of the decision with the appropriate respect for their personhood. Individuals need to understand why decisions have been made about them, particularly when those decisions have been made by the state sitting in a privileged position over the individual. Despite the hesitation of judges to impose a duty in administrative law to give reasons for decisions, as stated earlier, ‘there is a strong case to be made for the giving of reasons as an essential element of administrative justice’. [3]

As Frank Pasquale argues, explainability is important because reason-giving is intrinsic to the judicial process and cannot be jettisoned on account of algorithmic processing. The same principles apply equally to all administrative bodies, as it is a well-settled principle of administrative law that all decisions must be arrived at after a thorough application of mind. Much like those of a court of law, these decisions must be accompanied by reasons to qualify as a “speaking order”. Where an administrative decision is informed by an algorithmic process too opaque to permit this, the next logical question is whether a system can be built in such a way that it flags relevant information for independent human assessment, so that the machine’s inferences can be verified. Only then will the requirements of what we call a speaking order be in any position to be satisfied.
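As a concrete illustration, such flagging could take the shape of a structured record that accompanies every automated decision and is surfaced to a human reviewer. The sketch below is a minimal, hypothetical example in Python; the DecisionRecord structure, its fields, and the sample values are assumptions made for illustration, not any agency's actual implementation or an established standard.

```python
# A minimal, hypothetical sketch of a "speaking order" record flagged for
# independent human assessment. The structure and field names are
# illustrative assumptions, not an established standard.
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    case_id: str
    outcome: str                           # the decision the system produced
    model_description: str                 # the nature of the model in use
    input_features: dict[str, float]       # the input data the model saw
    top_factors: list[tuple[str, float]]   # likely factors and their weights

    def reasons(self) -> str:
        """Render the record as human-readable reasons for a reviewer."""
        lines = [
            f"Case {self.case_id}: outcome {self.outcome!r}",
            f"Model: {self.model_description}",
            "Factors considered (weight):",
        ]
        lines += [f"  {name}: {weight:+.3f}" for name, weight in self.top_factors]
        return "\n".join(lines)

# Invented example values, purely for illustration.
record = DecisionRecord(
    case_id="2021-0042",
    outcome="benefit denied",
    model_description="gradient-boosted trees (opaque; explained post hoc)",
    input_features={"income": 0.62, "dependents": 1.0, "tenure": 0.30},
    top_factors=[("income", +0.41), ("tenure", -0.12)],
)
print(record.reasons())
```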

If we consider the duty to give reasons as informing the threshold for transparency, there may still be diverse ways in which it can be implemented. Where algorithmic systems use transparent model techniques such as linear regression models, decision trees, k-nearest neighbor models, rule-based learners, generalized additive models, and Bayesian learners, it is possible to create ante-hoc explanations even before the deployment of these systems. This would be a prime example of where the trigger for the creation of transparency documentation can be designed, through contractual obligations between the public body and the contractor, to precede deployment or implementation.
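For instance, a transparent model's complete decision logic can be exported and filed as documentation before the system goes live. Here is a brief sketch using scikit-learn; the benefit-eligibility features, data, and underlying rule are invented purely for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical benefit-eligibility data; the feature names and the
# eligibility rule are invented for illustration only.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(500, 3))             # income, dependents, tenure (scaled)
y = ((X[:, 0] < 0.4) & (X[:, 1] > 0.3)).astype(int)  # a simple underlying rule

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Because the model is fully transparent, its complete decision logic can
# be rendered as rules and attached to the transparency documentation
# before the system is ever deployed.
print(export_text(model, feature_names=["income", "dependents", "tenure"]))
```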

Conversely, for models which lend themselves better to post-hoc explainability, the trigger can be a demand for the algorithmic logic, or for justifications of specific decisions, such as local or example-based explanations.
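An example-based explanation, for instance, might be generated on demand by retrieving the past cases most similar to the one under challenge, so that a reviewer can compare the disputed decision against concrete precedents. The sketch below assumes access to the historical case data; the cases and outcomes are invented placeholders.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Sketch of an example-based, post-hoc explanation: for a challenged
# decision, retrieve the most similar past cases and their outcomes.
# The case data here are invented placeholders.
X_train = np.array([[0.20, 0.80], [0.90, 0.10], [0.25, 0.75], [0.80, 0.30]])
outcomes = ["granted", "denied", "granted", "denied"]

index = NearestNeighbors(n_neighbors=2).fit(X_train)

def explain_by_example(x):
    """Return the indices and outcomes of the most similar past cases."""
    _, neighbours = index.kneighbors([x])
    return [(int(i), outcomes[i]) for i in neighbours[0]]

print(explain_by_example([0.22, 0.78]))  # -> [(0, 'granted'), (2, 'granted')]
```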

Relevancy as a criterion

Several administrative law cases focus on the relevancy of the considerations on which public bodies base the exercise of administrative power. The principles of ultra vires apply in both situations: where a public authority has acted upon irrelevant considerations and where it has failed to consider relevant ones. Overall, the courts allow public bodies a significant degree of discretion to consider a range of legitimate factors in their decision-making and so may be reluctant to uphold a challenge in administrative law to the use of predictors. However, even with this high degree of latitude, the predictors must be discernible and must satisfy a degree of causality between the predictors and the decisions.

If an administrative body's decision was influenced by irrelevant factors, it is subject to judicial review. If, in exercising its discretion over a public duty, the body considers factors that the courts deem improper, it has not exercised its discretion legally in the eyes of the law. In the UK, in R. v. Secretary of State for the Home Department, the Home Secretary took public opinion into account when deciding on a 15-year tariff for two boys detained at Her Majesty's pleasure for murdering James Bulger, a two-year-old child, when they were both ten years old. It was held that the public petitions under consideration were worthless and incapable of informing the Home Secretary of the true state of public opinion on the tariff. As a result, the reliance on the public petitions was an irrelevant consideration that justified the overturning of the Home Secretary's decision.

There are two types of considerations relevant to a public authority's decision: mandatory relevant considerations (those that the statute empowering the authority expressly or implicitly identifies as those that must be considered) and discretionary relevant considerations (those which the authority may consider if it regards them as appropriate).

When determining whether a decision-maker failed to consider mandatory relevant considerations, courts typically look at whether those considerations were taken into account at all. Once the decision-maker has considered the relevant factors, however, the courts are hesitant to scrutinise the way those factors were balanced. Lord Hoffmann elucidates the “distinction between whether something is a material consideration and how much weight it should be given. The former is a legal issue, while the latter is a matter of planning judgement, which is entirely within the purview of the planning authority.”

It follows, therefore, that the duty to give reasons does not end at the mere recording of reasons. The reasons, as legal precedent has dictated, must be intelligible and adequate to enable “the reader to understand why the matter was decided as it was and what conclusions were reached on the ‘principal important controversial issues.’” Further, the “reasoning must not give rise to a substantial doubt as to whether the decision-maker erred in law, for example by misunderstanding some relevant policy or some other important matter or by failing to reach a rational decision on relevant grounds.” For an administrative decision delegated wholly or partly to an algorithmic system, there is a legal mandate to clearly demonstrate the considerations on the basis of which the decision was taken. A human-in-the-loop supervisor should be able to ascertain that those considerations are relevant.

This requires two sets of factors to be accounted for in the decision matrix. The first is the availability of the relevant considerations to a human agent in a form that they can comprehend. Test set accuracy, for instance, could be used to evaluate the considerations at play, but it can mislead. Guestrin et al. take a pragmatic approach, setting out a general scheme to provide, for any decision by a learning algorithm, a locally linear and interpretable approximation to that answer or decision. The specific dataset they examined contained features that did not generalise, despite the model's high accuracy on validation data; the high accuracy was therefore not a true indicator of its performance outside the test setting, in the real world. The second requirement is the presence of a human actor with the prior knowledge and domain expertise to identify considerations irrelevant to the task at hand. In cases such as the above, where individual prediction explanations are possible, the human-in-the-loop will be able to determine whether a decision was based on arbitrary or irrelevant considerations and thus needs to be rejected. [4]
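A simplified sketch of that locally linear scheme, assuming a generic black-box scoring function, might look as follows. This illustrates the idea rather than the authors' actual implementation, which handles sampling and feature representation far more carefully; the stand-in model and instance are invented for the example.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Simplified sketch of a locally linear approximation in the spirit of
# Guestrin et al.'s scheme: perturb the instance, query the black-box
# model, weight samples by proximity, and fit an interpretable linear
# surrogate whose coefficients explain the individual decision.
def local_linear_explanation(black_box, x, n_samples=1000, scale=0.1, seed=0):
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))  # local perturbations
    y = black_box(Z)                                              # opaque model's outputs
    # Weight perturbed samples by their proximity to the original instance.
    weights = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale**2))
    surrogate = Ridge(alpha=1e-3).fit(Z, y, sample_weight=weights)
    return surrogate.coef_  # per-feature contributions near x

# Stand-in for an opaque model; in reality this would be the deployed system.
opaque_model = lambda Z: 0.9 * Z[:, 0] + 0.1 * Z[:, 1]
print(local_linear_explanation(opaque_model, np.array([0.5, 0.5])))
# Coefficients close to [0.9, 0.1] reveal which features drove the decision.
```

A reviewer comparing such per-decision coefficients against the legally mandatory considerations could then spot when an irrelevant or arbitrary factor is driving an individual decision.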

Recommendations

The above principles of administrative law require that, when algorithmic systems are involved in decision-making for the discharge of public functions, they be designed to satisfy the following ends.

  • The delegation of administrative discretion to algorithmic systems must be predicated on their ability to meet clearly defined transparency goals.
  • The transparency goals can be defined in terms of the algorithmic system’s duty to provide reasons.
  • Where the technical nature of the algorithmic system poses fundamental interpretability challenges, it needs to be designed to flag sufficient information for independent human assessment to verify the machine’s inferences. Sufficient information may include the input data, the nature of the model in use, and the likely factors which informed the decision.
  • The independent assessment of the algorithmic decision must ensure that the considerations on which it is based are relevant.

[1] By public functions, I mean the exercise of power by public bodies as well as public functions that have been privatised.

[2] Oswald sets forth a clear and concise anatomy of the particulars of the duty to give reasons, and its relevance for algorithmic decisions.

[3] Most of the common law cases referenced here are from the UK; however, they have parallels in several other common law countries, or hold good as precedents with strong persuasive value.

[4] The wisdom of the 'human in the loop' as a regulatory requirement raises serious questions about the viability of business models intent on using AI to 'automate' the workflow.