The unregulated use of AI in the U.K. justice system is potentially creating miscarriages of justice, according to a new report from the House of Lords.
The report cites several examples of AI systems being used by the police, prison and probation services without “any thorough evaluation of their efficacy”.
“Proper trials methodology is fully embedded into medical science but there are no minimum scientific or ethical standards that an AI tool must meet before it can be used in the criminal justice sphere,” the Lords’ Justice and Home Affairs Committee found.
The committee further warns that “without sufficient safeguards, supervision, and caution, advanced technologies may have a chilling effect on a range of human rights, undermine the fairness of trials, weaken the rule of law, further exacerbate existing inequalities, and fail to produce the promised effectiveness and efficiency gains”.
Rating risk of reoffending
The report makes specific mention of questionable uses of AI in U.K. law enforcement. One example is the Harm Assessment Risk Tool (HART), which was used by Durham Constabulary.
The software used 34 pieces of data about individual offenders to predict each person's risk of reoffending, informing decisions about whether they were eligible for a rehabilitation programme.
Although the software reportedly outperformed human judgement, the report cites fears raised by researchers from the University of Essex that “any potential tendency to defer or over-rely on automated outputs over other available information has the ability to transform what is still considered to be a human-led decision to de facto an automated one”. The system was withdrawn in September 2020.
Similar fears were raised over an automated system used by the Home Office, known as the ‘Sham marriage algorithm’, in which AI was used to flag proposed weddings that might be conducted purely for immigration purposes.
“While the decision on the genuineness of the marriage is in the hands of an official, the Public Law Project were concerned that the human decision-maker may fall victim to ‘automation bias’, defined as ‘a well-established psychological phenomenon whereby people put too much trust in computers’,” the report stated.
The report makes a series of recommendations about the use of AI in the justice system. These include mandatory training for officers on the use of the tools, as well as the formation of a national body “to set strict scientific, validity, and quality standards and to certify new technological solutions against those standards”.
Without such structures in place, the report claims that the current “Wild West” approach of police forces implementing their own tools could lead to serious problems. “When deployed within the justice system, AI technologies have serious implications for a person’s human rights and civil liberties,” the report states. “At what point could someone be imprisoned on the basis of technology that cannot be explained?”