Call for legal AI to have “ethical black box” to explain decisions


AI: sociologists should be involved

Artificial intelligence (AI) should be developed in conjunction with a wide range of non-technical specialists, while an ‘ethical black box’ showing how a system made particular decisions may also be needed, according to an academic.

He argued that if legal use of AI reflected the worldview of scientists and engineers alone, leaving out “social and cultural” perspectives, it could damage public trust in the law.

Siddarth Peter de Souza, a PhD student at the law faculty of Humboldt University in Berlin, writing in the Journal of the Oxford Centre for Socio-Legal Studies, warned that, without transparency about the assumptions that have gone into an AI system’s construction, there was a danger that human prejudices could be buried within it, threatening justice.

He pointed out that AI was now being used in the fields of legal research, document review, e-discovery, and predictive analysis.

Each of the platforms was “designed to improve accuracy in legal research, reduce uncertainty and risks in terms of strategic decisions and save time and costs by enabling lawyers to spend more time on strategic tasks”.

He referred, for example, to a 2016 exposé of an algorithm used by US judges to predict the likelihood of a criminal defendant reoffending, which found that black defendants were rated as higher risk than they actually were, while white defendants were rated as lower risk than they actually were.

A key problem was the sheer complexity of the AI products in use – such that even their creators found them difficult to understand. This made it vital, he argued, that a method was found for the systems to explain transparently how they arrived at a particular conclusion.
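The paper does not set out how such an explanation might work in practice, but a minimal sketch gives the flavour: a risk model simple enough that its score can be decomposed into named, human-readable contributions. The feature names and weights below are invented purely for illustration and are not drawn from any real product.

    # A minimal sketch of a transparently explainable risk score.
    # RISK_WEIGHTS and the feature names are hypothetical examples.

    RISK_WEIGHTS = {
        "prior_convictions": 0.30,
        "age_at_first_offence": -0.02,
        "employment_status": -0.15,
    }

    def score_with_explanation(features: dict) -> tuple[float, list[str]]:
        """Return a risk score plus a line-by-line account of how it was reached."""
        contributions = {
            name: RISK_WEIGHTS[name] * value for name, value in features.items()
        }
        score = sum(contributions.values())
        explanation = [
            f"{name}={value} contributed {contributions[name]:+.2f} to the score"
            for name, value in features.items()
        ]
        return score, explanation

    score, explanation = score_with_explanation(
        {"prior_convictions": 2, "age_at_first_offence": 19, "employment_status": 1}
    )
    print(f"risk score: {score:.2f}")
    for line in explanation:
        print(" ", line)

A system built this way can always answer the question of how it arrived at a conclusion; the difficulty with modern AI products is precisely that their internal workings resist this kind of decomposition.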

One solution was “introducing sociological insights”. He added: “An argument can be made that by diversifying the pool of developers to include other disciplines, such as sociologists, designers, historians, and psychologists, a multiplicity of views will be brought to the table…

“Introducing a plurality of views would ensure a more balanced outlook on the use, development and management of data and methods that are being used to build the AI-driven legal products.”

Another possibility was to build an “ethical black box” into AI systems to “establish a process for discovering how and why a robot acted in a particular way, similar to the way in which a flight data recorder tracks and transmits internal data…

“Robots will be making decisions that often require a moral compass, and introducing such a framework would allow for accountability and transparency in their functioning, in addition to public trust in their processes.”
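The article stops short of implementation detail, but the flight-recorder analogy suggests something like an append-only decision log. The sketch below is illustrative only, written under that assumption; the class name, log fields and the “bail-risk-0.1” model label are all hypothetical.

    import json
    import time

    class EthicalBlackBox:
        """Append-only decision recorder, analogous to a flight data recorder:
        each decision is logged with its inputs, output and model version so
        that how and why the system acted can be reconstructed afterwards."""

        def __init__(self, path: str = "decisions.log"):
            self.path = path

        def record(self, model_version: str, inputs: dict, output, rationale: str):
            entry = {
                "timestamp": time.time(),
                "model_version": model_version,
                "inputs": inputs,
                "output": output,
                "rationale": rationale,
            }
            # One JSON object per line; the file is only ever appended to,
            # never rewritten, so the record cannot be silently altered.
            with open(self.path, "a") as log:
                log.write(json.dumps(entry) + "\n")

    box = EthicalBlackBox()
    box.record(
        model_version="bail-risk-0.1",  # hypothetical model label
        inputs={"prior_convictions": 2},
        output="low risk",
        rationale="score below release threshold",
    )

Because the log is append-only, a court or regulator could later replay exactly what the system knew and decided at each step – the accountability and transparency de Souza describes.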

Mr de Souza concluded: “The framework and algorithms that go into designing the processes and technologies of AI products [must] adopt elements of social, ethical and moral reasoning, because the implications of the decisions of many of these products are entering into spheres that consist of assessment, appraisal and judgement, with profound implications for humans…

“Addressing the social will allow for a more holistic consideration of the increasingly critical functions performed by technologies in the legal domain…

“Unpacking the ‘black box’ of these technologies can make them more trustworthy, understandable, and accountable.”
