Explainable Artificial Intelligence and what it means for the legal world

The Defense Advanced Research Projects Agency (DARPA) has recently launched a new initiative “looking to support A.I. projects that will make it clear to the end user why something happened.”

The point of this effort, known as Explainable AI (XAI), is, according to DARPA’s official announcement:

“to create a suite of new or modified machine learning techniques that produce explainable models that, when combined with effective explanation techniques, enable end users to understand, appropriately trust, and effectively manage the emerging generation of AI systems.”

I had not heard of “Explainable AI” before, but the concept is sound and could be of immense significance to the application of artificial intelligence to the legal system.

One of the major obstacles to widespread adoption of artificial intelligence in the legal system is a lack of transparency. Many (most?) legal decisions are made in a low-trust environment.

Take the case of predictive coding in an employment lawsuit.

Defense attorney: “We have to review 10,000,000,000 emails and that will cost 12 billion skillion dollars. Let’s use an algorithm to find the relevant ones.”

Plaintiff attorney: “That’s your problem that it costs so much. How do we know we can trust your fancy algorithm? In fact, we most definitely do NOT trust your fancy algorithm. In particular, we are worried that if it later turns out that relevant documents were not produced, you are just going to dodge responsibility and say ‘Hey, sorry, well, you know, algorithms, what are you gonna do?'”

Now, there are solutions to this problem, but there is still a fundamental issue: if a human attorney reviews documents and doesn’t turn them over, the human attorney is going to have to justify that decision or face serious sanctions. But what happens if the algorithm doesn’t turn over the documents? Who is accountable? This has come to be known as the “black box” problem in ediscovery. So far there have been several efforts (mostly by advocates of predictive coding, and especially predictive coding vendors) to address this problem, but full-on explainable AI could go much further toward solving it.
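To make the objection concrete, here is a minimal, purely illustrative sketch of what a more transparent form of predictive coding could look like: a simple linear relevance classifier that reports which terms drove each call. The toy emails, labels, and the `explain` helper are all invented for illustration, and the pipeline assumes a scikit-learn-style TF-IDF plus logistic regression setup; this is not DARPA’s XAI program or any particular vendor’s method.

```python
# Illustrative sketch (hypothetical data): a predictive-coding classifier
# that can say *why* it scored an email as relevant, not just the score.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training set: a handful of emails hand-labeled for relevance
# to a hypothetical employment claim (1 = relevant, 0 = not).
emails = [
    "bonus payout schedule for terminated employees",
    "lunch order for the team offsite",
    "severance terms discussed with HR counsel",
    "parking garage access codes",
]
labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(emails)
model = LogisticRegression().fit(X, labels)

def explain(email: str, top_k: int = 3):
    """Return the predicted relevance plus the terms that pushed it there."""
    vec = vectorizer.transform([email])
    score = model.predict_proba(vec)[0, 1]
    # For a linear model, per-term contribution = tf-idf weight * coefficient.
    contributions = vec.toarray()[0] * model.coef_[0]
    top = np.argsort(contributions)[::-1][:top_k]
    terms = [(vectorizer.get_feature_names_out()[i], round(contributions[i], 3))
             for i in top]
    return score, terms

score, terms = explain("HR counsel reviewed the severance bonus")
print(f"relevance={score:.2f}, driven by: {terms}")
```

Even a sketch this small shows the shape of the answer an opposing party might demand: not just “the algorithm said so,” but which features pushed a given document over or under the relevance threshold.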

I’m eager to see how this develops.


H/T Inverse.com
