Explainable Artificial Intelligence and what it means for the legal world

The Defense Advanced Research Projects Agency (DARPA) has recently launched a new initiative "looking to support A.I. projects that will make it clear to the end user why something happened."

The point of this effort, known as Explainable AI (XAI), is, according to DARPA's official announcement,

to create a suite of new or modified machine learning techniques that produce explainable models that, when combined with effective explanation techniques, enable end users to understand, appropriately trust, and effectively manage the emerging generation of AI systems.

I had not heard of "Explainable AI" before, but the concept is sound and could be of immense significance to the application of artificial intelligence to the legal system.

One of the major obstacles to widespread adoption of artificial intelligence in the legal system is lack of transparency. Many (most?) legal decisions are made in a low-trust environment.

Take the case of predictive coding in an employment lawsuit.

Defense attorney: "We have to review 10,000,000,000 emails and that will cost 12 billion skillion dollars. Let's use an algorithm to find the relevant ones."

Plaintiff attorney: "That's your problem that it costs so much. How do we know we can trust your fancy algorithm? In fact, we most definitely do NOT trust your fancy algorithm. In particular, we are worried that if it later turns out that relevant documents were not produced, you are just going to dodge responsibility and say, 'Hey, sorry, well, you know, algorithms, what are you gonna do?'"

Now, there are solutions to this problem, but there is still a fundamental issue: if a human attorney reviews documents and doesn't turn them over, that attorney will have to justify the decision or face serious sanctions. But what happens if the algorithm doesn't turn over the documents? Who is accountable? This has come to be known as the "black box" problem in ediscovery. So far there have been several efforts (mostly by advocates of predictive coding, and especially by predictive coding vendors) to address this problem, but full-on explainable AI could go much further toward solving it.
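To make the idea concrete, here is a minimal sketch (my own illustration, not DARPA's XAI work or any vendor's product) of what an "explainable" relevance call might look like in document review: a simple linear classifier trained on a handful of made-up emails, where each prediction comes with the terms that pushed it toward "relevant" or "not relevant." The training examples, labels, and the `explain` helper are all hypothetical placeholders.

```python
# A toy sketch of an explainable relevance call in document review.
# Not a real predictive coding product; the data and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
import numpy as np

# Toy "reviewed" emails with human relevance labels (1 = relevant, 0 = not relevant).
train_docs = [
    "termination meeting scheduled to discuss the complaint",
    "performance review notes and disciplinary warning attached",
    "lunch order for friday please reply with your choice",
    "fantasy football league standings after week three",
]
train_labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(train_docs)
model = LogisticRegression().fit(X, train_labels)

def explain(doc, top_n=3):
    """Return the predicted label plus the terms that contributed most to it."""
    x = vectorizer.transform([doc])
    label = int(model.predict(x)[0])
    # Each term's contribution = its tf-idf weight times the model coefficient.
    contributions = x.toarray()[0] * model.coef_[0]
    # For a "relevant" call, show the strongest positive contributors;
    # for "not relevant", show the strongest negative ones.
    order = np.argsort(contributions)[::-1] if label == 1 else np.argsort(contributions)
    terms = np.array(vectorizer.get_feature_names_out())[order[:top_n]]
    return label, list(terms)

label, reasons = explain("warning issued after the disciplinary meeting")
print("relevant" if label else "not relevant", "because of terms:", reasons)
```

The point is only that a call like "this email was withheld as not relevant" can arrive with a human-readable reason attached, which is the kind of record a judge or opposing counsel could actually interrogate.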

I’m eager to see how this develops.


H/T Inverse.com


AI researcher Luke Muehlhauser on past forecasts about progress in Artificial Intelligence

Lots of people talk about artificial intelligence as if it is "way post Facebook" and even post Snapchat, i.e., something that got invented about five years ago, around the time Watson won on Jeopardy!

But of course that is crazy; in fact, artificial intelligence has been going through boom-and-bust cycles since the 1950s. The first cycle took off during the Cold War, when the U.S. government was eager to translate piles and piles of Russian documents and someone got the idea to use computers to do the translating. The optimistic forecasters figured they just needed a year, or two, or five, to get this "machine translation" problem squared away. Results did not live up to the optimistic forecasts, and stories (probably apocryphal) came out that, when asked to translate the phrase "out of sight, out of mind," a machine offered up "invisible lunatic" (or maybe it was "blind idiot," but you get the point). After spending $20 million (back when that was real money), the National Research Council called it quits and yanked the funding.

Now, as they say, "past performance [or lack thereof] is no guarantee of future results," and it's hard to know what to make of these earlier predictions. Maybe they weren't wrong, but just premature.

Luke Muehlhauser has written an extremely useful summary of prior predictions, both optimistic and skeptical, "What should we learn from past AI forecasts?"

  • The peak of AI hype seems to have been from 1956-1973. Still, the hype implied by some of the best-known AI predictions from this period is commonly exaggerated.
  • After ~1973, few experts seemed to discuss HLMI [aka “Human Level Machine Intelligence” – ed.] (or something similar) as a medium-term possibility, in part because many experts learned from the failure of the field’s earlier excessive optimism.
  • The second major period of AI hype, in the early 1980s, seems to have been more about the possibility of commercially useful, narrow-purpose “expert systems,” not about HLMI (or something similar).

The piece resists easy summary, in part because, unlike many people writing on AI, he is careful to lay out the facts and not overstate his claims.

But if you care about AI in general, and specifically AI as it applies to the legal world, you have to read this.