Embrace Artificial Intelligence While Embracing Our Humanity

From the recent commencement speech at Columbia University Engineering School, “An Engineer’s Guide to the Artificial Intelligence Galaxy”:

Kai-Fu Lee, Founder & CEO of Sinovation Ventures and President of the Sinovation Ventures Artificial Intelligence Institute, suggests that what separates humans from AI is the capacity to love:

And in the future, even if an AI diagnostic tool is 10 times more accurate than doctors, patients will not want a cold pronouncement from the tool: “you have 4th stage lymphoma and a 70% likelihood of dying within 5 years.”  Patients will want a “doctor of love” who listens to our complaints, gives us encouragement, like “Kai-Fu had the same lymphoma, and he survived, so you can too”, and perhaps visits us at home, and is always available to talk to us. This kind of “doctor of love” will not only make us feel better, and have greater confidence, but a placebo effect will kick in and increase our likelihood of recuperation.

He does not specifically mention the legal realm, but it strikes me that the same argument applies strongly to the use of AI in law. Certainly there are lawyers whose main differentiator is technical skill. But for most, the ability to show empathy for clients, and to explain legal issues with empathy, is crucially important.

I wonder whether the role of the lawyer will come to be seen as that of someone who translates between an artificial intelligence agent and the client.
Can artificial intelligence transform trademark law?

The main reason I write the daily observations is because I want to know where I’m wrong. — Ray Dalio

This is the first in a series of posts where I am trying to follow Ray Dalio’s example and write about where I think algorithms can be of most use to the legal system in the next 2 to 5 years. Many of the recent articles on how AI is going to “transform” the law have played it safe and focused their predictions on how things will look many decades from now. I want to talk about how better algorithms can help now, or at least, soon.

This is, inevitably, a work in progress as I try to develop the right questions to ask. There does not seem to be any doubt that a whole collection of technologies under the heading of “Artificial Intelligence and Machine Learning” are going to transform EVERY aspect of life, and it seems to me literally impossible that the legal system will be exempt from the coming changes. But to date, lots of people writing about “the coming transformation” seem to want to jump past the “how are we going to get there?” part.

In any case, I’m going to start — somewhat arbitrarily — with trademark law. And I propose that, when we try to think about how algorithms might change the practice of trademark law, we ask some basic “opportunity assessment” questions. For this I lean on the approach developed by the folks at Pragmatic Marketing:

Step 1: what are the problems with trademarks that the CURRENT legal system solves?

Step 2: how pervasive are these problems? How urgent are they? Are people willing to pay to solve them?

Step 3: how do people solve these problems now? And what are the benefits and costs of a different approach, specifically an approach that incorporates machine learning?

What are the problems that a trademark lawyer helps a client solve?

My initial hypothesis — and here I’m hoping my trademark lawyer readers are going to jump in and correct what I got wrong — is that a lot of the heavy lifting in trademark law comes down, essentially (and as we’ll see below, I’m oversimplifying), to figuring out whether one trademark is so similar to another that it INFRINGES on the other mark, i.e., that consumers will be confused. And a lot of what a good trademark attorney does for you, as a business owner, comes down to ASSESSING whether a COURT will rule that one trademark is close enough to another that it is said to “infringe” on the other mark.

Say, for example, that a small coffee roaster in New Hampshire sells a dark roast called “Charbucks”. Does that name remind you of any other brand you might have heard of? Is it confusingly similar to any other coffee brand? Not surprisingly, Starbucks thought so, and they sued. But ultimately the court held that there was no infringement.

This case is a good illustration of a few points about trademark law. First, the issue is not simply whether the name is similar to another name, or at least that’s not the whole issue. The legal question is whether your average consumer would be confused. Would the consumer think, “hmmm, hey, how about that ‘Charbucks’? I bet that is made by Starbucks, and I like Starbucks, so I’m going to buy Charbucks”?

Trademark basics and understanding the moron in a hurry

Quick reminder from Trademark 101 (which I got a B in, so you should only be about 85% confident that I have this right): a trademark is NOT just a name, but a name (or a logo or other mark) associated with a particular BUSINESS, well, technically with a particular good or service, or set of goods and services. The somewhat shopworn illustration of this principle is that both Delta Airlines and Delta Faucets use the “Delta” mark, but there is no confusion because one sells faucets and the other operates an airline. (By the way, I looked for, but could not find, any examples of Delta Airlines and Delta Faucets sending angry letters to each other. Note that Delta Airlines wound up with the delta.com domain name. I still wonder if they sometimes get each other’s mail by mistake….)

But what about the case of Apple? You’ve heard of Apple, right? It was founded in 1968. By a band called the Beatles. Then, in 1976, somebody in California started another company, called Apple Computer. The Beatles were not pleased, and they sued. The parties reached a settlement because, after all, computers and music were totally different businesses, so there was no chance of confusion.

So what counts as confusion? Well, of course, that’s the $64,000 question. The court cases are themselves confusing. My favorite version of the “what counts as confusion” test is the “moron in a hurry” test (https://en.wikipedia.org/wiki/A_moron_in_a_hurry), often attributed to Lord Denning but actually coined by Mr Justice Foster. After a British publisher launched a new paper called the Daily Star, the publishers of the British Communist Party’s paper, the Morning Star, sued, claiming that the name Daily Star was confusingly similar to theirs. The judge threw the case out, explaining that, if you put the two papers side by side, “only a moron in a hurry” would be confused.

The courts in the U.S. never adopted the “moron in a hurry” test (more’s the pity) but have instead created several versions of a “multifactor test” for determining infringement. In fact, federal courts in different regions have adopted slightly (or maybe more than slightly) different versions of the test (when federal courts in different regions apply different standards, this is referred to as a circuit split).

In a very helpful paper, An Empirical Study of the Multifactor Tests for Trademark Infringement, Barton Beebe studied several hundred trademark decisions and concluded, basically, that the courts were kind of all over the map.
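To make the idea of a multifactor test concrete, here is a hypothetical sketch of how one might be encoded. To be clear, this is my own illustration and not anything a court actually does: the factor names are paraphrased from the Second Circuit’s well-known Polaroid factors, but the weights and the scoring scheme are pure invention, and Beebe’s finding is precisely that courts do not weigh the factors this mechanically.

```python
# Hypothetical encoding of a multifactor likelihood-of-confusion test.
# Factor names paraphrased from the Second Circuit's Polaroid factors;
# the weights are INVENTED for illustration only.
FACTOR_WEIGHTS = {
    "similarity_of_marks": 0.30,
    "proximity_of_goods": 0.20,
    "actual_confusion": 0.20,
    "defendant_intent": 0.15,
    "strength_of_mark": 0.10,
    "consumer_sophistication": 0.05,
}

def confusion_score(findings):
    """Naive weighted sum. Each finding runs from 0.0 (favors the
    defendant) to 1.0 (favors the plaintiff); missing factors are
    treated as 0.0."""
    return sum(w * findings.get(f, 0.0) for f, w in FACTOR_WEIGHTS.items())

# Charbucks-flavored facts: similar-sounding marks, same product
# category, but little evidence that anyone was actually confused.
print(round(confusion_score({
    "similarity_of_marks": 0.8,
    "proximity_of_goods": 0.9,
    "actual_confusion": 0.1,
}), 2))  # -> 0.44
```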

So, where does that leave business owners seeking help with trademark law issues? My hypothesis, and again it is just a hypothesis, is that there are two market segments: owners of EXISTING marks, e.g. Starbucks or Mattel, and businesses that are CONSIDERING adding NEW marks. The hypothesis I want to explore in the next post concerns two problems: (1) how to tell, out of the millions of trademarks out there, which ones are relevant, i.e. POTENTIALLY conflicting with your trademark (a toy sketch of this follows below), and (2) for a given conflict, how to assess the strength of your case against the other party.
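Here is that deliberately naive sketch of problem (1): screening a register for lexically similar marks. Everything in it (the mini register, the threshold, the choice of plain string similarity) is an assumption of mine for illustration; real clearance searches also compare sound, appearance, meaning, and the overlapping classes of goods and services, which is exactly where better algorithms could earn their keep.

```python
# Toy screening pass for problem (1): which existing marks are close
# enough to a proposed mark to deserve a lawyer's attention? Plain
# string similarity is a crude stand-in for real similarity analysis.
from difflib import SequenceMatcher

# Hypothetical mini register; the real one holds millions of marks.
REGISTER = ["STARBUCKS", "CHARBUCKS", "DELTA", "MATTEL", "APPLE"]

def lexical_similarity(a, b):
    """Rough 0.0-to-1.0 similarity between two mark strings."""
    return SequenceMatcher(None, a.upper(), b.upper()).ratio()

def screen(proposed, register, threshold=0.6):
    """Return (mark, score) pairs above the threshold, best first."""
    scored = [(m, lexical_similarity(proposed, m)) for m in register]
    hits = [pair for pair in scored if pair[1] >= threshold]
    return sorted(hits, key=lambda pair: pair[1], reverse=True)

# STARBUCKS scores highest, CHARBUCKS also clears the bar, and the
# clearly unrelated marks are filtered out.
print(screen("STARBRICKS", REGISTER))
```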

More on this next time.

Explainable Artificial Intelligence and what it means for the legal world

The Defense Advanced Research Projects Agency, or DARPA, has recently launched a new initiative “looking to support A.I. projects that will make it clear to the end user why something happened.”

The point of this effort, known as Explainable AI (aka XAI), according to DARPA’s official announcement, is

to create a suite of new or modified machine learning techniques that produce explainable models that, when combined with effective explanation techniques, enable end users to understand, appropriately trust, and effectively manage the emerging generation of AI systems.

I had not heard of “Explainable AI” before but the concept is sound and could be of immense significance to the application of artificial intelligence to the legal system.

One of the major obstacles to the widespread adoption of artificial intelligence in the legal system is lack of transparency. Many (most?) legal decisions are made in a low-trust environment.

Take the case of predictive coding in an employment lawsuit.

Defense attorney: “we have to review 10,000,000,000 emails and that will cost 12 billion skillion dollars. Let’s use an algorithm to find the relevant ones”

Plaintiff attorney: “that’s your problem that it costs so much. How do we know we can trust your fancy algorithm? In fact, we most definitely do NOT trust your fancy algorithm. In particular, we are worried that if later it turns out that relevant documents were not produced you are just going to dodge responsibility and say ‘Hey sorry, well, you know, algorithms, what are you gonna do?'”

Now, there are solutions to this problem, but there is still a fundamental issue: if a human attorney reviews documents and doesn’t turn them over, the human attorney is going to have to justify that decision or face serious sanctions. But what happens if the algorithm doesn’t turn over the documents? Who is accountable? This has come to be known as the “black box” problem in ediscovery. So far there have been several efforts (mostly by advocates of predictive coding, and especially predictive coding vendors) to address this problem, but full-on explainable AI could go much further toward solving it.
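To give a flavor of what “explainable” could mean in the predictive coding setting, here is a minimal sketch, and only a sketch: the corpus, labels, and choice of model are invented assumptions of mine, and production ediscovery systems are far more elaborate. The point is just that an interpretable model lets you trace a relevance call back to the terms that drove it, which is the kind of account a skeptical opposing counsel might demand.

```python
# Minimal sketch: an interpretable relevance classifier whose calls
# can be traced to specific terms. The corpus and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

emails = [
    "re: termination meeting for the plaintiff next week",
    "lunch orders for friday team outing",
    "performance review notes attached, hr eyes only",
    "fantasy football league standings update",
]
labels = [1, 0, 1, 0]  # 1 = reviewer marked the email relevant

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(emails)
model = LogisticRegression().fit(X, labels)

# The "explanation": which terms push a document toward "relevant"?
terms = vectorizer.get_feature_names_out()
weights = model.coef_[0]
top = sorted(zip(terms, weights), key=lambda tw: tw[1], reverse=True)[:5]
for term, weight in top:
    print(f"{term}: {weight:+.3f}")
```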

I’m eager to see how this develops.

H/T Inverse.com

AI researcher Luke Muehlhauser on past forecasts about progress in Artificial Intelligence

Lots of people talk about artificial intelligence as if it is “way post Facebook” and even post Snapchat, i.e. something that got invented around five years ago, around the time that Watson won on Jeopardy!

But of course that is crazy; in fact, artificial intelligence has been going through boom and bust cycles since the 1950s. The first cycle took off during the Cold War, when the U.S. government was eager to translate piles and piles of Russian documents and someone got the idea of using computers to do the translating. The optimistic forecasters figured they needed just a year, or two, or five, to get this “machine translation” problem squared away. Results did not live up to the optimistic forecasts, and — probably apocryphal — stories came out that, when asked to translate the phrase “out of sight, out of mind,” a machine offered up “invisible lunatic” (or maybe it was “blind idiot,” but you get the point). After spending 20 million dollars (back when that was real money), the National Research Council called it quits and yanked the funding.

Now, as they say “past performance [or lack thereof] is no guarantee of future results” and it’s hard to know what to make of these earlier predictions. Maybe they weren’t wrong, but just premature.

Luke Muehlhauser has written an extremely useful summary of prior predictions, both optimistic and skeptical, What should we learn from past AI forecasts? Among his conclusions:

  • The peak of AI hype seems to have been from 1956-1973. Still, the hype implied by some of the best-known AI predictions from this period is commonly exaggerated.
  • After ~1973, few experts seemed to discuss HLMI [aka “Human Level Machine Intelligence” – ed.] (or something similar) as a medium-term possibility, in part because many experts learned from the failure of the field’s earlier excessive optimism.
  • The second major period of AI hype, in the early 1980s, seems to have been more about the possibility of commercially useful, narrow-purpose “expert systems,” not about HLMI (or something similar).

The piece resists easy summary, in part because, unlike many people writing on AI, he is careful to lay out the facts and not overstate his claims.

But if you care about AI in general, and specifically AI as it applies to the legal world, you have to read this.

In-house counsel likely to be the first adopters of legal artificial intelligence, says Jordan Furlong

Jordan Furlong argues that corporate in-house legal departments will be the first ones in the legal world to really adopt artificial intelligence because in-house lawyers are not beholden to the billable hour:

Corporate law departments, unlike law firms, are financially structured to seek the most effective and productive outcomes with the least use of effort and resources. Cognitive reasoning technologies are designed to achieve exactly this kind of outcome, so the alignment of interests between AI and in-house is clear and substantial.

Read the whole interview and indeed, read anything Jordan Furlong has to say about the legal world.

By the way, I just learned about Artificial Lawyer, and am eager to dive in and read more about what they have to say.

“Artificial Intelligence” vs “Machine Learning”

The terms “Artificial Intelligence” and “Machine Learning” get thrown around a lot, but what is the difference? As Chris Nicholson explains on Quora, artificial intelligence is a broad term that covers lots of examples of a “computer doing smart things that everyone used to think only a human mind could do.” An example of this is “win at chess.” “Machine Learning” is a particular APPROACH to attempting to create artificial intelligence, namely giving the algorithm (aka the machine) data and allowing the computer to find patterns in the data.
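To make the distinction concrete, here is machine learning in miniature, with invented numbers: nobody writes an explicit rule; the algorithm is handed labeled examples and infers the pattern itself.

```python
# Machine learning in miniature: hand the algorithm labeled data and
# let it find the pattern. All of the numbers here are invented.
from sklearn.tree import DecisionTreeClassifier

# Each row describes an email: [word_count, exclamation_marks].
X = [[5, 3], [120, 0], [8, 5], [200, 1], [4, 4], [150, 0]]
y = [1, 0, 1, 0, 1, 0]  # 1 = spam, 0 = not spam

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# Nobody wrote an "if exclamation_marks > 2 then spam" rule; the tree
# inferred something like it from the six examples above.
print(model.predict([[6, 4]]))  # -> [1]
```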

How smart does a legal research tool have to be before we can call it a “lawyer”?

Two different friends forwarded to me the story Artificially Intelligent Lawyer “Ross” Has Been Hired By Its First Official Law Firm, recently published on the website futurism.com.

Per the story, the bankruptcy practice group at Baker & Hostetler, a nationally recognized firm with offices throughout the U.S., has announced that they will be “employing ‘Ross,’ the world’s first artificially intelligent attorney.”

I’ve heard interesting things about Ross and am eager to learn more about it, and in particular about how exactly the attorneys at Baker are going to be using it. A few observations, in no particular order.

First off, my hat is off to Andrew Arruda, CEO of Ross, and the whole Ross team for making this deal happen. Anyone who has sold (or tried to sell) anything to corporate law firms will tell you it is not an easy market, so congratulations to their team for putting this together.

Second, I’m curious to learn more about why the bankruptcy practice group in particular is where Ross is first being deployed. The, perhaps unfair, knock on corporate bankruptcy attorneys is that they are among the least innovative practice groups because they have the least incentive to innovate:

When it comes to estate-paid Chapter 11 fees, the professionals are pushing their bills across the table, but on the other side of the table, the client charged with evaluating the reasonableness of the bill may have no meaningful way to put the bill into context. Moreover, because no single client is charged with footing the professionals’ entire bill, it’s possible that none of the clients really cares how much these professionals are charging.

(Nancy B. Rapoport, Rethinking Professional Fees in Chapter 11 Cases, 2010)

Third, and I guess my most substantive point: I’d like to learn more about exactly how Ross is going to be “employed” by the department. In particular, a skeptic might wonder if it is really better described as a legal research tool, comparable to Lexis or Westlaw, rather than as “the world’s first artificially intelligent attorney.” As SoCalWingFan put it on Reddit: “This looks less like an AI lawyer and more like the product of LexisNexis having a baby with a better version of Siri, and that baby being fed legal research steroids.”

But then again, SoCalWingFan (if that IS your real name…) may be a little harsh. In Ross’s defense, I would very much like to learn more about how the system learns in response to feedback from attorneys. If it really does learn from attorney feedback, then that IS a big deal and a major step forward.

Fourth, on a more personal note, I just have to say that I feel like a real grown up blogger when readers send me stories. So, thanks Amy and Trish for making me feel like a real grown up blogger.