AI researcher Luke Muehlhauser on past forecasts about progress in Artificial Intelligence

Lots of people talk about artificial intelligence as if it is “way post Facebook” and even post Snapchat, i.e. something that was invented about five years ago, around the time Watson won on Jeopardy!

But of course that is crazy; in fact, artificial intelligence has been going through boom and bust cycles since the 1950s. The first cycle took off during the Cold War, when the U.S. government was eager to translate piles and piles of Russian documents and someone got the idea of using computers to do the translating. The optimistic forecasters figured they just needed a year, or two, or five, to get this “machine translation” problem squared away. Results did not live up to the optimistic forecasts, and stories (probably apocryphal) circulated that, when asked to translate the phrase “out of sight, out of mind,” a machine offered up “invisible lunatic” (or maybe it was “blind idiot,” but you get the point). After spending $20 million (back when that was real money), the National Research Council called it quits and yanked the funding.

Now, as they say, “past performance [or lack thereof] is no guarantee of future results,” so it’s hard to know what to make of these earlier predictions. Maybe they weren’t wrong, but just premature.

Luke Muehlhauser has written an extremely useful summary of prior predictions, both optimistic and skeptical: “What should we learn from past AI forecasts?”

  • The peak of AI hype seems to have been from 1956-1973. Still, the hype implied by some of the best-known AI predictions from this period is commonly exaggerated.
  • After ~1973, few experts seemed to discuss HLMI [aka “Human Level Machine Intelligence” – ed.] (or something similar) as a medium-term possibility, in part because many experts learned from the failure of the field’s earlier excessive optimism.
  • The second major period of AI hype, in the early 1980s, seems to have been more about the possibility of commercially useful, narrow-purpose “expert systems,” not about HLMI (or something similar).

The piece resists easy summary, in part because, unlike many people writing on AI, Muehlhauser is careful to lay out the facts and not overstate his claims.

But if you care about AI in general, and specifically AI as it applies to the legal world, you have to read this.

 
