Why driverless cars, AI and the creation in a machine of human-like intelligence remains distant

Few ideas have enthused technologists as much as the self-driving car. Advances in machine learning, a subfield of artificial intelligence (AI), would enable cars to teach themselves to drive by drawing on reams of data from the real world. The more they drove, the more data they would collect, and the better they would become. Robotaxis summoned with the flick of an app would make car ownership obsolete. Best of all, reflexes operating at the speed of electronics would drastically improve safety. Car- and tech-industry bosses talked of a world of “zero crashes”. And the technology was just around the corner.

In 2015, the Tesla boss Elon Musk predicted that his cars would be capable of “complete autonomy” by 2017. Mr Musk is famous for missing his own deadlines. But he is not alone. General Motors said in 2018 that it would launch a fleet of cars without steering wheels or pedals in 2019; in June it changed its mind.

Waymo, the Alphabet subsidiary widely seen as the industry leader, committed itself to launching a driverless-taxi service in Phoenix, Arizona, where it has been testing its cars, at the end of 2018. The plan has been a damp squib. Only part of the city is covered; only approved users can take part.

Phoenix’s wide, sun-soaked streets are some of the easiest to drive on anywhere in the world; even so, Waymo’s cars have human safety drivers behind the wheel, just in case. Jim Hackett, the boss of Ford, acknowledges that the industry “overestimated the arrival of autonomous vehicles”.

Stuck in a jam

Tesla chief exec Elon Musk introduces the Tesla Model X crossover SUV in 2015 (Photo: Getty)

Chris Urmson, a linchpin in Alphabet’s self-driving efforts (he left in 2016), used to hope that his young son would never need a driving licence. Mr Urmson now talks of self-driving cars appearing gradually over the next 30 to 50 years.

Firms are increasingly switching to a more incremental approach, building on technologies such as lane-keeping or automatic parking. A string of fatalities involving self-driving cars has scotched the idea that a zero-crash world is anywhere close. Markets are starting to catch on. In September Morgan Stanley, a bank, cut its valuation of Waymo by 40 per cent, to $105bn (£84bn), citing delays in its technology. The future, in other words, is stuck in traffic. Partly that reflects the tech industry’s predilection for grandiose promises. But self-driving cars were also meant to be a flagship for the power of AI. Their struggles offer valuable lessons in the limits of the world’s trendiest technology.

One is that, for all the advances in machine learning, machines are still not very good at learning. Most humans need a few dozen hours to master driving. Waymo’s cars have had more than 10 million miles of practice, and still fall short.

And once humans have learned to drive, even on the easy streets of Phoenix, they can, with a little effort, apply that knowledge anywhere, rapidly adapting their skills to rush-hour Bangkok or a gravel track in rural Greece. Computers are less flexible.

Quick-fire learning

AI researchers have expended much brow-sweat searching for techniques to help them match the quick-fire learning displayed by humans. So far, they have not succeeded. Another lesson is that machine-learning systems are brittle. Learning solely from existing data means they struggle with situations that they have never seen before. Humans can use general knowledge and on-the-fly reasoning to react to things that are new to them – a light aircraft landing on a busy road, for instance, as happened in Washington state in August (thanks to humans’ cognitive flexibility, no one was hurt). Autonomous-car researchers call these unusual situations “edge cases”. Driving is full of them, though most are less dramatic.

Mishandled edge cases seem to have been a factor in at least some of the deaths caused by autonomous cars to date. The problem is so hard that some firms, particularly in China, think it may be easier to re-engineer entire cities to support limited self-driving than to build fully autonomous cars.

The most general point is that, like most technologies, what is currently called “AI” is both powerful and limited. Recent progress in machine learning has been transformative. At the same time, the eventual goal – the creation in a machine of a fluid, general, human-like intelligence – remains distant.

People need to separate the justified excitement from the opportunistic hyperbole. Few doubt that a completely autonomous car is possible in principle. But the consensus is, increasingly, that it is not imminent. Anyone counting on AI for business or pleasure could do worse than remember that cautionary tale.

© THE ECONOMIST 2019