An interview with HOD LIPSON, co-author with Melba Kurman of the book Driverless: Intelligent Cars and the Road Ahead.
Hod, you wrote in Driverless that today’s cars are brainless. What sort of thinking is important for automakers today to prepare for the time when cars begin to think for themselves?
We have a lot of computers already inside most vehicles. There are millions of lines of code. But the one thing cars don’t have yet is something humans take for granted: the ability to understand what’s around them, to understand if there’s a child or a fire hydrant on the side of the road in front of them. Something almost trivial for us human drivers. That’s been holding back the dream of driverless cars from coming to fruition. People have been dreaming of driverless cars since the ’30s. We have been able to drive cars from a computer for a long time. We’ve been able to control motion, to find the shortest path from A to B—with and without traffic. But it’s that one thing—to understand the difference between a pothole and a puddle or a bicycle and a motorcycle—that’s been holding cars back from reaching full autonomy. That is the last piece of the puzzle, and it’s being solved as we speak.
What would be your guess for how many more years it will take to solve that piece of the puzzle?
To a large extent, it has been solved. From a technology point of view, it’s almost a done deal. It’s a question now of training. These systems need to prove themselves and get better at doing what they do—to a point where humans will trust them blindly. That will take a little bit of time. It will also take some time for legislation to catch up, and for governments to be clear about how safe is “safe enough” for these cars to be out there—and to figure out a way of testing them and assuring consumers, and also automakers, that [autonomous cars are safe] before deployment. It’s difficult to predict how long that will take because it has a lot to do with the idiosyncrasies of lawmakers, with politics and things like that. But I would say it’s going to be about five years before we see the beginning of Level 5, fully autonomous vehicles that will work in some areas. Maybe not in the snow, maybe only on highways and so on. And it will take about 20 more years before half of the cars on the road are fully autonomous.
You wrote about AVs operating in environments that are other than typical roadways. What have the mining companies in Australia and the farmers in North America discovered using autonomous vehicles that will help speed up production of AVs on regular streets and highways?
Many autonomous vehicles have been used already in very carefully controlled areas like the agriculture and mining industries. Mostly, the benefits there were pretty clear in terms of cost savings, efficiency and safety. It saves people from doing jobs that are sometimes both dangerous and boring. But in terms of solving technology issues, again, everything about an autonomous vehicle is solved except for that last piece of the puzzle: detecting surprise things like a pedestrian or a motorcycle or a pothole. These kinds of things don’t exist as frequently in carefully controlled environments: You don’t have pedestrians walking as frequently in a mine; you don’t have surprise motorcycles in a cornfield. These surprise situations are what have prevented this technology from migrating from very successful implementations to a busy city intersection.
Is that a “corner” case? Roboticists talk about solving corner cases—unexpected rare events that take place 1% of the time—and that’s a problem, right? In your book, you give the example of a deer leaping onto the hood of the car. What are the top three corner case categories that still must be puzzled out?
Corner cases have plagued the AI world. For many years, people have tried to solve the AI problem, this challenge of perception, with rules. And rule-based AI is notoriously bad at dealing with corner cases, as you mentioned: rare events that can spell disaster. I remember we had this challenge in the DARPA Grand Challenge, a robotic race across the desert, where the Cornell vehicle couldn’t handle a situation where there was an overpass. That was a situation the programmers did not anticipate, and, “Bam!” The car stopped there. It couldn’t handle it.
You asked about the top three. The challenge is you cannot enumerate them. Nobody knows what they are. They are rare. They are very difficult to articulate. But for sure, it’s going to have to do with things like recognizing some peculiar, rare combination of issues. Like two kids running into the street from opposite directions, and a construction sign that diverts traffic, and the sun right on the horizon blinding some of the sensors. So some crazy combination of situations that are, in themselves, fairly rare. When you put them together it’s even more complex.
Humans have a hard time dealing with these things, as well. But these are the sorts of things that are so rare that they’re difficult to train an AI system on. This is where some of the new techniques in AI help.
I know researchers in machine vision have tried and failed to automate the art of perception. Now you write that a new type of AI software called “deep learning” has attained human-level accuracy in correctly classifying random objects in thousands of digital images. So we have “machine learning,” which, you note, improves as researchers feed the software tons of raw visual data collected each day by car-mounted cameras. How has this software attained this level of accuracy?
It’s a fascinating story on how deep learning, a type of machine learning that is a type of AI, has made so much progress in perception and classification of objects and recognizing objects on-the-fly over the past couple of years. Since 2012, we’ve seen exponential growth in this technology. This type of software is so good at understanding and perception that it far surpasses humans. It can see in the dark, in the rain. It can see not with two eyes but with 20 eyes. It has become very, very good at understanding.
What is even more exciting is instead of just training with the data that’s coming from research, increasingly we’re seeing that cars, as they drive themselves more and more, are also able to collect more data. And that data trains the next generation of AI. In other words, the AI is training itself. The AI gets better and better every cycle.
It’s very difficult for people to understand and appreciate that, unlike human drivers who at most have one lifetime of experience of driving, AVs learn from other cars. The more autonomous cars there are on the road, the better each one of them gets. This is very counterintuitive because we humans don’t follow that kind of path. But that’s another exponential accelerator.
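The fleet-learning loop described above, where every car’s data feeds one shared model that then improves every car, can be sketched as a toy simulation. Everything here is a hypothetical stand-in, not any automaker’s actual pipeline: the one-dimensional “perception” task, the `observe` and `train` helpers, and the threshold rule are all illustrative assumptions.

```python
import random

random.seed(0)

# Toy world: an "object" is a single feature in [0, 1]; it is a
# pedestrian if the feature exceeds a hidden threshold, else a pothole.
HIDDEN_THRESHOLD = 0.6

def observe(n):
    """One car collects n labeled observations while driving."""
    data = []
    for _ in range(n):
        x = random.random()
        data.append((x, "pedestrian" if x > HIDDEN_THRESHOLD else "pothole"))
    return data

def train(pool):
    """Shared model: estimate the decision threshold from pooled data."""
    peds = [x for x, label in pool if label == "pedestrian"]
    pots = [x for x, label in pool if label == "pothole"]
    if not peds or not pots:
        return 0.5  # no information yet
    return (min(peds) + max(pots)) / 2

# Growing the fleet grows the shared pool, and the shared estimate
# tightens toward the true threshold for every car at once.
pool = []
for new_cars in (1, 10, 100):
    for _ in range(new_cars):
        pool.extend(observe(20))
    error = abs(train(pool) - HIDDEN_THRESHOLD)
    print(f"fleet data from {len(pool) // 20} cars, error {error:.4f}")
```

The point is Lipson’s counterintuitive accelerator in miniature: no single car drives any more miles, yet each retraining cycle on the pooled data makes every car in the fleet better at once.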
Hod, how close are automakers around the world to developing an intelligent operating system complete with “consistently accurate artificial perception,” which you call the crown jewel of AI research?
I think automakers are close to that. When you say automakers, people might think about the traditional automotive industry. But now this includes the Googles and Waymos and Apples and all these different software industries, which we are now combining in this big category called “automaker.” The race is on—not only to make it reliable, but how reliable is it? And how do you prove it’s reliable? And what kind of environment does it work in? Does it just work on California streets, or does it work on Manhattan streets? Does it work in India and China? And then there’s Baidu and Tencent [in China]. Companies around the world are working on this.
One thing we’re missing right now is some kind of transparency around reliability. That is part of why it is hard for me to tell you who’s at the forefront—because this data tends to be very secret and opaque. One thing that would help consumers would be some kind of metric system, a rating system, where governments could say, “If you think you have an autonomous vehicle that’s reliable, here’s the test. Show us your data, perform this test and we’ll give you a rating.” Just like MPG and horsepower. We have rating systems for vehicle performance and other areas. We need a rating system that will allow consumers to understand how safe autonomous cars are. Is it as safe as the average human driver? Is it twice as safe or 10 times as safe? Maybe if I pay more, I get a system that’s 20 times safer. This is where we want to be.
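The rating Lipson proposes, “how safe compared with a human,” reduces to a ratio of crash rates over the same exposure. Here is a minimal sketch of that arithmetic; `safety_multiple` and the crashes-per-million-miles figures are hypothetical illustrations, not an official metric.

```python
def safety_multiple(human_crashes_per_mm: float,
                    av_crashes_per_mm: float) -> float:
    """Times-safer-than-human rating: the ratio of crash rates,
    both measured per million miles driven."""
    if av_crashes_per_mm <= 0:
        raise ValueError("AV crash rate must be positive")
    return human_crashes_per_mm / av_crashes_per_mm

# Hypothetical example: a human baseline of 4.2 crashes per million
# miles against an AV fleet logging 2.1 would earn a "2x as safe" label.
print(safety_multiple(4.2, 2.1))  # → 2.0
```

A real rating would need agreed definitions of “crash” and comparable driving conditions, which is exactly the third-party standardization Lipson says is missing.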
Is any organization or company developing that safety rating system you were just speaking about?
I haven’t heard about anybody putting out a systematic safety rating system. It has to be a third-party system. Something simple would be ideal: How safe is it compared with a human? Two and a half times? Three times?
If you look at the automotive industry at large, and the software industry as well, the race now is not just on making a reliable car. It’s really on making a platform onto which other businesses can start creating opportunities.
Toyota just came out with e-Palette, for example. It’s a platform that is autonomous—on top of which you could conceivably build anything. You could build a storefront. You could build a hotel—a room that shows up and you can sleep in it overnight as you go somewhere. The idea is once this AV thing happens, it’s not just another car; it’s a platform onto which you can build every kind of business you can imagine—and some things you can’t imagine.
You’re saying you could sleep through the night in a motel room that travels from one place to another?
Exactly. So you can imagine a company that deals with rooms, an Airbnb of sorts, slowly expanding into this area. They might not want to develop the driverless car, but they would want to work with the platform and build their service on top of that.
We are talking about a huge new sector of the economy that’s going to expand. And the race is about who will provide that platform. We saw that with smartphones. There are only a couple of providers of cellphone platforms: the Android and the iPhone. Numerous businesses build on top of that and create new opportunities. We’ll see the same thing happen with autonomous vehicles.
One might be tempted to assume that automakers, with all of their experience using robotic labor, wouldn’t break a sweat adjusting to mobile robots that transport people and goods all over creation. But you say in the book that driverless cars would not recognize their brawny, simple-minded cousins on the assembly lines as fellow robots. Why is that so?
Because it all boils down to the perception problem. The robots that work in car factories do not have to deal with the problem of perception. Yes, they have to recognize a machine part or a component moving on an assembly line, but that’s a recognizable part. You can train the system. You can use all sorts of traditional machine vision techniques for that. But when it comes to recognizing arbitrary things on the road, from oil spills to potholes, to a ball, to a squirrel, to reflections of things, it’s a whole different ballgame. And until recently that problem has not been solved.
You also write about a study that estimates driving-related deaths in the US alone would fall from 32,400 a year to 11,300 a year if 90% of the cars on the road were AVs. But I get the feeling you don’t see this as the most likely catalyst for the adoption of AVs for the general public. What is the most likely catalyst, and when should manufacturers expect it?
Why will people buy these cars? At the end of the day, it’s convenience. Contrary to a lot of the images you might perceive from advertisements for cars, people don’t like to drive. Most driving in traffic to and from work is laborious, tedious and dangerous. So people will give that up in a second, given the option. At the end of the day, it’s going to be convenience: Once manufacturers can assure drivers that autonomous cars will drive, let’s say, twice as safely as the average human driver, people will flock to this technology and start doing other things in the car: working, reading, playing, sleeping.
The safety issue is very difficult to argue about. It’s usually the one that is used when it comes to discussing how urgent it is that we adopt this technology. There are 23,000 fatalities from cars a week around the world: 23,000 people are going to die from cars next week. That’s like a nuclear bomb going off once a month. But we’re completely numb to this. Car accidents don’t even make it into the local news, and yet that number can basically be brought down to almost zero.
That impacts everything—from what we do in the cars to the cars’ structure. A lot of the cost of a car and its weight and mechanical construction now has to do with safety and crashes. Once we can eliminate a lot of that, the car can change entirely.
How important is it that extensive investment in roadside infrastructure will not be needed for driverless cars to take off, thanks to machine learning?
There’s a big myth around that topic. AVs do not need any additional infrastructure beyond what conventional cars and drivers need: They need good roads, good lane markings and bridges and tunnels. That’s it.
Any talk about infrastructure, like transponders on the side of the road and V2V communications between vehicles, slows us down. Anything that requires infrastructure investment is a red flag for implementation. Municipalities and states would be very reluctant to implement anything that requires infrastructure. Infrastructure doesn’t scale very well. It requires maintenance. It requires operations. It requires all kinds of things. But it’s just not necessary.
You wrote that deep learning will change the development trajectory of mobile robotics in general. How so, specific to manufacturing?
The moment you have a machine that can perceive things around it, there’s a ripple effect not just in autonomous vehicles but any kind of robotics. Because robotics has been stifled by this challenge of understanding the environment. And that’s why robots have mostly worked in structured environments like factories.
Even in factories, for example, you can have robots that work side-by-side with humans. Robots will be able to do that because they will be able to see the people and therefore be safe around people, and they will be able to learn from people more easily, to copy what people are doing. They can be “programmed” by people just the way you teach a child by example. A lot of these things are enabled by the fact that machines can see and understand what they are seeing.
Other examples have to do with robots that will be able to handle a larger variety of tasks because they won’t need to confine themselves to just one type of component. They’ll be able to, for example, package different objects with different shapes and sizes without being programmed in advance. They will be able to understand what they’re seeing. They’ll be able to sift through and find defects in quality assurance, with much less training and careful calibration than is needed today. All of these different things will have a ripple effect on how we use robots in industry and will create new opportunities.
In the book, you challenged the assumption made by the US DoT and the Society of Automotive Engineers that it’s best for humans and robots to take turns at the wheel as we transition to AVs. What should manufacturers of future vehicles know about the limitations of “human in the loop” software?
Human-machine collaboration is a great thing, but not for driving.
Driving is a tricky thing to share. It boils down to this idea of split responsibility: When things are critical, you don’t want to split the responsibility, for example, between two people. Each of them will feel it’s safe to drop the ball because the other person will pick it up. Well, that’s exactly what happens when human drivers and AI share the driving: Neither of them is 100% responsible, and that can create a problem.
When you look at some of the recent unfortunate crashes that have happened with driverless cars, typically there was a human driver there ready to take over, but they didn’t have the time to take over or they weren’t ready to take over. It’s false comfort. We have to get away from this idea. In fact, this idea of relying on humans sharing the driving becomes deceptively dangerous the closer the machine is to 100%. We have to skip that phase and go straight to 100% autonomous—and get there as quickly as we can.
You wrote about how driving is tedious and how it’s inappropriate for humans to be in the loop. And you say that because of the tedium, humans are exceedingly happy to let machines take over. But that’s not established publicly. How do we get to that point?
Everybody agrees that Level 4, where cars are almost fully autonomous, is a transition point. It’s not going to last long. And the moment there are fully autonomous vehicles, why would you want to mess with a vehicle that’s 95% autonomous? I think we’ll quickly sail through that piece and at some point nobody will argue about it anymore. I don’t think it’s a matter of convincing anybody; it’s just a matter of getting to Level 5 sooner rather than later.
Air France flight 447 in 2009 was an example of a failed handoff between machine control and human control: More than 200 people perished. How did that not turn into a rallying cry for taking humans out of the loop?
First, most people don’t know the details of that case—where I think it was a malfunction with the autopilot, specifically with the speed sensor, and the handoff did not work. Either the pilots weren’t ready or they weren’t trained enough. But you also can look at the situations with the recent driverless cars that crashed: The human wasn’t ready to take over. It’s a complete misconception [that humans can be in the loop].
But somehow we have this feeling that if there is a human in the loop, there is someone we can point a finger at and say, “Okay, that person is responsible. We’re good.” It’s a false comfort to feel there’s a human there.
We need that comfort, I guess. And it’s going to stick with us until we have a fully autonomous vehicle twice as reliable as the average human driver. That’s the point where most people will say, “Okay, even though I like the feeling of having a human in control, if a reliable, independent party tells me the car drives itself twice as safely as a human driver, I’m ready to relinquish control completely.”
It has to be [at least] twice as safe. It cannot simply be just as safe as the average human driver because most people believe they’re better than the average human driver.
You’d have to be crazy to insist on driving a car when the AI is proven to be 10 times safer than the average human driver. You’d have to be insane to put your kids in a car where you’re insisting on driving yourself, as the average human driver, when the car can drive itself 10 times more safely.
I think it will be a no-brainer pretty soon.
Where outside of the US are companies and government agencies showing leadership?
A lot of companies recognize this is a huge potential benefit. It’s not just about AVs. It’s not just about automotive. It’s also a ripple effect throughout the economy—from agriculture to real estate. There are many reasons to try to promote this kind of technology. And you see a lot of work in Europe and Asia. Singapore, for example, is leading work on this. China is putting incredible efforts into this area. Really every corner of the planet.
I’ve had people contact me from developing countries. They’re saying, “We want to leapfrog public transportation and go straight to autonomous. What do we need to do to get there? What laws do we need to change to allow the Googles and the Apples to test their vehicles on our soil rather than in the US?” So there’s an understanding that this is “now.” It’s a game of perfecting the technology, of collecting miles. Which company can get the most mileage on the road to train their systems? Wherever you can do that, that’s where your cars are going to be trained, and that’s the first place that’s going to be deployed.
When you published the book in 2016, you noted that no robotics operating system could claim to have fully mastered the three crucial capabilities: real-time reaction speed, 99.999% reliability, and better than human-level perception. Do you have any update?
Yes, I think we’ve reached that point. Those three things are doable. In fact, each one of them separately was accomplished already at the time of the book’s writing—perception being the last piece of the puzzle. I do not have access to actual data from these companies. But it seems like all three have been reached in several of the autonomous vehicles that are out there right now.
You wrote that Google has many employees working in Bangalore on data-driven driving. How much of the work on driverless cars that will be sold in the US is being done outside the US, and where are the hotspots?
That’s difficult to answer because a lot of the work is being done in stealth. There are also a lot of small startups that are acquired. So who owns what is a changing landscape.
But, like many other software-driven technologies, there’s relatively little infrastructure required in order to get off the ground, which makes it very easy for companies all over the planet to start. This is a software-based race. Together with other things happening in the automotive industry, like electric vehicles and ride sharing, it means anybody around the planet can pick up this challenge and start working on different aspects of it.
We are seeing this happen everywhere: India, Singapore, China, Europe. This technology is easy to move around because it’s not infrastructure based. If you develop an algorithm that can drive more safely, you can deploy that into any vehicle around the planet. This is a very mobile technology that in a sense makes the world flat.
Increasingly, it’s a competition of who can get the most data. Tesla, for example, is offering free charging in China in return for access to the data.
You wrote that mid-level control software is kind of the Holy Grail for AVs. What constitutes this software, and why is it so important?
Mid-level control is a term we coined to try to articulate what is difficult about making a driverless car. The high-level control—how to get from point A to B with traffic, without traffic, path planning—was solved decades ago. The low-level control—feedback control of how to drive a car in a straight line, make a turn in real time, stay on the road, accelerate smoothly—was also solved decades ago.
What remained a challenge up until about five years ago was what we call the “mid-level control,” the control that has a horizon of maybe one minute. “How do I cross this intersection? How do I merge into traffic? How do I avoid this obstacle? How do I navigate these two pedestrians and a bicycle and another car coming from a different direction that may or may not turn?”
It’s a challenge you don’t have when you’re … driving an autonomous-guided vehicle in the factory. You don’t have it when you have autopilot on an aircraft. You don’t have it in these other kinds of autonomous systems. You don’t have it in agriculture or in a mine. But you do have it when you’re trying to cross a busy city intersection. That’s the mid-level horizon control. That’s been the hardest challenge. And that’s the challenge that has been solved with deep learning in the last couple of years.
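The three-layer split Lipson describes can be sketched as a toy control stack. All names and rules below are hypothetical illustrations of the split, not a real vehicle’s software: the high and low layers are trivial here precisely because, as he says, they were solved long ago, while the mid-level layer is the one that depends on perception.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Obstacle:
    kind: str          # e.g. "pedestrian", "bicycle", "car"
    distance_m: float  # distance ahead, in meters

def high_level_route(origin: str, destination: str) -> List[str]:
    """High-level control: A-to-B path planning (solved decades ago)."""
    return [origin, "Main St", "5th Ave", destination]  # placeholder route

def mid_level_maneuver(obstacles: List[Obstacle]) -> str:
    """Mid-level control: the roughly one-minute horizon. Decide how
    to negotiate the intersection given what perception reports."""
    if any(o.kind == "pedestrian" and o.distance_m < 20 for o in obstacles):
        return "yield"
    if any(o.distance_m < 10 for o in obstacles):
        return "slow"
    return "proceed"

def low_level_control(maneuver: str) -> dict:
    """Low-level control: real-time actuation (also long solved)."""
    throttle = {"proceed": 0.4, "slow": 0.1, "yield": 0.0}[maneuver]
    return {"throttle": throttle, "brake": 1.0 if maneuver == "yield" else 0.0}

scene = [Obstacle("pedestrian", 12.0), Obstacle("car", 35.0)]
decision = mid_level_maneuver(scene)
print(decision, low_level_control(decision))  # → yield {'throttle': 0.0, 'brake': 1.0}
```

The hard part, per the interview, sits entirely inside `mid_level_maneuver`: in a real car its inputs come from deep-learning perception rather than hand-labeled obstacles, and the decision rules are learned rather than written by hand.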
You mentioned the possibility of having three visual perception subsystems in AVs. Isn’t it all going to be so expensive in the end that no one other than a corporation will be able to afford to own AVs?
On the contrary. There should be multiple systems just for reliability. But remember, these are just sensors and software, which are cheap. Their prices follow Moore’s law: Price performance doubles every so many months. And the marginal cost of deploying software is nearly zero. If you have three or five systems, it’s really not that much more expensive. Cameras, for example, are a dime a dozen. If you want to have 50 cameras around the car, it’s not that much more expensive.
In contrast, you take out things that humans need. The dashboard, for example, is very expensive in a car. All the instruments and controls will be replaced with apps. That’s software, so that scales.
The electrification of vehicles also takes away a lot of the complexity. Overall, as cars become autonomous and electric, we are looking at both simplification and substantial cost reduction in parts.
Autonomous cars will be more affordable. That leads me to this other myth that somehow car ownership will drop. I think car ownership will only increase.
Why? Isn’t the idea to have fleets of cars that serve people, so people wouldn’t have to own cars anymore?
That’s a misconception. Yes, there are going to be fleets that serve people. But they will cater to one small [market segment]. Frequently, if you’re looking at mobility as a service—getting from point A to point B like an Uber—yes that makes sense. It makes sense also in an urban setting. But for many reasons, it doesn’t solve the entire picture.
One reason is that model doesn’t work well in rural areas. I don’t want to wait 15 minutes for some vehicle to show up in my driveway if I’m living in a rural area; I want the car to be there waiting. I want to own it.
Another reason is people like to own things. People own their phones, for example.
And finally, if you have a child, you want to own the car. You don’t want to schlep stuff into the car; you want to leave it there. And that’s only going to increase: If you’re going to be able to work or sleep or play in the car, as is inevitably going to be the case in autonomous vehicles, you want to keep your stuff there. You want to have your own bed, your own chair, your own desk, your own toys.
And so increasingly, people will want to have their own car for convenience, for immediate access, for the feeling of ownership.
What is your message today to the manufacturing world regarding AVs?
My message is, no matter what business you’re in, what industry, what sector, what discipline, AVs are going to have a ripple effect. Even for small manufacturers in rural areas. The fact that transportation will become a lot cheaper and a lot smoother could mean that rural manufacturing will become profitable.
For example, transportation of goods from rural areas into the city is a big burden for rural manufacturers. And suddenly it makes them viable. So, whether you’re a rural manufacturer or not, you should care about that. Regardless, it means you’re going to have more competition. It means goods can be [more easily] shipped around. The delivery models are going to change. E-commerce is going to change. It’s going to mean that real estate value is going to change, which will impact manufacturing. It means that purchasing patterns are going to change.
It doesn’t really matter what business and sector you’re in. This technology is going to change things for you, and it’s a tsunami heading our way.