The age of autonomous vehicles


Description

Self-driving cars and planes are in our future. What are we doing to make them safe? Assistant Professor Houssam Abbas uses tiny race cars to test autonomous driving systems. And Oregon State graduate Robert Rose is using his past experience with SpaceX to develop a safe system to automate existing aircraft.

Assistant Professor Houssam Abbas (right) works with students on building a one-tenth scale autonomous race car in the College of Engineering at Oregon State University. Photo by Hannah O'Leary.

Season number
Season 9
Episode number
7
Transcript

[audio from car race in Kelley Engineering Center]

ROBERTSON: Picture the scene: the airy, four-story atrium of the Kelley Engineering Center at Oregon State, a space usually filled with students studying and sipping coffee, has been turned into a six-foot-wide racetrack. A miniature car circles the track as intent students follow its progress. It looks like fun, but their grades are on the line.

ABBAS: Today is the final event in the F1/10 autonomous racing class. Instead of a written final, it's basically a race, a race against time where the teams get to see which autonomous car can go around the track the fastest.

ROBERTSON: That is Houssam Abbas, assistant professor of electrical and computer engineering. His research and teaching are focused on the safety and reliability of autonomous systems, which is the topic of our podcast today. This is the final episode in our season on robotics and AI. And me? I'm your host, Rachel Robertson.

[MUSIC: “The Ether Bunny,” by Eyes Closed Audio, used with permission of a Creative Commons Attribution License.]

NARRATOR: From the College of Engineering at Oregon State University, this is “Engineering Out Loud.”

ROBERTSON: You can actually see what this race looks like if you check out the video listed in the bonus content for this episode at engineeringoutloud.oregonstate.edu. 

[MUSIC: Retro by Wayne Jones, used with permission from the YouTube Audio Library.]

The appeal of tiny race cars zipping around a track is undeniable. I'm not a car person, and even I have to admit it's pretty fun. But Houssam will tell you, these are not toys. And the thing to know about this race is that it's not about who has the most souped-up car; it's about who has the best algorithms.

ABBAS: The objective behind the class is to introduce the students to the field of autonomous driving of autonomous vehicles. But really using that as a way to introduce them to the perception, planning, and control stack, to fundamental algorithms in perceiving the world, fundamental algorithms in planning your path in the world, and fundamental algorithms for controlling your robot to execute that path that you planned.

ROBERTSON: And while the miniature cars are a good way to motivate the students to learn, they are also a serious platform for research. As the reality of self-driving vehicles gets closer, rigorous testing becomes critical. But the price to do this on a full-scale car is beyond what most researchers can pay. A base model can cost over $300,000. Houssam says one loaded with everything he would need for his research would set him back about $700,000. Then there is safety. If something goes wrong, the damage caused by a one-tenth scale car would be far less than from a normal car. There's more to tell you about the cars and car races. But first let's get to Houssam's research, which he boiled down to two major questions.

ABBAS: One is the control question. How do we design controllers which can take in a high-level mission description, such as go from A to B within a certain amount of time, and this is how you need to react to other agents in your environment?

ROBERTSON: I’m going to break in here to tell you that an agent is an autonomous mobile entity like a pedestrian or a self-driving car.

ABBAS: How do we take that high-level description and turn it into low-level control actions for the car, for one car, but also for a group of cars? How do we do that sort of computation in a distributed fashion? How do we do that sort of computation in a secure fashion, secure from attacks, from hacks, from outside the car, but also hacks that might affect the physics of the car, the sensors, et cetera?
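[Editor's note: a minimal, hypothetical sketch of what turning a high-level mission into low-level control actions can look like for a single car. The controller, gains, and numbers below are invented for illustration and are not Abbas's actual method.]

```python
import math

def mission_to_controls(state, goal, time_left, max_speed=2.0):
    """Illustrative only: reduce a high-level mission ("reach `goal`
    within `time_left` seconds") to low-level steering and speed commands.
    state is (x, y, heading) in meters/radians; goal is an (x, y) point."""
    x, y, heading = state
    gx, gy = goal
    dx, dy = gx - x, gy - y
    distance = math.hypot(dx, dy)

    # Steering: turn toward the goal, proportional to the heading error.
    heading_error = math.atan2(dy, dx) - heading
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    steering = 0.8 * heading_error  # gain chosen arbitrarily for this sketch

    # Speed: fast enough to meet the deadline, capped at the car's limit.
    required_speed = distance / max(time_left, 1e-3)
    speed = min(required_speed, max_speed)
    return steering, speed

# Example: car at the origin facing east, goal about 10 m away, 8 seconds left.
print(mission_to_controls((0.0, 0.0, 0.0), (7.0, 7.0), 8.0))
```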

ROBERTSON: His techniques are already being applied by car companies for testing engines. And he is collaborating with Intel Corporation to identify and fix potential security attacks on autonomous systems that could lead to safety risks. 

ABBAS: The second direction that my research in autonomous systems is taking right now is to understand the norms that these autonomous systems must obey. So this is a question that is broader than just autonomous vehicles. So, I'm trying to understand and develop a mathematical language which allows us to speak about the norms and obligations of autonomous systems towards humans and their environment, but also towards other autonomous artificial agents in their environment. How do we learn those norms? How do we formalize them? How do we verify them? How do we control the system to make sure that it obeys them?

ROBERTSON: That brings up the more basic question: What is autonomy?

ABBAS: If you consider an artificial agent to be autonomous, does that mean automatically that it is responsible for its actions? And what does that responsibility look like? Do we punish it if it takes the quote unquote wrong actions? Do we reward it if it takes the quote unquote right actions, et cetera? But the question is actually simpler in the specific case of autonomous vehicles because there seems to be, generally speaking, an agreement on levels of autonomy and what they look like. So if we're talking about a fully autonomous vehicle, most people would agree it's the one that you tell, take me from A to B. And it would do that. It would take you from A to B while respecting all applicable traffic laws, while having a more or less naturalistic driving style. Although, do we want it to be very naturalistic? That's an open question. And obviously it would be safe. Of course, you know, every time I introduce a new word, that it is safe, it is reliable, it is naturalistic driving, each one of those words needs to be unpacked further, but at the highest level, I think most people agree on that. That is what an autonomous vehicle does. That is its function.

ROBERTSON: But, then, how does it do that? Specifically, I was wondering how much autonomous systems rely on artificial intelligence.

ABBAS: Today in the research prototypes that are being developed, and some of which are already rolling on the streets in Arizona for example, there's quite a bit of systems that involve some level of artificial intelligence. Typically within that there is machine learning, and within that there is neural networks. And so, it seems that most artificial intelligence that's onboard autonomous vehicles today is of the latter variety, neural networks. And they are employed in the perception, they are employed in the planning, and perhaps a little bit on the control level.

[MUSIC: Connection by Wayne Jones, used with permission from the YouTube Audio Library.]

ROBERTSON: Neural networks are a type of AI that has been around for more than 70 years and is back in fashion as processing power has increased. Inspired by the biology of the brain, neural networks can have millions of connected nodes that receive and send data. In this way, the system can learn tasks from examples.
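[Editor's note: as a rough illustration of "connected nodes that learn from examples," here is a minimal sketch of a tiny neural network's forward pass. The sizes and random weights are made up for illustration; this is not code from any of the systems discussed.]

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """A tiny two-layer network: each 'node' weights its inputs, adds a bias,
    and applies a nonlinearity before passing the result on. Training would
    adjust W1, b1, W2, b2 so the outputs match labeled examples."""
    hidden = np.maximum(0.0, W1 @ x + b1)   # ReLU hidden layer
    return W2 @ hidden + b2                 # output layer

rng = np.random.default_rng(0)
x = rng.normal(size=4)                        # e.g., four sensor readings
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)
print(forward(x, W1, b1, W2, b2))             # two output values
```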

ABBAS: So the software stack for an autonomous vehicle can be thought of as being composed of three layers. The perception, which takes in the raw data and turns it into actionable information. So from pixels in an image, it spits out, here are the objects in the scene, here's where they are, here's how fast they're going and which direction they're going. So that perception layer has a lot of neural networks and machine learning components in order to make sense of the raw data. Below that there is the planning layer, which says, okay, well, I need to go from A to B, given what's out there and what my mission is. That also has machine learning in it, usually of the reinforcement learning variety. And below that there is the lowest level of control, which tells how fast the car should go, how fast to turn the tires. That one is much more classical, let's say. So most of the machine learning components are on the first two layers, perception and planning, and they're heavily used.
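[Editor's note: to make the three layers concrete, here is a minimal, hypothetical sketch of how a perception-planning-control loop might be wired together in code. The function bodies are placeholders invented for illustration, not the actual F1/10 or production software.]

```python
def perceive(raw_sensor_data):
    """Perception layer: turn raw data (pixels, lidar scans) into actionable
    information, e.g., a list of objects with positions and velocities."""
    # Placeholder: a real stack would run neural-network detectors here.
    return [{"kind": "car", "position": (12.0, 3.0), "velocity": (-1.5, 0.0)}]

def plan(objects, mission):
    """Planning layer: given what's out there and what the mission is,
    choose the next few waypoints to follow."""
    # Placeholder: a real planner might use search or reinforcement learning.
    return [(1.0, 0.0), (2.0, 0.2), (3.0, 0.5)]

def control(path, state):
    """Control layer: turn the next bit of path into actuator commands,
    i.e., how much to steer and how fast to go."""
    next_x, next_y = path[0]
    steering = 0.5 * (next_y - state["y"])    # toy proportional steering
    throttle = 0.3                            # toy constant speed
    return steering, throttle

# One tick of the loop, with data flowing down through the three layers.
state = {"x": 0.0, "y": 0.0}
objects = perceive(raw_sensor_data=None)
waypoints = plan(objects, mission="go from A to B")
print(control(waypoints, state))
```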

ROBERTSON: If you hadn't thought of it already, you might now realize how complicated an autonomous vehicle system is. As Houssam just described, it's not just one system, but three major systems (perception, planning, and control) that have to work together. In order to do this safely and reliably, it's going to take a lot of people working on this problem to figure it out. So what Houssam and his colleagues did is they hatched a plan to get more people working together on the issues of autonomous vehicles. That plan has funding from the National Science Foundation, but it started out as a car race held at international conferences a couple of times a year. Anyone who had an F1/10 car and wanted to test their system against others could enter the race.

ABBAS: First, it started out as a hobby. Like, let's see what we can do with this platform. 

[MUSIC: Juno in the Space Maze by Loopop, used with permission from the YouTube Audio Library.]

But then as we held these races and competitions and as we started talking about it to others, we realized that there's a real demand for a physical platform on which to test all sorts of algorithms concerning autonomy. So algorithms in real-time systems, algorithms in mixed-criticality systems, algorithms in planning, algorithms for machine learning, algorithms for verification, for testing, for control, for online monitoring, for distributed monitoring, for connected traffic. Basically a lot of research areas. There were people in those areas that wanted a physical platform, but the barrier of entry was pretty high. It takes a very diverse set of skills, you know, to know where even to start with something like this. If you are, let's say, a researcher in model checking, which is an area of theoretical computer science, then you probably don't know where to start. So given all this demand and all this interest that we saw, we decided, okay, we need to take this nationwide and we need to get organized a bit more. We can't be doing this basically as a part-time thing. Right?

ROBERTSON: The “we” that Houssam is referring to includes Rahul Mangharam, associate professor at the University of Pennsylvania, who is the lead principal investigator on the grant, and Venkat Krovi, the Michelin Endowed Chair of Vehicle Automation at Clemson University. Together they are building 80 cars that will be distributed to 30 labs across the country that are researching different aspects of autonomous vehicle systems. The researchers receiving cars will contribute by adding to the design of the system. The main product of the grant project is not the cars themselves -- those will be donated to high schools and other groups when the project is done. The final product is the infrastructure that allows researchers to buy parts, download code and documentation, and build their own research or teaching platforms.

ABBAS: It's about being part of this community. Right? It's being part of the community of researchers that are working on autonomy and how do we coalesce this community a bit more tightly? How do we speak each other's languages? The F1/10 is one way of doing that. It is also about finding avenues for collaboration. So it's not just, here we are, we have this one vehicle which can serve multiple research communities. That is also, you know, my collaboration with people who do reachability analysis, people who do real time systems, et cetera. These are research areas that are not at the core of my research right now, but who knows, they might become depending on how this goes.

ROBERTSON: So, Houssam has a three-pronged approach for improving the technology of autonomous vehicles: support the community of researchers, educate future researchers, and advance his own research.
And although tiny autonomous vehicles are fun, there's got to be more to it than that, right? To be putting all this effort into it. I asked him to talk about what the promise is. How will it make our lives better?

ABBAS: The promise of autonomous transportation, perhaps, is how we can phrase it. It's not the promise of an autonomous vehicle, right, that Bob can sleep while his vehicle takes him from A to B. If you imagine some road network where all the vehicles are autonomous, then that traffic system will be safer than a traffic system that is all human driven.
So the promise of autonomous transportation is that it would be safer, meaning fewer accidents, and it would be more efficient because once everything is automated, then conceptually it's like one big train, but on the road network. So cars can drive closer together, there's less aerodynamic drag, et cetera. These concepts, in particular that last one, the concept of platooning, have been around for a number of years, and now it's seeing a resurgence with the arrival of, uh, you know, promising autonomous technology.

[MUSIC: Sun Machine One by Loopop, used with permission from the YouTube Audio Library.]

ROBERTSON: His description conjures up a futuristic vision that could revolutionize how we get around in the world today. The arguments of safety and efficiency are pretty compelling for me. I looked up some recent statistics from the U.S. Federal Highway Administration. They reported that in 2017, one person was killed every 14 minutes and an estimated 5 people were injured every minute in motor vehicle crashes. But what are the dangers of autonomous vehicles? I asked Houssam what worries him.

ABBAS: My worry is of two varieties. One is, do we really understand the impact that these autonomous cars might have on less well-off communities? Because, um, as these cars come to be perceived as being safer and more affordable within a certain bracket of affordability, then that might mean a greater reliance on individual modes of transport, if we just follow the same old way of doing things, which means public transportation is funded even less. So that's one example of the impacts of autonomous vehicles that really needs to be studied carefully. And, uh, it is a technical question, but it is also a political and policy question. The second concern is about ensuring the safety of these cars. Do we understand the behavior of these robots, of these autonomous robots? To what degree do we understand that? How well can we guarantee that they are going to function correctly? How well do we understand what correctness looks like for a car without a human in the loop? Those are the sorts of questions I study.

ROBERTSON: It’s going to take a while to figure that out, but Houssam predicts in 10 years we are likely to have some kind of autonomous transportation that can operate in simple environments. We’ve been focusing on cars in this episode, but what about self-driving planes? It is a real possibility for our future. In fact, one of our alumni is working on developing a system for autonomous air transportation. I asked Houssam if that would be easier than self-driving cars.

ABBAS: Yeah, yeah, it is. Even though it's three dimensional, so a priori, it's like you have more decisions to make. But there are fewer obstacles. Right? There are fewer things that you could run into. There are no people walking around, no cats jumping in front of you. So in that sense, it is a simpler environment to navigate.

ROBERTSON: As I mentioned, one of our alumni is working on this. His name is Robert Rose and he actually has three degrees from Oregon State, a bachelor’s in computer science, and a bachelor’s and master’s in electrical and computer engineering. There’s a funny story about how he ended up with two bachelor’s degrees. Here’s Robert to tell the story.

ROSE: So, I started out as a CS major and then immediately switched to computer engineering because I wanted to challenge myself and learn something different. I thought at the time, um, this was the late nineties, graphics processors were starting to become a big deal. And so I thought this would be an interesting career trajectory to learn more about how graphics processors are developed and then go work in that industry. Anyway, through that course I ended up taking the whole chain of classes for computer engineering, which has a lot of overlap with computer science. And then I realized, literally when I was walking to graduate, that wait, I only need to take five more classes or whatever and get a double major in CS. So I, I ended up not walking and going back to the Kerr Admin Building. Is it still Kerr? 

ROBERTSON: Uh-huh

ROSE: Kerr Administration Building, and canceling my diploma. So, I could change to a double major and take the CS courses. And I'm glad I did.

ROBERTSON: Robert started his career in game programming, but switched to autonomous systems, including a position at SpaceX as the director of flight software. He is now the co-founder and CEO of Reliable Robotics.

ROSE: So we are building a system that enables remote operation of an existing aircraft in the US national airspace. We're an avionics and software company, primarily. Our goal is to build a system that you can put onto an existing aircraft that enables remote operation. 

ROBERTSON: He says it like it's no big deal, but it seems futuristic to me. And I wondered how he even came up with the idea to do this.

ROSE: Well, so, late 2016, early 2017, I was looking around at what was happening in self-driving cars. I used to run the Autopilot program at Tesla. And this epiphany occurred to me that we're going to have self-driving cars before self-flying planes. This doesn't make any sense. The airspace is a much more understood -- well understood and constrained environment. So why isn't anybody looking at this? So, I did some research into why nobody else was looking at this problem, and I couldn't find answers that were satisfactory, and it almost seemed like we were stumbling upon this area that no one else was thinking about yet. That you can automate existing aircraft.

ROBERTSON: Robert says there are a couple reasons why we might want self-driving planes. One is that the number of new pilots is not keeping up with the number of pilots who are retiring. The FAA data for 2018 shows a 6.5 percent drop in licensed commercial pilots over the last 10 years. But also, similar to cars, safety is a concern. And Robert had an idea for how to make improvements to current aviation systems.

ROSE: In many cases, when you read these accident reports, the automation on board recognized that there was a problem instantaneously. So we were asking this question, well, why didn't the automation just respond to it? The answer is kind of two parts. One, there's an expectation in the industry already that you have a human being sitting there monitoring the system. So if you've got the human sitting there, why not just use them? So, you end up developing a fault management strategy that has the human being as the last link in the chain. So the automation can recognize the fault and alert the human operator. And then the human operator can intervene. In order for the fault management system to also respond -- we call this fault detection, isolation, and recovery -- in order to recover from the event, uh, you need to prove that that recovery will not in itself result in a catastrophic situation. So that means adding additional levels of redundancy to the system. So anyway, looking back in early 2017 when we were researching this problem, uh, we thought, well, nobody else is looking at this currently and we actually have the experience. I also ran flight software at SpaceX, and at SpaceX we developed a two-fault-tolerant spacecraft that went to the International Space Station. It's certainly possible to engineer these types of systems. No one has yet applied it to aviation. So our goal is to apply it to aviation and bring about a world where it's just normal that you get onto an aircraft and it flies itself, just as normal as you get onto an automated subway or train system today.
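[Editor's note: a rough, hypothetical sketch of the fault detection, isolation, and recovery pattern Rose describes, contrasting the human-as-last-link strategy with an automated recovery step. The fault names, fields, and callbacks are invented for illustration and are not Reliable Robotics' actual design.]

```python
from dataclasses import dataclass

@dataclass
class Fault:
    name: str
    component: str          # isolation: which subsystem is affected
    recovery_is_safe: bool  # has the automated recovery itself been proven safe?

def handle_fault(fault, alert_pilot, switch_to_backup):
    """Detection and isolation have already happened by the time we receive
    a Fault; this step decides who responds. Illustrative only."""
    if fault.recovery_is_safe:
        # Automated recovery: e.g., fail over to a redundant unit.
        switch_to_backup(fault.component)
    else:
        # Traditional strategy: the human operator is the last link in the chain.
        alert_pilot(f"Fault in {fault.component}: {fault.name}")

# Example wiring with stand-in callbacks.
handle_fault(
    Fault(name="air-data disagree", component="airspeed sensor",
          recovery_is_safe=True),
    alert_pilot=lambda msg: print("ALERT:", msg),
    switch_to_backup=lambda c: print("Switching to backup:", c),
)
```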

[MUSIC: Resolution by Wayne Jones, used with permission from the YouTube Audio Library.]

ROBERTSON: As we talked about in the first episode of this season on robotics and AI, it’s surreal to be seeing the things that we imagined as kids becoming a reality. Both Robert and Houssam envision a future that is safer than our world today. It is a worthy goal in the face of the tragedy and trauma of human-caused accidents that have touched many of us. 

This episode was hosted and produced by me, Rachel Robertson, with help from my friends as always. I want to mention the whole team of Engineering Out Loud who have all pulled out their best efforts for this season. Jens Odegaard is the executive director and host, I’m the creative and technical director, Steve Frandzel is the senior producer and host, Keith Huatala is a producer and host, Owen Perry is a producer and host, and Chris Palmer is a new producer and host this season. Behind the scenes, Johanna Carson and Gale Sumida do visuals and marketing, Molly Aton is our student audio editor, and Jack Forkey does the artwork and graphic design. Nice work, everyone.

Our intro music is “The Ether Bunny” by Eyes Closed Audio on SoundCloud and used with permission of a Creative Commons attribution license. Other music and effects in this episode were also used with appropriate licenses. You can find the links on our website. For more episodes, visit engineeringoutloud.oregonstate.edu or subscribe by searching “Engineering Out Loud” on your favorite podcast app.
 

 

Featured Researchers