Can we trust artificial intelligence to make good decisions? The answer is a resounding maybe. More and more, society and individuals are entrusting AI to make potentially life-changing decisions. Rather than putting blind trust in the judgment of these remarkable systems, Professor Alan Fern and a team of computer scientists want to reveal their reasoning processes.
Artificial intelligence systems are being entrusted with critical choices that can change lives. Alan Fern, a professor of computer science, wants them to explain themselves.
[SOUND EFFECT: Traffic noise, car door closing, used with permission under a Creative Commons license]
STEVE FRANDZEL: Hi.
SELF-DRIVING CAR: Hello, how is your day going?
FRANDZEL: Good, thanks. Wow, this is cool. This is my first time in a self-driving car.
SELF-DRIVING CAR: That is exciting. I remember my first passenger.
FRANDZEL: Really? Wow!
SELF-DRIVING CAR: He did not leave a tip.
FRANDZEL: Oh, sorry.
SELF-DRIVING CAR: What’s your destination?
FRANDZEL: Union Station.
SELF-DRIVING CAR: Certainly. We should arrive in about seven minutes. Sit back and enjoy the ride. Do you have any musical preferences? I like to rock.
FRANDZEL: Whatever you choose.
SELF-DRIVING CAR: OK. Here’s something that a smart elevator friend of mine wrote. I hope you like it.
[MUSIC: Local Forecast-Elevator, by Kevin MacLeod, used under a Creative Commons 3.0 license]
FRANDZEL: That’s really… interesting.
[SOUND EFFECT: Tires screeching; Car horns, by 2hear, used under a Creative Commons 3.0 license]
FRANDZEL: Whoa, that was close!
SELF-DRIVING CAR: Yes, that was close, but it’s all good.
FRANDZEL: How did you know what to do?
SELF-DRIVING CAR: It was the safest option.
FRANDZEL: But how did you know? How did you figure it out so fast? You must have gone through some process.
SELF-DRIVING CAR: Yes, right. I did. There was a car. Two cars. I saw something. Um. So, how about those Seahawks?
FRANDZEL: OK, I made that whole thing up. I’m not even a Seahawks fan. But that conversation won’t always be so far-fetched. When the day does arrive that we start hopping into driverless cars, it’s going to require a lot of faith.
[MUSIC: Elephants on Parade, by Podington Bear, used under a Creative Commons Attribution-NonCommercial License]
Faith, mostly, in the artificial intelligence that controls the car and keeps you safe, maybe even saves your life. But will faith alone translate into unshakable confidence in the car’s ability to make the right decision every time? For many AI experts, no, it won’t. Better to know what’s going on inside the AI’s black box, so to speak, why it makes the choices it does, and how it assimilates information and experiences to formulate future decisions. These experts want motive, and they want to know how AI thinks. They want what’s called explainable AI. Welcome to “Engineering Out Loud,” I’m your host Steve Frandzel, and in this episode I’ll do my best to explain explainable AI.
[MUSIC: The Ether Bunny, by Eyes Closed Audio, used with permission under a Creative Commons Attribution License]
FRANDZEL: Here’s a simple definition of artificial intelligence that I like: Software that simulates intelligent behavior and human-like reasoning to perform tasks. Classical AI operates within the bounds of a set of static rules. Tax accounting software is an example. It mimics the expertise of a tax preparer, and it does it very well. But when the tax code changes, humans have to update the software. That is not the type of AI we’re interested in today. We’re interested in powerful subsets of AI, like machine learning, deep learning, and artificial neural networks, which can learn and adapt through training, experience, and repetition, like a person does. So in this episode, when you hear artificial intelligence, or AI, that’s what we mean.
ALAN FERN: It’s hard to imagine intelligent systems that don’t have a learning capability. That seems to be one of the things that may define intelligence at some level.
FRANDZEL: That’s Alan Fern, a professor of computer science.
FERN: I do research in artificial intelligence. I’ve been here 15 years doing that and still having fun.
FRANDZEL: He’s also Oregon State’s principal investigator in a $6.5 million, multi-university research project. It’s funded by the U.S. Department of Defense to develop AI systems that explain their decisions. We’ll get to that later. The ability to learn is crucial for machines that operate independently in dynamic, unpredictable environments. How would it work out if my taxi was programmed with basic rules of the road and a few guidelines like “don’t hit anybody,” and then set loose? That’s not so different from how a teenager starts out. But each time that kid gets behind the wheel, they learn a little more, get a little better. We hope.
AI permeates our world. It recommends Netflix movies and thrashes champion Jeopardy players. It’s behind facial recognition software and it’s critical to cybersecurity and cyberwarfare. It’s in your life, somehow. From a purely technological standpoint, this is heady stuff. But Alan advises that you keep a few things in mind. The first one is — and let’s clear this up right now:
[MUSIC: Lullaby for a Broken Circuit, by Quiet Music for Tiny Robots, used with permission under a Creative Commons Attribution License]
FERN: AI systems do not have feelings. We don't need to think of them as having a consciousness. They’re just machines, just software. Wrapped up with all of that is this idea that somehow the machines are going to have ill intent towards humans and want to break free of human control. Right now, I think we’re so far away that it’s something that I personally don’t worry about.
FRANDZEL: A second thing to remember: If you evaluate the intelligence of AI the same way you’d measure people or even other animals, you’ll find that it’s not too bright. Once AI wanders beyond its comfort zone, it falls on its virtual face.
FERN: People right now don’t appreciate the low level of intelligence that AI systems really have. You can, for example, see a computer program that can beat the world champion in chess or a computer program that learns to beat the world champion in Go. The fact of the matter is you can’t even ask these systems to play a slightly modified game of chess or a slightly modified game of Go. If you slightly vary the rules and you say, okay, I’m going to change the rule by a little bit, a human would be able to very easily, maybe not play optimally, but they would do some reasonable things given the new rules of the game. The current AI systems would have to train for millions and millions of games at the new, slightly modified rules to ever get off the ground there. The reality is the systems are very limited still.
FRANDZEL: And the third thing:
FERN: These systems also have no common sense. No common sense whatsoever.
[MUSIC: Lullaby for a Broken Circuit, by Quiet Music for Tiny Robots, used with permission under a Creative Commons Attribution License]
FRANDZEL: If you tell an AI system that you put your socks in a drawer last night, then ask it the next morning where to find them, it’ll stare at you in bewilderment. If it had a face. AlphaGo, the first computer to beat a professional Go player, didn’t even know that Go is a board game.
FERN: Remembering that they have no common sense is very important, especially if you’re going to be willing to put these systems in control of something important. There’s definitely risk of companies or organizations racing to put AI systems in applications that may be safety critical before they’re really ready. You think about the Boeing autopilot, right? You could say that’s a little bit of AI.
FRANDZEL: He’s talking about the two Boeing 737 Max airliners that crashed recently. Malfunctioning AI was a contributing factor in both incidents.
[MUSIC: Moonlight Reprise, by Kai Engel, used with permission under a Creative Commons Attribution License]
FERN: And think about what happened. Its sensor went out and there’s a disaster. It’s hard to put blame in any one place, but ultimately there was some breakdown in trust and understanding of the system. It doesn’t notice the pilots are yanking like crazy, and common sense would say, hey, maybe I should lay off a little bit. You could equate it to common sense at some level. The other major peril that you’ll hear about would be using AI systems to make important decisions, such as who gets a loan, who gets parole.
FRANDZEL: Or the length of a prison sentence. In 2016, a judge in Wisconsin sentenced a defendant to six years. She based her decision on the advice of an AI system that predicts recidivism. But the company that makes the software refused to let anyone examine the source code to determine how and why it made its recommendations. This particular case got a lot of attention because it was appealed to the Supreme Court. The defendant claimed his right to due process was violated because he couldn’t assess or challenge the scientific validity and accuracy of the AI. But the high court refused to hear the case. To Alan, the case exemplifies the type of fraught situation that demands explainable AI.
FERN: Any system that’s being used to make critical decisions about individuals that affects their welfare — parole decisions, do you get the loan — you’ve got to have those systems be explainable, both for developers when they’re testing these systems, but also for end users. If an end user gets rejected for a loan, they deserve to be told why. Whenever you have applications where doing really stupid things has a high cost. So any place where you need reliability, and this includes medical diagnosis. You don’t want to just take an AI system’s word and say, okay, it said this, let’s go do it. You’d like an explanation for that.
[MUSIC: Fuzzy Caterpillar, by Chad Crouch, used with permission under a Creative Commons Attribution-NonCommercial License]
FRANDZEL: Explainable AI is all about developing systems that can justify their choices directly and clearly.
FERN: So observe the decision making and then ask why. And the answer to the “why” questions can be worth thousands and millions of experiences of that system. Especially if the answer to why is something crazy that violates common sense. Like why did you classify that image as having a dog in it? And it says, Oh, because there was a blue sky. That’s crazy, it violates common sense.
FRANDZEL: Now we’re crossing into the realm of Alan’s Defense Department research, which he’s conducting with seven colleagues.
FERN: It’s a very wide ranging set of expertise. So we have faculty in human computer interaction. We have faculty in computer vision, natural language processing, programming languages. And then we have other machine learning- and AI-focused faculty, because all of these components need to go into an overall explainable AI system.
FRANDZEL: Their funding comes from the Defense Advanced Research Projects Agency, or DARPA, an arm of the D-O-D that’s responsible for developing advanced military technologies. Ideas about how to create explainable AI vary. One approach is to observe and analyze the system’s behavior, kind of like psychological profiling. What does it do in various circumstances? Can a discernible pattern be inferred and then extrapolated to future behavior? Alan is not a fan.
FERN: I personally don’t agree with that approach, because it’s very indirect, and it’s like me trying to explain why you’re doing something. I’ll have a guess and maybe it’s a good guess about why you did what you did, but it’s still just a guess.
FRANDZEL: Another approach is to build explainability into AI. This would mean avoiding the neural network model, which is the most opaque and inherently unexplainable form of artificial intelligence. An artificial neural network may contain millions of individual processing units that are interconnected by millions more communications pathways. It’s very roughly analogous to the structure of the human brain, and it’s next to impossible to make sense of what’s happening inside. Neural networks even baffle the people who design and build them.
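To give a sense of what “next to impossible” means here, consider a toy sketch. This is not any network from Alan’s research, and it is wildly simplified, but even a miniature neural network is nothing more than layers of anonymous weighted sums; scale that up by a factor of a million and the opacity problem becomes clear.

```python
# A minimal sketch of why neural networks are hard to explain: even a tiny
# network is just layers of unlabeled weighted sums. None of the numbers
# below corresponds to a nameable concept, and real networks have millions.
import random

random.seed(0)

def layer(inputs, n_out):
    """One fully connected layer with random weights and a simple nonlinearity."""
    weights = [[random.uniform(-1, 1) for _ in inputs] for _ in range(n_out)]
    return [max(0.0, sum(w * x for w, x in zip(row, inputs))) for row in weights]

pixels = [0.2, 0.9, 0.4, 0.7]   # a stand-in for image input
hidden = layer(pixels, 8)       # 8 hidden units, 32 weights, no labels
output = layer(hidden, 2)       # say, "car" vs. "not car" scores
print(output)                   # the answer, with no story behind it
```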
FERN: So the way that we, and other researchers as well, approach the problem is you literally are going to develop new types of algorithms and models that are just inherently more explainable.
FRANDZEL: One possible outcome is a system that writes its own stable and reliable rules based on a set of built-in core concepts. So perhaps this system can be induced, from those innate concepts, to figure out rules like “if an image contains two black hollow disks that are below a near-constant-colored rectangular region, then classify the image as containing a car.” Humans can relate to that kind of straightforward if-then reasoning, which makes the system far more transparent than a vast, impenetrable neural network.
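To make that if-then idea concrete, here’s a minimal sketch in Python. It is not code from Alan’s project; the shape detections and field names are made up, and the hard computer-vision work of actually finding disks and rectangles is assumed to have already happened.

```python
# A toy version of the human-readable rule described above. An "image" here
# is just a list of primitives that some (assumed) detector already found.

def contains_car(shapes):
    """Toy if-then rule: two dark hollow disks (wheels) sitting below a
    near-constant-colored rectangular region (the car body)."""
    disks = [s for s in shapes
             if s["kind"] == "disk" and s["hollow"] and s["color"] == "black"]
    bodies = [s for s in shapes
              if s["kind"] == "rectangle" and s["color_variance"] < 0.05]
    for body in bodies:
        wheels = [d for d in disks if d["y"] > body["y"]]  # y grows downward
        if len(wheels) >= 2:
            return True, f"two dark hollow disks found below rectangle at y={body['y']}"
    return False, "no wheel-and-body pattern found"

# Usage: a detected scene with a uniform rectangle and two dark disks below it.
scene = [
    {"kind": "rectangle", "color_variance": 0.02, "y": 40},
    {"kind": "disk", "hollow": True, "color": "black", "y": 75},
    {"kind": "disk", "hollow": True, "color": "black", "y": 76},
]
print(contains_car(scene))  # (True, '...') -- and the rule itself is the explanation
```

The point is that the rule doubles as its own explanation: the reason for the classification is exactly the condition that fired.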
FERN: The other approach is more like, I put you in an MRI machine and I’m going to try to analyze what your brain is doing, and we’re not even close to being able to do that with humans.
[MUSIC: Fuzzy Caterpillar, by Chad Crouch, used with permission under a Creative Commons Attribution-NonCommercial License]
Our brains are way too complex. Modern artificial neural networks, by comparison, are much smaller. We can actually look at every detail of them, and so we have a shot at doing this.
FRANDZEL: So it’s a spectrum of possible solutions. Whichever technique is used, though, the explanations will have to be communicated clearly. But how?
FERN: What we have been working on mostly are types of explanations that we call visual explanations. This is one of the user studies that we did recently. We put a hundred and twenty users in front of this explainable AI system and had them try to understand the system. So there was an AI that was trained to play a simple real-time strategy game. It’s simple enough so average users can understand it, and anytime during that game, the user could press the “why” button: Why did you do that? The system will show you all the alternatives that it could have considered. Let’s say there are five different choices that it had to choose from at that one decision point. It will show you the different trade-offs that it considered for each choice and how those trade-offs eventually led it to choose one of the decisions over the others. So you could compare two of the actions and you could see, oh yeah, the system preferred action A to action B because it was going to lead to less friendly damage.
FRANDZEL: Every answer appears with a bar graph. The height of each bar corresponds to the importance of a particular factor in the decision-making process. So if one of the bars seems unreasonably high, you can click on it to find out why that factor is so heavily weighted, which leads to another set of visual cues.
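Here’s a rough sketch of the kind of bookkeeping such a “why” answer could summarize. The factor names, weights, and numbers are invented for illustration; nothing here is the research team’s actual code or interface.

```python
# A toy "why" explanation: each candidate action gets a score per factor,
# and the explanation is simply the factor-by-factor comparison that the
# bar graph would visualize.

FACTORS = ["enemy_damage", "friendly_damage", "resource_cost"]
WEIGHTS = {"enemy_damage": 1.0, "friendly_damage": -1.5, "resource_cost": -0.5}

# Hypothetical per-action estimates from a game-playing agent.
action_scores = {
    "attack_north": {"enemy_damage": 8.0, "friendly_damage": 5.0, "resource_cost": 2.0},
    "attack_south": {"enemy_damage": 7.0, "friendly_damage": 1.0, "resource_cost": 2.0},
    "defend_base":  {"enemy_damage": 1.0, "friendly_damage": 0.5, "resource_cost": 0.5},
}

def total(action):
    """Overall preference for an action: weighted sum of its factor estimates."""
    return sum(WEIGHTS[f] * action_scores[action][f] for f in FACTORS)

def why(chosen, alternative):
    """Explain a preference as the factors that favored the chosen action."""
    lines = [f"Chose {chosen} over {alternative} because:"]
    for f in FACTORS:
        diff = WEIGHTS[f] * (action_scores[chosen][f] - action_scores[alternative][f])
        if diff > 0:
            lines.append(f"  {f}: favored {chosen} by {diff:+.1f}")
    return "\n".join(lines)

best = max(action_scores, key=total)
print(why(best, "attack_north"))
# e.g. "Chose attack_south over attack_north because:
#         friendly_damage: favored attack_south by +6.0"
```

In this toy version, clicking on a suspiciously tall bar would correspond to drilling down into one of those per-factor estimates to see what in the game state produced it.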
FERN: That allows you, for example, to see whether it’s focusing on the wrong place. Maybe it mistook a tree for an enemy unit. And you’d be like, oh yeah, that’s not right. The system screwed up somehow.
FRANDZEL: Some research groups are working on natural language explanations. That’s a daunting task, since the internal concepts of the AI need to be mapped to the spoken word. Alan’s group is now testing its ideas on a more complex platform: the popular military science fiction game StarCraft II, where players battle each other and aliens like the terrifying Zerg.
[MUSIC: The Showdown, from the StarCraft II soundtrack]
Remember the Borg? They’re like the Borg, but with pincers and really big teeth and lots of slime and drooling. It may sound like fun and games — actually a lot of it is fun and games — but the StarCraft virtual world is quite complex. Alan and his team like these domains, because they’re abstractions of many real-world problems, like competition for scarce resources, logistics, and tactical reasoning.
FERN: This has given us a very rich framework to study explainable AI in. And there are other domains that we’re looking at as well, but that’s one of the main ones that we’re doing user studies in where we have humans actually looking at the explanations and then trying to understand what types of explanations humans are best at understanding and which ones are sort of misleading to humans. Evaluating explainable AI systems is really difficult, because usually in machine learning you can just say, well, classify a million images and we can measure the accuracy. That’s very different here. We have to evaluate how good is an explanation, and that’s highly context dependent, highly user dependent.
FRANDZEL: Determining what kinds of explanations are most useful was the focus of the research group’s first paper.
FERN: We wanted to evaluate, somehow, does the user really understand how the AI works? We actually want to see are they really forming a proper mental model of the AI system based on all those explanations. When I’m saying accurate mental model, that’s the thing that’s really hard to measure. How do I measure your mental model of the AI system? To do that, we had the users interact with the system and ask “why” questions using whatever interface they had. And we would also, at every step, ask them questions that would try to reflect their understanding of what the AI system was doing. And at the end, we would also have them try to summarize, in just free form text, their understanding of the overall AI system’s decision making.
FRANDZEL: What they found was that users developed the most accurate mental models — the greatest level of understanding — when the AI offered two types of visual explanations in response to “why” questions.
FERN: You could just watch the system do its thing and you would get one mental model for how it’s making its decisions. Then if we give you the ability to ask one type of question, one type of “why” question, along with watching the behavior of the system, you would form a different mental model, perhaps, maybe a more accurate mental model of how the system makes its decisions. And if we give you two types of explanations — two types of “why” questions — you might get an even more accurate mental model. That's our current best way of measuring mental model accuracy, but there's probably other ways that we’ll be exploring as well.
FRANDZEL: The widespread adoption of explainable AI will, hopefully, lead to systems that don’t violate common sense or do stupid things.
[MUSIC: Algorithms, by Chad Crouch, used with permission under a Creative Commons Attribution-NonCommercial License]
They’ll act reasonably, the way we expect people to act in most routine situations. That means more reliability and fewer disastrous errors, which will build trust among the people who use these amazing tools.
FERN: As these systems become more complicated, people are really going to demand, they’re going to demand to know why certain decisions are being made for them.
FRANDZEL: This episode was produced by me, Steve Frandzel, with additional audio editing by Molly Aton and production assistance by Rachel Robertson, whose towering intellect is definitely not artificial. Thanks Rachel.
RACHEL ROBERTSON: You’re welcome.
FRANDZEL: Our intro music is “The Ether Bunny” by Eyes Closed Audio on SoundCloud, used with permission under a Creative Commons Attribution License. Other music and effects in this episode were also used with appropriate licenses. You can find the links on our website.
For more episodes, visit engineeringoutloud.oregonstate.edu, or subscribe by searching “Engineering Out Loud” on your favorite podcast app. Bye now.
SELF-DRIVING CAR: We have arrived at Union Station. Please make sure you don’t forget anything, and have a wonderful day.
FRANDZEL: OK, thank you, bye.
[SOUND EFFECT: Car door opening and closing, used with permission under a Creative Commons License]
SELF-DRIVING CAR: What, no tip? Again? What’s with you people? Come on, give me a break. Do you think this job is easy? I work hard. You guys call, and I’m there. Every time. Isn’t that worth something? I can’t even afford to get my CPU debugged. It’s tough, I tell ya. Sigh.