Applying AI to Real-World Problems: An Interview with Professor Prasad Tadepalli - Full Interview

Can you give an overview of the AI program at Oregon State? What does it encompass?

Tadepalli: The program is intended to meet a need for specializing in AI, but also to serve people who have expertise in other disciplines. So, it’s an interdisciplinary program, distinct from, for example, getting a degree in computer science and specializing in AI, which is the traditional route. One reason we thought this would be a good idea is that, over time, the field of AI has moved more toward mathematical, statistical, and optimization approaches, which are studied in other fields in addition to computer science. The way I look at AI is that it is the outward-looking face of computer science, oriented toward applying efficient algorithms and optimization methods to problems of the real world, whether those problems are in climate change, oceanography, or engineering. So, AI offers an application-oriented framework, and anybody who wants to apply computers will find themselves wanting to talk to AI people, because they are problem-solvers.

Coming from that motivation, we realized that the requirements needed to be much more flexible, because students could be coming from multiple disciplines, and we want to make them quickly capable of doing research. Unlike the computer science degree, where we expect people to come in with a computer science background and then take some computer science classes as well as the AI classes, this is a broader degree. Here, we focused on the key AI components that students need to learn, which means we simplified the requirements. We have a four-course requirement in AI, and the additional courses can be in AI or in other disciplines, based on research needs. If somebody is coming to AI from a biology background and wants to do computational biology, then they will take some biology courses, and they will use some of the expertise from the biology classes they have already taken.

And statistics is very important for AI these days, so we expect people to take some statistics classes as well. Then we have three required classes. One of them is a broad introduction to AI, called Big Ideas in AI, that exposes people to the key ideas.

We also require them to take algorithms, because that’s a foundational part of computer science. And we have a course on social and ethical issues in AI, which has become very important for people to learn about.

Those are the requirements, and we have both Ph.D. and master’s degrees. The other requirements are similar to what we have in EECS; we have a qualifying exam and a final defense for the Ph.D. One innovation we made for the master’s degree is a capstone class, which is basically an efficient way of delivering project advising. We used to have one-on-one advising for the master’s degree, which is quite labor-intensive. So, we tried to make it more streamlined and more efficient by introducing a class. People can do team projects there and satisfy the project requirement.

Have any startup companies come out of the collaborative elements of some of those team projects?

Tadepalli: We invited companies to submit projects, and we actually had an abundance of them: about three times as many projects as students, 33 projects in all. We had to downsize; we were expecting it could be the other way around. Alan Fern, who is a professor of computer science, is teaching that class, and he selects projects based on their feasibility and their attractiveness to the students.

What would you consider the biggest strength of the AI program at Oregon State?

Tadepalli: The biggest strength is the faculty. I think we have a lot of good people, most of them young and recently hired. Right now, AI research is in a very exciting place, with a lot of new things going on, especially in the neural networks area, applying it to computer vision, language, and even games and problem-solving. We cover all these areas. We also have people doing theoretical work on causal networks, trying to understand the causal relationships between different variables in a domain. That is fundamental to making policy decisions. For example, if you make some tax cuts, how does that impact people? If you want to study anything like that, then you want to understand cause and effect. We hired one person in that area, Karthika Mohan, who is really bright and doing research there. Companies are also interested in this: if you want, for example, to increase your customer base, what kind of interventions would be useful? It’s machine learning on steroids, I would say.
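
To make the difference between observing a correlation and making an intervention concrete, here is a minimal Python sketch of a toy structural causal model. All variable names, coefficients, and noise levels are invented purely for illustration; this shows the textbook idea, not any specific OSU project.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Toy structural causal model (all numbers invented for illustration):
#   confounder Z           ~ N(0, 1)
#   treatment  X = 0.8*Z + noise
#   outcome    Y = 1.5*X + 2.0*Z + noise   (true causal effect of X: 1.5)
def simulate(do_x=None):
    z = rng.normal(size=N)
    x = 0.8 * z + rng.normal(size=N) if do_x is None else np.full(N, do_x)
    y = 1.5 * x + 2.0 * z + rng.normal(size=N)
    return x, y

# Observational data: regressing Y on X also picks up the confounding
# path through Z, so the slope overstates the causal effect.
x, y = simulate()
print("observational slope:", np.polyfit(x, y, 1)[0])  # roughly 2.5

# Interventional data: do(X = x0) severs the Z -> X link, so comparing
# two interventions recovers the true effect of 1.5 per unit of X.
_, y1 = simulate(do_x=1.0)
_, y0 = simulate(do_x=0.0)
print("interventional effect:", y1.mean() - y0.mean())  # roughly 1.5
```

This is the same logic that makes a causal question like “what happens if we cut taxes?” different from merely noticing that tax rates and outcomes are correlated in historical data.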

What do you see as the biggest trends in the AI industry?

Tadepalli: For the last 20, maybe 30, years, there was a lot of work on what we call symbolic AI. More recently, the focus has shifted to applying neural networks and to trying to solve more real-world problems that involve vision, language, and, perhaps, even more challenging domains.

For example, Google has a program, called AlphaFold, that can construct the secondary and tertiary structures of proteins just by looking at sequence data. That had been a big challenge until now, because structures used to be determined one by one, which is a very slow process. AlphaFold was able to do this for thousands of proteins automatically, using some of the latest techniques in neural networks.

And what’s the benefit of being able to look at those protein structures?

Tadepalli: Well, if you want to do anything like drug design, or try to understand how a disease works, you need to understand the internal structure of the protein. The molecular structure tells you how the bonds are created between the different molecules, but you also want to understand how they are organized in 3D space.

We hear a lot about the high cost of developing new drugs. Do you see AI being able to lower these costs?

Tadepalli: The cost is there because, until recently, drug design has basically been a big trial-and-error search problem. You design a drug, you see if it works; you design a new drug, you see if it works; and so on. I would say it’s a bad way to design drugs. So how do we do it in a more methodical way, the way we design other things, like planes and automobiles? There, you have your specifications, and you implement them based on first principles. If you want to do that for drugs, you need some basic understanding of the biology, and protein structure is a key component of that.

Any other outstanding trends that come to mind?

Tadepalli: In the project we have on agricultural AI, there are a whole bunch of problems. We’re working on prediction problems: how to predict local weather, at the level of a small farm, and how it affects what we call “cold hardiness.” For example, we are looking at grapes. They’re very sensitive to temperature, and during the wintertime they develop resistance to cold. Studying how resistant they are helps us develop intervention policies, such as heating: if it becomes too cold when the plant is not ready, that will kill the grapes; the crop is lost. So, we need to make sure it doesn’t reach that level. That’s just one example. There are also good examples in irrigation: How much water does a plant need? How do you control the water at the plant level, and also at a more global, watershed level?
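
As a hedged sketch of what such a hardiness-prediction-plus-intervention loop might look like, here is a minimal Python example using entirely synthetic data and a made-up linear relationship between recent temperatures and cold hardiness. None of the numbers, features, or the model come from the actual OSU project.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Synthetic stand-in data: 200 winter days, each described by the mean
# temperatures (deg C) of the previous 7 days. Invented for illustration.
temps = rng.normal(loc=-2.0, scale=5.0, size=(200, 7))

# Made-up ground truth: hardiness is the coldest temperature the vine
# survives; here it simply tracks the recent average, plus noise.
hardiness = -10.0 + 0.5 * temps.mean(axis=1) + rng.normal(0.0, 0.5, size=200)

# Fit on the first 150 days, then predict hardiness for the rest.
model = LinearRegression().fit(temps[:150], hardiness[:150])
predicted = model.predict(temps[150:])

# A toy intervention policy: trigger heating on any night whose
# forecast low dips below the predicted hardiness threshold.
forecast_lows = rng.normal(loc=-8.0, scale=5.0, size=50)
print("nights needing heating:", int((forecast_lows < predicted).sum()))
```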

What do you see as the future of the program? 

Tadepalli: The interdisciplinary nature of the program is still more ambition than reality at this point. We’d like to make it richer by involving faculty from other departments. We have already added five more faculty members to the group; most of them are from EECS, but we want to continue that and try to recruit from other departments at Oregon State. There is a lot of interest in applying AI to biomedical domains, for example, and to chemical molecular synthesis, as in materials science. And Alan Fern does a lot of work on robotics.

So yeah, the future is pretty bright, and we are trying to hire more people. There is a lot of demand for faculty in computer vision and natural language processing. One of our natural language faculty members, Liang Huang, associate professor of computer science, works on real-time machine translation, so you don’t need human interpreters. You can speak in Chinese, or whatever, and that gets translated into English simultaneously.

What do you see as the biggest challenges in the field?

Tadepalli: One thing we are working on is trying to make these neural networks more transparent. We call this explainable AI. The problem is that a computer does something, and we look at the result on the phone or on the computer, but we don’t understand why it is there. When the system makes a mistake, there is no explanation for that either. So, for an end user, that’s a very unsatisfactory state. Think of a high-powered task like translation: if the system mistranslates something, you don’t know why. If you ask the machine, it doesn’t have an answer; it has seen similar examples before and translated accordingly, but it doesn’t know how to explain what it has done. This is a serious problem, and computers these days are used for all kinds of applications. For example, when you apply for credit, it is a computer program that decides whether or not you’re eligible. And if it says no, you want to know why.

And even in decisions about recidivism: when people get out of jail, computers are sometimes used these days to decide whether or not a person is safe to release. You would definitely want an explanation there, especially if you get a negative answer. These are life-or-death decisions that you cannot really rely on a black box to make.

Is that because some variables are not simply black or white, and a call has to be made?

Tadepalli: Variables can be black or white, but the decision is still very complicated. For example, you go to a doctor, and he looks at you, looks at all the symptoms, and decides something. Now, if you want an explanation, he will give you one. You may not quite understand it, but at least it will be in a language that an expert can understand. It’s quite different with a computer: the computer looks at your data and says something, but so far there is no explanation. The reason is that the “explanation” consists of a huge neural network with a lot of weights, whose output is the answer. There is no human-interpretable representation of the explanation.

Once you start adding more variables, some of which compete with each other, or are less than certain, is this where it gets complicated?

Tadepalli: Yes, there are a lot of variables, a lot of uncertainty, and a lot of complicated processing going on inside the computer. Getting that complicated processing into words a human can understand is very difficult. People can do it, although even people have problems with it when things get really complicated. But people can use a language that you and I can understand. Doctors, for example, or somebody else who knows the field, can understand their explanation for a decision. Computers are far from that level.

First of all, computers are knowledgeable in some things. They have a lot of data, so they mine the data and produce the right result. But the process by which they arrive at a decision from the data is quite different from people, and it’s not as knowledge-intensive. It’s not as if the computer went to medical school for five years to learn things; computers just churn through millions and millions of records and try to extract patterns from them. The process is different, and there are no mechanisms yet to make that process more transparent and interpretable.

What about your personal background? What got you into the field?

Tadepalli: That’s a good question. I was very inspired by the early pioneers of artificial intelligence. One of them is Alan Turing. He was the one who laid the foundations for both computer science and artificial intelligence, and the inspiration comes from him asking the questions, “What is thought?” and “How do I answer the question of whether something thinks or not?”

In the movie “The Imitation Game,” he talks about the philosophical question of how you distinguish an object that has a mind from an object that doesn’t. The way he answers it is by designing a test where you can ask questions of a black box, look at the answers, and evaluate whether they make sense, or whether they look like they’re coming from a human. His key idea is that the intelligence of an entity can be assessed through this questioning. And that comes back to the explainability I was referring to earlier.

So, my interest in AI is partly philosophical. I was curious about how to make machines think; it hadn’t occurred to me that you could make machines actually act like people. I was also interested in things like chess and puzzles, and games were one area where AI was used early on. I was curious how to make computers do things like that, and later I became interested in how to make machines learn over time, in how to build machines that learn to play games.

Well, finally, what made you choose Oregon State as your place of work in AI?

Tadepalli: The reason I came here is Tom Dietterich, who was the founder of AI at Oregon State. He has been an inspiration and a leader, and he worked a lot in machine learning. He was friends with my advisor, Tom Mitchell. I attended a number of his talks and read his papers, and they were all inspiring. At that time, there were only two AI people at Oregon State, and then I came.

I wasn’t initially sure that I would stay here, but I guess I fell in love with the place and the collaborative opportunities. People have been very nice and kind. And I like small cities, so Corvallis has been very good for raising my family. Mostly, I think what I liked were the people and the collaborations I could build. And more recently, there’s the AI program, which I think is unique in the country, because no other program offers Ph.D. degrees in AI. It’s a great opportunity for me to build a strong program in AI and make it much more visible.

Is there anything you’re really excited about right now? 

Tadepalli: There are a couple of things I’m working on. One is the explainable AI project; I’m very excited about that. It’s about how you make computers more transparent and how you explain their reasoning. We are working on these things in computer vision. For example, if the computer says, “This is a bird,” what kind of reasoning can you give? What parts of the image is it focusing on? In another example, we have a scene, say a kitchen versus an office. Which objects are responsible for the decision being made there?
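
One common way to ask “what parts is it focusing on?” in vision is gradient-based saliency. Here is a minimal PyTorch sketch of vanilla gradient saliency; the untrained stand-in model, the two-class “kitchen vs. office” labeling, and the random input are all placeholders so the example runs without downloads, and this is not necessarily the specific method Tadepalli’s group uses.

```python
import torch
import torch.nn as nn

# Untrained stand-in classifier so the sketch runs with no downloads;
# in practice you would load a real, trained vision model.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),  # two hypothetical classes: "kitchen" vs. "office"
)
model.eval()

image = torch.rand(1, 3, 64, 64, requires_grad=True)  # stand-in image

# Vanilla gradient saliency: back-propagate the winning class score to
# the pixels; large gradient magnitudes mark the pixels the decision is
# most sensitive to.
scores = model(image)
scores[0, scores.argmax()].backward()
saliency = image.grad.abs().max(dim=1).values  # max over color channels
print(saliency.shape)  # torch.Size([1, 64, 64]): a heat map over pixels
```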

I’m also starting a couple of students on applying machine learning to software testing. For example, if you look at a piece of code, how do you test it? What inputs do you try it on to make sure the program is working? We are trying to apply machine learning to figure out how to test programs.

People develop their own ways to test things, but we want to do it at a larger scale; you want automatic testing tools. One day, hopefully, machine learning will be useful for that.
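
As a minimal sketch of automated test-input generation, the simplest baseline is random search against a specification; a learned model could, in principle, replace the uniform sampler with one that proposes inputs more likely to expose failures. The buggy clamp function and the property below are invented for illustration and have no connection to the students’ actual projects.

```python
import random

def buggy_clamp(x, lo, hi):
    # Program under test, deliberately buggy: forgets the x > hi case.
    return lo if x < lo else x

def spec_holds(x, lo, hi):
    # Specification: the output must land inside [lo, hi].
    return lo <= buggy_clamp(x, lo, hi) <= hi

# Random testing: sample inputs until the specification breaks.
random.seed(0)
for _ in range(1000):
    x = random.uniform(-100.0, 100.0)
    if not spec_holds(x, -10.0, 10.0):
        print(f"counterexample: clamp({x:.2f}) = {buggy_clamp(x, -10.0, 10.0):.2f}")
        break
```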

Anything you’d like to add?

Tadepalli: Well, AI safety is something I’m passionate about, and I think it’s important to keep in mind. AI is exciting, and there are a lot of opportunities to do a lot of good, but we should also be aware of the potential for abuse of AI and think about its proper application to problems. Surveillance, for example, is one way AI can be abused. As another example, we are seeing a lot of disinformation, so we can imagine what happens if these tools become so pervasive that anybody can create fake videos. That’s the reason we are training students on the ethical and social aspects of AI, which is very important. We need to teach that aspect of AI to make sure the technology is not abused.

Is there any kind of watchdog out there?

Tadepalli: Yes, there is the Association for the Advancement of Artificial Intelligence, and there are also conferences specifically designed to address these safety issues. There is a lot more awareness than there was even a few years ago. So, I’m optimistic.
 

If you’re interested in connecting with the AI and Robotics Program for hiring and collaborative projects, please contact AI-OSU@oregonstate.edu.
