
The talk presented in this podcast, “Where do Ethics Belong in Artificial Intelligence?”, explores how philosophers and engineers think about ethics in artificial intelligence. It was presented at Oregon State University by Houssam Abbas, assistant professor of electrical engineering, and Alicia Patterson, assistant professor of philosophy, as part of an AI seminar series.
ALICIA PATTERSON: So even if no harm comes to a particular person, so, for example, even if I am not, say, negatively impacted, maybe I'm never detained by the police or wrongly identified, we can still see that something has seemingly gone wrong here.
HOUSSAM ABBAS: Sometimes the effective function of the use of AI can be to focus attention on the technology and away from how it's used by whom and to what ends. The ethical considerations don't start at the point after you've deployed the system and now you're seeing some results. The ethical considerations ought to come early on. And so, no one can hide behind their hand and say, oh, I didn't see this coming.
[MUSIC: “The Ether Bunny,” by Eyes Closed Audio, licensed under CC BY 3.0]
RACHEL ROBERTSON: From the College of Engineering at Oregon State University, this is Engineering Out Loud.
Hi everyone. I’m Rachel Robertson. Today, on Engineering Out Loud, you will hear more from Alicia Patterson and Houssam Abbas. They are both assistant professors at Oregon State University studying ethics in AI, but from very different perspectives. Alicia has her Ph.D. in philosophy, and her research focuses on the ethics of emerging technology, especially privacy and data ethics. Houssam has a Ph.D. in electrical engineering, and his research interests are in the verification and control of cyber-physical systems and formal ethical theories for autonomous agents like unpiloted ground and aerial vehicles. They designed the talk, entitled “Where do Ethics Belong in Artificial Intelligence?”, for a general audience presentation at OMSI, the Oregon Museum of Science and Industry; this recording is from a version given at Oregon State University. Now, enjoy a deep dive into how philosophers and engineers are thinking about ethics in artificial intelligence.
ABBAS: Where do ethics belong in artificial intelligence? Let's start with a story. The story begins in the 1950s in the psychiatric ward of Bellevue Hospital in New York, where Bill McPhee was committed. While committed there, he developed his idea for a voter prediction machine. The point of it was, and here I quote from Jill Lepore's book, “If Then: How the Simulmatics Corporation Invented the Future,” to create, literally quote, “a set of rules of thinking so that it is possible to reproduce or closely approximate the thinking behaviors of voters.” Bill McPhee was then released from the psychiatric ward at Bellevue and finished his dissertation at Columbia University about a program to create a fully observable electorate. You would feed the machine massive amounts of microscopic data about voters and issues, and it would be possible to assess the effect of any proposed course of action by a political candidate based on the likely reactions of the voters. Ed Greenfield, who was an ad man, was one of the key founders of the Simulmatics Corporation, which monetized the ideas of Bill McPhee and sold them.
He put it like this. He said, suppose that a candidate was proposing to make a major civil rights speech in the South. Remember, this was the early sixties. We would be able, quote, “through our model to predict what the meaning of that speech would be to over a thousand subgroups of a given population, and thus be able to predict even the smallest percentage difference in votes that such a speech would have.” The Simulmatics Corporation actually sold their services to the Kennedy campaign. Kennedy then went on to win the presidential election that year. Now, it's not clear how much influence they actually had, because the recommendations that Simulmatics gave the campaign were largely seen to be common sense and would have been adopted anyway. But I think you can already see how that foreshadows the general theme of predicting users, voters, people, and then controlling them.
Cambridge Analytica basically sold a more sophisticated version of what Simulmatics was trying to sell the Kennedy campaign, to the Trump campaign in 2016 and to the Brexit Leave campaign in 2015. And the reason I start with this anecdote is to cut directly to what is perhaps the biggest point of unease some people feel when they fret about AI and ethics: the notion that human behavior is predictable and then controllable. In particular, most of you have probably heard of the Facebook experiment on basically playing with and controlling users' emotions based on what bubbles up in their feeds.
Now, if users or people are predictable, at least in the aggregate, and controllable, then why not program them into robots? And that brings us to the notion of embodied intelligence, embodied artificial intelligence. One half of this image is a humanoid copy of its own inventor. I'll give you a couple of seconds to try to guess which is the humanoid and which is the flesh-and-blood human, right? This is the Geminoid DK, by a Danish inventor whose name I now forget. Oh, it's in my notes: Henrik Schärfe.
So hopefully this gives you some of the corner cases when discussing AI and ethics, in the way that they relate mostly to us, like the notion that we might be predictable and controllable. If you want to know what researchers are actually working on, you can literally pick up any issue of Communications of the ACM from the last couple of years, and there will be an article in there about AI and ethics, not ethics and CS in general, but specifically AI, from bias in credit rating, to certain problems with computer vision systems, and so on and so forth. So rather than give a laundry list of that, I'll hand it over to Alicia to help us think through a framework for these problems. Starting with a cautionary note that ethics is the hottest product in Silicon Valley's hype cycle today. So, in order to step away from the hype and perhaps think about it more productively: Alicia.
PATTERSON: Okay, so philosophers have a set of tools to think about ethical problems that has been our bread and butter for basically forever.
So, if I, say, deny someone the right to vote, I've disrespected an important value and I've denied them something they're owed, right? As a citizen, right, they're owed the right to vote. Perhaps, you know, if the person is not in a swing state, their vote maybe doesn't make much of a difference. So maybe I didn't harm them very much, but we can still see that I've done something wrong to them, right? I've denied them something that they're owed. So, harms and wrongs usually go together, but it's important to understand their different contours to be able to fully map ethical problems. So, some AI harms: when we originally made this talk, this wasn't such a big thing in the news, but Cigna, a health insurance company, uses algorithms, right?
To sort of batch-deny claims. And doctors are supposed to go through and review a patient's file, right? And see if the claim is a medical necessity. And according to research by ProPublica, doctors were spending an average of 1.2 seconds on each case. I don't know about you, but I feel like I could speed-read a little bit, but 1.2 seconds on a very serious medical decision seems pretty bad, right? So, in this instance, we get an algorithm, right, that caused a particular kind of harm. It probably denied thousands of people certain procedures. That means patients, doctors, hospital staff have to do all of this extra labor to get the medical care that they deserve. And some people just might not be able to fully work the system, and then give up and have to pay for things out of pocket.
So here we can see the way in which the system made people worse off; it harmed them. In contrast, a wrong may not necessarily involve a concrete harm, right? An example of this is the use of facial recognition by law enforcement. One of the wrongs that we see is that people, a lot of times in AI systems, are treated as mere data in large data sets. And data is scraped en masse from the internet, right? This is all known to us. And there's no regard for fundamental values about how these people might want to be treated, right? In this instance, the value of privacy. So, this is a really interesting read if you wanna check it out, put together by researchers at Georgetown. Researchers at Georgetown found that one in two Americans have their faces in data sets that are available to the police. And this is the exact kind of thing that we should worry about, right? That we're infringing on important values like democratic freedoms or privacy. So even if no harm comes to a particular person, so, for example, even if I am not, say, negatively impacted, maybe I'm never detained by the police or wrongly identified, we can still see that something has seemingly gone wrong here.
Injustice usually sits within the scope of wrongs, because one of the things we want to pick out about injustice is that it's a certain form of disrespect. So, when someone's committed an injustice, the value or principle that has been violated is the value or principle of fairness or equality. Oftentimes, injustice involves both a harm and a wrong, right? So, you're treated unfairly and your life is made worse. So, if you are, say, denied a promotion on the basis of your race or gender, we can say you're worse off. You don't have the income that you would've had, right? The career opportunities you would've had. But also, you've been disrespected, right? You've been treated in a way that you shouldn't have. Actually, let me give an example of injustice first.
So, an example of injustice; there are too many to really even pick which one to talk about. But one example of automated bias, right, is racial disparities in credit lending. A study by faculty at Stanford and the University of Chicago looked into alternative models of credit scoring and found that low-income and minority groups were somewhere between five and 10% less likely to receive a loan than high-income or non-minority groups. So here we can see, right, being denied a loan on the basis of your race is both a wrong and a harm. You don't get the house, and you're treated in a way that you should not have been treated. So, this is sort of the background when we think about where ethics comes into conversations in engineering and design. This is my toy example of the design lifecycle.
Oftentimes, these questions about ethics, about harms and wrongs and things like that, come in after a product has been designed, research has been done on it for years, and it's either launched or about to be launched, and some problems are surfacing, and people ask questions about, you know, what sort of guardrails should be in place. And one of the problems with this is that it's sort of too little too late, right? When we ask questions about ethics, we should ask them from the very beginning, right? Because asking them from the beginning allows us to ask a series of questions about values through the entire design lifecycle, through the entire research process, right? And that allows us to ask a broader set of questions, rather than this narrow question of what sort of guardrails should we put in place?
Or what sort of use cases should this be prohibited or encouraged for? So, a variety of questions that we should ask are things like: before we even begin to develop this, is this technology the right solution to the problem at all? Science and technology scholars critique how rarely this question is asked, and they call this techno-solutionism, the idea that a technology can be the solution for these big social problems, such that we're often not sitting back and saying, is this the right way to solve this problem? The next question that needs to be asked, and I'll say I'm going through a series of questions, but this is by no means an exhaustive list: does the community want the problem solved in this way? Whose input do we even have in thinking about what the problem is?
So, in Design Justice, sort of a canonical text at the intersection between ethics and design, people have this phrase, nothing about us without us, right? So, do people who are impacted by the technology have any meaningful ways to shape the values and design of it? The third question, I guess we're on the third: we should look into the potential unforeseen consequences, again, before we've created a product or put a ton of research into a certain type of algorithm, right? So feminist technology scholars, for example, often point out that the first public adoption of a technology usually comes in the form of harm for women. So, one example, of many I could give you, is that when deepfakes were first open-source and available to people on the internet, they were used en masse for non-consensual pornography, or image-based sexual abuse.
So, here we can see that being made the subject of non-consensual pornography is something that women are significantly more likely to be subjected to. And so, we might have wanted the developers to have thought about that or grappled with that before they released this technology, right? So, these are the sorts of things that maybe we can solve for, prevent, early on. We also might wanna ask, is this technology compatible with important social goods, like democratic values, right? When we are creating something that, say, involves a form of surveillance, we might wanna understand, will this significantly impact, say, people's ability to freely assemble? Freedom of assembly is considered a core democratic value, a right that you need to be able to exercise to be able to exist in a democracy at all. We also might wanna ask, are we getting data in a way that respects important rights? As I mentioned before, a common practice in machine learning is just to do this mass web scraping. And we might wanna ask a variety of questions about that. Like, are we respecting people's right to, say, not participate in oppressive systems? Are we respecting their rights to privacy?
We might also wanna ask, are some people going to be more harmed by this than others, right? Maybe this question comes up the most when we're thinking about automating certain forms of labor. Who are the winners and losers? And who might be especially vulnerable to the impacts of a system? We wanna ask questions about bias before a product is launched. We wanna be able to grapple with that. We don't want what so often happens: something gets launched, and then the users who are negatively affected, who are of the group that the technology is biased against, are the ones saying, this doesn't work for us. This has happened again and again, right? So that's again something that we wanna catch early and often. We wanna ask, what are the potential misuses? For example, have we created a tool that's really amazing but also enables a lot of fraud? And lastly, then we can come to these questions about, say, guardrails. What sort of laws, regulations, policies, and norms need to be put in place to protect people or to ensure ethical use?
ABBAS: Thank you. So, within that framework, or against the background of that framework, we can look at two major approaches so far in engineering to tackle the question of how one, in a nutshell, programs certain ethical norms, or social norms more generally, into engineering systems. This is a very high-level, simplified view of that. One is utilitarianism; the other is non-utilitarianism, or, as it's sometimes referred to, deontology. I'll start with deontology, from the Greek “deon,” duty, and “logos,” science or knowledge of. And whereas most of us probably think the right thing to do is the thing that maximizes certain benefits, maximizes certain goods, deontology says no: there are certain moral norms that the moral agent ought to obey, regardless of the consequences, regardless of the amount of good that the action produces or the amount of bad that the action leads to.
So, deontology is a normative theory regarding which choices are morally required, forbidden, or permitted. Those are the three key categories. In contrast, for example, to moral theories which tell you what sort of person you ought to be, right? What makes a choice right is its conformity with a moral norm. So, if we adopt such an approach, and that's the perspective I take in my own research on algorithmic ethics, one then asks, how do we program duties into an AI agent, right? So, if you have statements of which choices are morally required, forbidden, or permitted, can we think of them as design requirements for a subsystem of this overall system that we're trying to create? So, you have your objective function, right? And this is the only math we're going to see, I think. You have your objective function: you want something to run fast, for example.
And then you have those things that are permitted, forbidden, and required, and you express those in some mathematical formalism. And the idea is that you use these mathematical formalisms to express these two different things. That's the point I wanna stress here: the performance objectives are not the normative constraints. Magic, magic, magic, it turns into zeros and ones, and then, lo and behold, Cassie can run a 5K without kicking a rabbit on the way, right? So, what is this mathematical formalism that allows us to express normative requirements? This, I think, most of us are, or ought to be, familiar with. We need basically a mathematical language that allows us to express such normative statements, which you and I can understand, because understanding is actually key here, right? An objective function with a thousand parameters is not something you and I, or anybody, understand, and whoever tells you otherwise is probably fooling themselves and you. So, we want something that you and I can understand, but which is also amenable to algorithmic, which is to say automated, processing and reasoning.
And the language most of us use is formal logic. You're all familiar with Boolean logic, but there's a zoo of logics out there, some of which are appropriate, or to some degree appropriate, for reasoning about normative statements. Once you make that move, then reasoning becomes a calculation, right? The dream of Leibniz was that when there are disputes among persons, we can simply say, let us calculate, right? You run your argument, I run my argument, and the machine tells us which argument is a sound argument, right? Does A follow from B? And so, without further ado, we can see who is right. We're nowhere near that, obviously. But that's the overall idea. Now, formal methods and formal logic are heavily used for safety purposes, for designing safe systems, especially in safety-critical applications like avionics, for example, right?
And in fact, any hardware company today makes heavy use of formal logic. But what we are talking about is using formal logics in order to formalize normative requirements, ethical and social requirements. There's some work I've done; there's work done by Matthias Scheutz at Tufts. Next week's AI seminar speaker, Adnan Darwiche, also makes heavy use of formal logic, for explainable AI in his case. And I think the center of gravity of this sort of work is still in Europe, especially in Vienna. So, if you're curious, you can look up that project that just launched, centered in Vienna but in collaboration with Maryland. And the last thing I wanna say here is that this formalism allows you to express and reason about normative requirements. It doesn't prevent you from doing performance optimization the way we usually do.
The question is how do you do the two together? So, you know, you can still look at graphs.
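[EDITOR'S NOTE: To make the split Abbas describes concrete, here is a minimal sketch; it is not from the talk, and the action names, the "rabbit" predicate, and the numbers are invented for illustration. It shows the basic idea that normative requirements act as a hard filter on what is permitted, while the performance objective only ranks the actions that survive the filter.]

```python
# Minimal illustrative sketch (not from the talk): performance objective
# vs. normative constraint. All names and numbers are hypothetical.

candidate_actions = [
    {"name": "sprint_path_A", "speed": 5.0, "crosses_rabbit_burrow": True},
    {"name": "sprint_path_B", "speed": 4.6, "crosses_rabbit_burrow": False},
    {"name": "walk_path_C",   "speed": 1.2, "crosses_rabbit_burrow": False},
]

def performance(action):
    # Performance objective: run as fast as possible.
    return action["speed"]

def permitted(action):
    # Normative constraint, stated as a logical predicate rather than a
    # term in the objective: harming the rabbit is forbidden, full stop.
    return not action["crosses_rabbit_burrow"]

# The norm filters the actions first; the objective only ranks what is left.
legal_actions = [a for a in candidate_actions if permitted(a)]
best = max(legal_actions, key=performance)
print(best["name"])  # -> "sprint_path_B": fast, but never at the rabbit's expense
```

[The point of the sketch is that the constraint is never traded off against speed: no amount of extra performance can buy back a forbidden action.]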
The other way of trying to program normative requirements is through utilitarianism, where, you know, the good is a function of your actions. You have lots of optimizers that you throw at this mysterious function, and it gives you the best action, which is supposedly the most ethical action because it produces the most amount of good, right? It goes only so far, in my view, because those are on some level inherently incommensurate things, right? So, let's take a very trivial example from this already old 2015 paper from Chris Gerdes, right? Say you want the car to get to its destination; that's a performance objective. Without collisions; that's a normative requirement, 'cause, you know, a car can collide with someone and keep going, right? So, it's not necessarily a performance requirement. While minimizing emissions; that's both performance and normative, because emissions are bad for people in general.
So, what does it mean to weigh all these? Like, how do you weigh these in meters? What sort of objective function are you gonna put all of these in? What sort of reward function are you gonna use in order to express all of these? Not to mention that sometimes an ethical requirement is not something you can choose more or less of; you just have to do it, right? And then you go into the multi-agent case, which complicates the picture. Nonetheless, there is in fact the AI alignment literature. Most of it is on the side of, let's see, how is it possible to design, for example, a reward function for an RL agent such that it avoids catastrophic outcomes? Among that literature is reinforcement learning from human feedback, right? Where the idea is, suppose you want, for example, ChatGPT to produce the ethically normed answers.
So what we do is we'll ask ChatGPT the same question 10 times. It's a stochastic machine; it'll give us slightly different answers 10 times, and then we give it feedback on which one is the best. And this way it tweaks its own parameters. Now, you might say, okay, sounds reasonable, but then, it's a market economy. So we'll say, oh, are we really gonna hire a thousand people to do this a thousand times? No. What do we do? We put another AI in there. So now you have an AI guiding another AI, and then you climb the AI ladder to nowhere.
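[EDITOR'S NOTE: As a rough illustration of the feedback step Abbas just described, here is a toy preference-learning loop. It is not OpenAI's actual pipeline; the feature names, scores, and update rule are assumptions for illustration only. A rater prefers one answer over another, and a simple reward model is nudged so the preferred answer scores higher, which is the core of the reinforcement-learning-from-human-feedback idea.]

```python
# Toy sketch of preference-based reward learning (illustrative assumptions only).
import math

# Toy "reward model": one weight per hand-made answer feature.
weights = {"polite": 0.0, "harmful": 0.0}

def reward(features):
    return sum(weights[k] * v for k, v in features.items())

# One human comparison: answer_a was preferred over answer_b.
answer_a = {"polite": 1.0, "harmful": 0.0}   # preferred
answer_b = {"polite": 0.0, "harmful": 1.0}   # rejected

learning_rate = 0.1
for _ in range(100):
    # Bradley-Terry style probability that the rater prefers a over b.
    p_prefer_a = 1.0 / (1.0 + math.exp(reward(answer_b) - reward(answer_a)))
    # Gradient ascent on the log-likelihood of the observed preference.
    grad_scale = 1.0 - p_prefer_a
    for k in weights:
        weights[k] += learning_rate * grad_scale * (answer_a[k] - answer_b[k])

print(weights)  # "polite" drifts up, "harmful" drifts down
```

[In the full RLHF recipe this learned reward then steers further fine-tuning of the language model; replacing the human rater with another model is exactly the "AI guiding another AI" move Abbas is wary of.]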
So, I'm not gonna pretend there's a good segue, we're just switching gears.
So even if this thing is great, you still need, for legal and moral purposes, for example, a lawyer in the loop, a lawyer in the war room, and I'll take an example from the Israeli Army's use of AI for lethal targeting. So, this is from March 2023, so before the latest war. From this article: several high-ranking IDF officers informed the press that Israel is deploying AI tools as part of its military arsenal. Skip, skip, skip. The public statements acknowledge that some of the targets attacked by the IDF are produced by AI tools. Skip, skip. One encouraging aspect is that it seems the IDF seeks to use tools that complement human decision making rather than substitute for the human factor. So here, this is a human in the loop, a soldier basically in the loop, acting as a backstop to check the decision or the recommendation of the AI tool. The question is, how likely is it that this person, this soldier, is actually doing due diligence? We saw earlier, for the insurance claims, that doctors were spending 1.2 seconds checking a claim.
So, this is from the Guardian. And the point here I'm making is that having a human in the loop, a worker in the loop, is no panacea, because that human is subject to the same social forces that led to the creation of that AI system in the first place. And that AI system has a purpose, and the purpose is automation; it is not to be slowed down by, in this case, the human. So this is from a Guardian article in April 2024, based on six sources who worked with the Lavender system that the IDF was using in Gaza. “We were constantly being pressured, bring us more targets. They really shouted at us. We were told, now we have to f-up Hamas no matter what the cost. At its peak, the system managed to generate 37,000 people as potential human targets. But the numbers changed all the time because it depends on where you set the bar and what a Hamas operative is.”
This is a system with many parameters. It has many knobs. So, depending on where you set the knobs, which is something we as engineers always do, right, we're tweaking parameters for performance, in this case for lethal targeting, the numbers change all the time. So, guess in which direction the numbers are going to be trending in such a situation. Now, who's going to vet 37,000 targets, or presumed targets? Skip to December 2024. This is from a Washington Post story. Soldiers who were poorly trained in using the technology attacked human targets without corroborating Lavender's predictions at all. Lavender is the name of the system. At certain times the only corroboration required was that the target was a male. So, the point here is that the worker in the loop, the soldier in the loop, depending on the scale at which the AI is being deployed, is basically useless.
Even if that worker in the loop is a lawyer, there is a whole literature on so-called war lawyers, right? Some legal experts, skip, skip, harness the law in the service of the war effort. Now, the point being made in this book is not simply that the lawyers are a cog in the machine, basically giving blank checks to whoever wants to target. It's a more nuanced point than that. But the point is, it's the system as a whole that exists. And so, to go back to Alicia's point earlier, the ethical considerations don't start at the point after you've deployed the system and now you're seeing some results. The ethical considerations ought to come early on. And so no one can hide behind their hand and say, oh, I didn't see this coming.
So, and I owe my thinking on this point to Zhanpei Fang, who's here in the audience: AI is a rhetorical tool. Sometimes the effective function of the use of AI can be to focus attention on the technology and away from how it's used, by whom, and to what ends. It's like, oh, are the AI parameters correctly set? No, why is it being used at all? And this isn't a question one asks only about AI; one can ask it about technology in general, right? But we're discussing AI here.
Thank you.
PATTERSON: Okay, so continuing on this problem of responsibility in AI systems. How many of you are familiar with the concept of a crumple zone? Excellent. This is so much different than a public talk.
And this is a bit strange, right? It's weird because human actors who are involved in these systems tend to have very limited control and limited understanding of how the system works, but they're often this sort of last line of defense, right? Where they're like, the buck stops here, I'm the last thing that can intervene. And humans in the loop, or workers in the loop, are often there because there are all these sorts of edge cases or modes of failure that are hard to predict or can't be ruled out. And so, we need someone there to try to stop these different modes of failure. But as a result of being the last one in the line of responsibility, the workers in the loop become this moral crumple zone, where responsibility for the system is attributed to the human actor who ultimately has limited control over the automated or autonomous system.
Elish also describes these workers as liability sponges, again this sort of absorbing-the-impact metaphor. One prominent example of this: in 2018, a self-driving car for Uber hit a pedestrian. The self-driving feature failed to detect the pedestrian, right? And the safety driver intervened too late to stop the collision. However, it was only the driver who faced criminal charges as a result of this incident; Uber did not, right? So, we can see here the driver absorbing the legal impact of the death of a pedestrian, while the company behind the technological system, Uber, did not face any sort of penalty. So, there are a couple of different problems with putting people in this role. The first is deskilling. Increased automation leads to the atrophy of skills over time. This is why we require pilots to do so much flight simulation, right?
Because no one wants a deskilled pilot, but actually most of flying, right, is automated, is on autopilot. So once your skills have atrophied, say you've got a radiologist who's been reliant on an MRI algorithm for years, we might worry that, you know, their ability to see and notice these sorts of edge cases is going to get worse over time. The other thing is what they call the handoff problem. Maybe some of you are familiar with this, but this is the idea that you're sort of doing nothing.
In the case of self-driving cars, right, you're sitting there, you're not doing anything, but all of a sudden you need to sort of debug in an emergency situation. And humans just aren't great at not having anything to do and then intervening in a crisis or high-stakes situation. So, this is one of those things where we put people in situations that they're psychologically not super great at; we just don't have great capacities for this. And the last thing is that people have a lot of trust in machines. My significant other always tells me not to just trust Google Maps automatically, you know, like always check the route first, because you might know things or have different ways to think about the route that the map doesn't.
But people tend to be like, “oh, wow, look at all this math, surely it knows more than I do.” And so, we're supposed to be having workers or people come in and bring context or expertise, but their deference to the machine that they think knows more than them makes it so that they might revisit their judgment or feel really scared to intervene when they don't have, say, all the numbers. And so, there's a social tendency to overestimate the capabilities of machines. So, what's wrong with this from a philosophical perspective, not just this labor perspective? This seems like a crappy situation to put workers in. The problem is that the existence of these moral crumple zones violates a basic moral principle about blameworthiness. And this is what in philosophy we call the control principle.
So, to give an example of how the control principle works, it's highly intuitive. Let's say you're on a bus and the bus driver brakes really hard, and the person next to you has their coffee. You know, it's the morning, you're on your way to work, and their coffee spills all over your shirt. And you're like, oh god, now I have to go to work with a coffee-stained shirt. In this scenario you might be really bummed about your shirt being stained or ruined, but you are not gonna blame the person next to you, right? 'Cause you're like, oh, well, it's not their fault that the coffee spilled; it was 'cause the bus braked so aggressively. And maybe, depending on the scenario, you might not even blame the bus driver, because they might not have been able to do otherwise, right?
They had no control over the situation. Maybe they wanted to avoid a collision. But whenever I teach this, my students are like, no, the person with the coffee mug should have had a better coffee mug. You know, like, who are these irresponsible people with coffee mugs that can spill on others? And even if you think that, in this scenario what you're really arguing about is what sorts of actions are under your control, right? You're saying that the coffee-mug person is blameworthy because they should have exerted more control than they did. Even though I think that's kind of a ridiculous scenario, we agree in the end. Okay? So the control principle, right, is the idea that you should only be morally responsible for what's in your control. And there are a bunch of reasons why we might intuitively accept this.
I mean, if you've ever been blamed for something that your sibling did growing up, you get the sense in which it is unfair to be blamed for things that are not your fault. But it's also really unproductive, right? If you are blamed for things that are outside of your control, the thing that ultimately caused the problem or was the source of failure might not be held accountable or punished or changed in the right sorts of ways, right? So, when humans in the loop get blamed for errors of a technological system, it leads to a disruption of accountability, which might make us less safe long term, but also, it's just unfair. So, going back to our harms and wrongs, right? It's a wrong and also a harm. So, in conclusion, we've given this whole talk about AI ethics …
Greenwashing? Yeah. Yeah. Okay. So greenwashing is the idea that companies often take on this superficial commitment to environmental action by, you know, I don't know, making some part of their product recyclable, or doing some sort of superficial change to improve their public image or avoid …
ANONYMOUS SPEAKER: BP.
PATTERSON: Yeah, exactly. Exactly. So, ethics washing is an analogous concept. And we see it a lot in this hype cycle within AI: this practice of feigning engagement with ethical considerations, or engaging with them in a superficial way, to improve the perception of your organization, right? Or maybe to avoid regulation: like, you don't need to regulate us, we're regulating ourselves. So in this example, right, OpenAI talks about being a leader in safety around deepfakes and private information, even though, you know, they're creating this text-to-video generator and text-to-voice generator that pose massive, massive problems, right? For privacy and fraud and things like that. So, I mean, it's a little bit weird to be like, we're the leader in safety, but also we're heavily contributing to a certain type of problem.
So the last thing I'll just say is that one of the big things about the failure of ethics in AI is that these efforts so often end up creating this superficial reassurance while actually justifying the press forward for innovation, or whatever. And part of that is because the various ethical programs that are implemented are disconnected from power. People don't actually have a ton of power over corporations, over researchers, over the decisions that get made. And so it's very hard to actually enforce anything or change the status quo when there are giant power asymmetries between people and these forms of technology.
ABBAS: So we'll conclude on the following. It's a question that Alicia and I often enough ask, which is, is there anything new here when we talk about ethics and AI? Is this any different from ethics and automation, or ethics and technology? And part of the reason that is a question is, one, the understandable conflation of AI and automation. Not all automation is AI, so some of the questions we are asking about ethics and AI are really questions about ethics and automation more generally. So, you know, who's going to lose their jobs, et cetera; that's been asked for the last 150 years. Two, the understandable, or as Alicia pointed out sometimes intentional, confusion about what constitutes AI; intentional in the sense that there is a lot of hype in this domain from people who want to sell products or who want to raise funds.
So, you know, you buy an LED and that's like a smart LED now.
ROBERTSON: Thanks for listening. I hope you learned something about how philosophers and engineers are thinking about the topic of ethics in AI. This is just a small sample of what goes on here at Oregon State University in the area of AI, where we have a long history of research reaching back decades. More recently, Oregon State became the first institution in the country to offer both master’s and doctoral degrees in AI. If you’d like to learn more, check out the bonus content on your podcast app or at engineeringoutloud.oregonstate.edu.