Image
Oregon State Professor Houssam Abbas explaining engineering principles using on a whiteboard
Johanna Carson

‘Moral’ machines: Building ethical behavior into autonomous AI systems

Key Takeaways

Modeling the consequences of AI systems’ actions helps developers integrate ethical behavior.

Important applications of AI ethics include robotics and multi-agent interactions.

Collaborative research is helping to develop safer AI systems.

As autonomous systems become more prevalent in daily life, engineers face an enormous challenge. It’s not enough for these human-scale systems to complete tasks safely; they also need to abide by the social considerations we humans rely upon to guide and regulate our interactions.

Houssam Abbas, assistant professor of electrical and computer engineering at Oregon State, studies how to integrate ethical norms into artificial intelligence systems. Abbas came to Oregon State in 2019, drawn in part because of groundbreaking work in AI and security conducted at the university. Prior to that, he was a design automation engineer with Intel for 8 years.

“My background, and my abiding interest, is in developing formal methods for verification of engineered systems,” Abbas said. “It’s not just about testing them a certain number of times and being able to say, ‘It seems to work.’ It's about establishing actual proofs of correctness. That is necessary for safety-critical systems.”

Unveiling the consequences of actions taken by AI systems

As engineers, Abbas and his colleagues do not independently determine or assert what is or is not an ethical choice within the context of AI. Rather, in conversation with developers, they enable modeling of the consequences of these systems’ actions.

“We are providing engineering tools so that both the developers and the whole of society know a little bit better what it is that they’re getting through the deployment of a particular AI,” Abbas said.

Take the example of an unpiloted aerial vehicle tasked with delivering biohazardous material to a hospital. That UAV must balance several ethical imperatives, such as delivering the material swiftly to administer lifesaving care while also minimizing risks to people on the ground. Using a formal methods approach, Abbas works to translate these ethical considerations from English into mathematical formulas to compute a control policy for the UAV.

“When you push that button, our algorithm produces the controller that is guaranteed, mathematically, to satisfy the requirements,” he said.

While the logic Abbas works with is particularly well suited for reinforcement learning in robotics, it has applications in other areas as well, including what are called multiagent problems in AI, where interactions occur between AIs and humans or between AIs and other AIs.

Safer AI systems through collaboration

Abbas is part of several collaborative projects that aim to improve the safety of AI systems. One large project is investigating failures of automation in multi-UAV scenarios. That effort is a collaboration sponsored by the Federal Aviation Administration, with 11 principal investigators from five universities looking at failures of automation in aerial systems, as part of the FAA’s larger ASSURE (Alliance for System Safety of UAS through Research Excellence) consortium. Partner universities include Drexel University, University of North Dakota, Ohio State University, and Kansas University.

Another project involves collaboration with HiddenLayer, an AI security startup working on the security of large language models. Ph.D. student Amelia Kawasaki, who is also a researcher at HiddenLayer, has developed software that runs in tandem with an LLM to catch “jailbreak” prompts intended to bypass the LLM’s safeguards.

Other industry partners have included Toyota Research Institute of North America, working on control and monitoring of UAVs, and Intel, working on formal methods to reinforce security of control systems.

AI ethics versus system safety

When working with industry, Abbas says, he often encounters questions about how ethical norms in AI differ from safety considerations. The two are inextricably linked, Abbas explains.

“Safety is always interpreted, even if implicitly, in the context of an ethical code. When we determine that hitting the brakes in a car is the right thing to do, we are saying it is morally right — because it prevents injuries for the passengers, for example, and that’s the right thing to do in this context,” he said. “So, even defining safety for a fully autonomous system requires reasoning about the ethical content of a situation, and implementing it requires the robot itself to perform some of that reasoning.”

Ultimately, Abbas says, trust is essential to the safety and the overall success of autonomous systems.

“You can have a robot that has the right safety guards programmed into it. But if you as a person don’t perceive that it does, then the interaction is still not going to be successful,” he said. “If I don’t trust that it is going to behave with me in an ethical manner, I'm not going to be safe around it, and it's not going to be safe around me.”

Contact Houssam Abbas with ideas for collaborative research at houssam.abbas@oregonstate.edu. To learn more about ethics in AI, attend the talk, “Where do Ethics Belong in AI?” presented by Abbas and Alicia Patterson, the Mary Jones and Thomas Hart Horning Assistant Professor of Applied Philosophy on Jan. 17 at 2:00 p.m.

Subscribe to AI @ Oregon State

Jan. 12, 2025

Related Researchers

Houssam Abbas
Houssam Abbas

Assistant Professor

Related Stories