As robotics and artificial intelligence (AI) continue to evolve, researchers are finding countless ways to apply these tools to solve grand problems, improve efficiency, and make life easier. But with progress advancing rapidly, how can we ensure these technologies continue to serve humanity and do not outpace our ability to understand and adapt to their far-reaching impact on society?
At the Oregon State University College of Engineering, researchers at the Collaborative Robotics and Intelligent Systems (CoRIS) Institute are working to tackle these and other questions.
Julie A. Adams, professor of computer science, is one of those researchers.
As the institute’s associate director for deployed systems and policy, Adams and her colleagues are building new systems that strengthen humanity’s ability to respond to major threats. One example is her project with Geoff Hollinger, assistant professor of mechanical engineering: the two are developing algorithms that let unmanned marine vehicles coordinate with one another to monitor changing ocean conditions, providing crucial data to scientists studying the impact of climate change. Adams also designs robots that can perform search and rescue missions in hostile environments such as wildfires, while other researchers are building robots that can treat patients during infectious disease outbreaks.
But Adams’ focus reaches beyond the development of new technology. She leads her team in exploring robotics and intelligent systems holistically – considering their impact on people and how they integrate into our lives. She also consults with policymakers on safety and regulatory issues, such as the integration of drones into U.S. airspace.
Adams says that even as engineers design increasingly beneficial robots, there is still a sense of apprehension among the general public.
“Movies and TV shows often portray artificial intelligence and robotics in a negative way – as artificial superintelligence that takes over the world and kills all humans,” Adams said. “However, unless there is some major technological advancement, it’s going to be a long time before we see true artificial general intelligence, and even longer for artificial superintelligence.”
Still, Adams admits that people’s fears about AI are understandable.
“Today, you can’t ask AI to explain how it arrived at a particular decision,” she said. “It’s often a big black box with a lot of data going in, and it’s hard to know exactly what’s going on inside.” In other words, it can be hard to trust.
That may soon change. Last year, eight computer science professors at Oregon State received a $6.5 million grant from the Defense Advanced Research Projects Agency to make AI-based systems more trustworthy. The four-year grant, led by Alan Fern, CoRIS’s associate director for research, will support the development of a method to look inside that black box and use simple sentences and visualizations to explain to humans the system’s decision-making processes.
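The basic idea can be sketched in a few lines of code. The hypothetical Python example below is invented purely for illustration – the feature names, weights, and threshold are assumptions, and this is not the DARPA-funded team’s actual method – but it shows how a simple model’s decision can be unpacked into a plain-language explanation by reporting which inputs pushed the score hardest.

```python
# Hypothetical sketch: turning a simple model's decision into a sentence.
# Illustrative only; not the explainable-AI method described in the article.

def explain_decision(weights, features, threshold=0.5):
    """Score a linear model and describe which inputs drove the decision."""
    # Contribution of each named input to the final score.
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    decision = "proceed" if score >= threshold else "hold"

    # Rank inputs by how strongly they pushed the score, then report the top two.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    top = ", ".join(f"{name} ({value:+.2f})" for name, value in ranked[:2])
    return f"Decision: {decision} (score {score:.2f}). Main factors: {top}."

# Hypothetical sensor readings and learned weights.
weights = {"obstacle_distance": 0.8, "speed": -0.4, "visibility": 0.3}
features = {"obstacle_distance": 0.9, "speed": 0.5, "visibility": 0.7}
print(explain_decision(weights, features))
```

Real systems are far less transparent than this linear score – which is precisely why explaining their decisions remains an open research problem.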
Another layer of complexity arises when bringing manned and unmanned systems together. The best example of this is the self-driving vehicle. In the coming years, before fully autonomous vehicles become ubiquitous, there will be what Adams refers to as a “hybrid” period, when humans sitting in the front seat are supposed to be “in charge.”
“As we move toward fully autonomous modes, we cannot expect that humans will be able, or even willing, to continuously watch what’s going on around them,” said Adams, whose current research also includes a project studying how swarms of autonomous vehicles coordinate their movements. “This will lead to situations where humans are not even able to respond if the car’s sensors fail.”
Another potential issue in integrating self-driving vehicles into our world is that today’s roads were built for human drivers. Road markers like white lines and stop signs have meaning to a vehicle only if it has been programmed to recognize them.
“If we built a new infrastructure system specifically for autonomous vehicles, we wouldn’t need these,” Adams said. But that infrastructure won’t be built overnight. So, people will have to live with some level of uncertainty as we move from semi-autonomous to fully autonomous self-driving cars, and integrate the many other types of robots and intelligent systems that are sure to come.
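What programming a car to recognize those markers means in practice can be glimpsed in a classical computer-vision pipeline. The sketch below is a hypothetical, much-simplified example using the open-source OpenCV library; the image file, tuning parameters, and approach (edge detection followed by a Hough transform) are illustrative assumptions, not details drawn from Adams’ work.

```python
# Hypothetical sketch of classical lane-marking detection with OpenCV.
# The image path and all tuning parameters are illustrative assumptions.
import cv2
import numpy as np

image = cv2.imread("road.jpg")  # placeholder path; substitute any road photo
if image is None:
    raise SystemExit("road.jpg not found")

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # drop color information
edges = cv2.Canny(gray, 50, 150)                # keep strong intensity edges

# Fit straight line segments to the edge pixels (probabilistic Hough transform).
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=40, maxLineGap=10)

if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(image, (x1, y1), (x2, y2), (0, 255, 0), 2)  # mark detections
    print(f"Found {len(lines)} candidate lane segments.")
```

Even this toy pipeline hints at the fragility involved: faded paint, glare, or snow can erase the very edges the detector depends on.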
As new intelligent systems are developed, Adams and her colleagues will strive to make these transitions as smooth and anxiety-free as possible. “One of the biggest concerns,” she said, “is making sure we understand the technology as we put proper safeguards in place to ensure that we as humans maintain control of that technology.”
MOMENTUM, College of Engineering, Winter 2018