
We are interested in building and deploying service mobile robots to assist with arbitrary end-user tasks in everyday environments. In such open-world settings, how can we ensure that robots 1) perceive the relevant factors needed to operate in the world in ways consistent with social and other unwritten norms; and 2) correctly complete the tasks expected of them? How can we teach robots to "get it right" without having to enumerate all relevant entities, norms, and specifications of interest for them? In this talk, I will survey these technical challenges and present several promising directions to address them. To "get it right", robots will have to reason about unexpected sources of failure in the real world and learn to overcome them; glean appropriate contextual information from perception to understand how to operate in the world; and reason about what correct task execution actually entails.
Joydeep Biswas is an associate professor in the Department of Computer Science at the University of Texas at Austin and a Visiting Professor at Nvidia. He earned his B.Tech in Engineering Physics from the Indian Institute of Technology Bombay in 2008, and his M.S. and Ph.D. in Robotics from Carnegie Mellon University in 2010 and 2014, respectively. From 2015 to 2019, he was an assistant professor in the College of Information and Computer Sciences at the University of Massachusetts Amherst. His research spans perception and planning for long-term autonomy, with the ultimate goal of having service mobile robots deployed in human environments for years at a time, without the need for expert corrections or supervision. Prof. Biswas received the NSF CAREER award in 2021, Amazon Research Awards in 2018 and 2024, and JP Morgan Faculty Research Awards in 2018 and 2024.