Gaurav Dixit
325 Graf Hall
Corvallis, OR 97331
United States
Gaurav Dixit is a Research Associate with the Collaborative Robotics and Intelligent Systems (CoRIS) Institute at Oregon State University. Previously, he was a postdoctoral researcher in the Autonomous Agents and Distributed Intelligence Lab, working with the AI-Caring Institute. He completed his Ph.D. in Robotics at Oregon State University, where he developed learning and diversity-search techniques for cooperative multiagent systems.
Gaurav’s research sits at the intersection of reinforcement learning, evolutionary algorithms, game theory, and ethics. He focuses on developing algorithms that enable asymmetric agents—agents with distinct capabilities, roles, and objectives—to coordinate effectively and form robust, adaptive teams.
Within the AI-Caring Institute, Gaurav’s work advances multi-objective solution concepts for care coordination, where embodied agents must reason over high-level, dynamic, and sometimes conflicting human-centered objectives such as maintaining an older adult’s independence, safety, and well-being. His research spans three complementary areas.
First, in collaborative robotics, he studies how multiple mobile agents can coordinate in manipulation, exploration, and control tasks, emphasizing methods that enable heterogeneous robots to work together safely and efficiently.
Second, in care coordination, he develops multi-objective frameworks that identify representative caregiving tasks and model the capabilities, priorities, and cooperative outcomes required to support the independence and well-being of older adults.
Third, he advances algorithms for adaptation and behavioral diversity, creating evolutionary and reinforcement-learning methods that allow teams of agents to discover diverse strategies, adapt to shifting objectives, and remain robust to changes in environment or team composition.
Across these efforts, Gaurav aims to develop principled foundations and practical algorithms for multiagent learning that support reliable, value-aligned cooperation in complex human-robot ecosystems.