AI Seminar: Reinforcement Learning with Exogenous States and Rewards

Event Speaker
George Trimponias
Sr. Applied Scientist, Amazon
Event Type
Artificial Intelligence
Date
Event Location
Rogers 230
Event Description

Exogenous state variables and rewards can slow down reinforcement learning by injecting uncontrolled variation into the reward signal. We formalize exogenous state variables and rewards and show that if the reward function decomposes additively into endogenous and exogenous components, the MDP can be decomposed into an exogenous Markov Reward Process (based on the exogenous reward) and an endogenous Markov Decision Process (optimizing the endogenous reward). Any optimal policy for the endogenous MDP is also an optimal policy for the original MDP, but because the endogenous reward typically has reduced variance, the endogenous MDP is easier to solve. We consider settings where the decomposition of the state space into exogenous and endogenous variables or subspaces is not given but must be discovered. We introduce and prove correctness of algorithms for discovering the exogenous and endogenous subspaces of the state space when they are mixed through linear combination. These algorithms can be applied during reinforcement learning to discover the exogenous space, remove the exogenous reward, and focus reinforcement learning on the endogenous MDP. Experiments on a variety of challenging synthetic MDPs show that these methods, applied online, discover surprisingly large exogenous subspaces and produce large speedups in reinforcement learning.
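The abstract's key idea, that exogenous state dimensions evolve independently of the agent's actions, can be illustrated with a toy experiment. The sketch below is not the speaker's algorithm; it is a minimal, hypothetical illustration that identifies action-independent state dimensions in an unmixed linear system by fitting a least-squares model of the next state and inspecting the action coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear system: state s = (s_endo, s_exo).
# The endogenous dims respond to the action; the exogenous dims evolve
# on their own. (Illustrative only; real discovery algorithms must also
# handle dims mixed by an unknown linear map.)
n, d_endo, d_exo = 5000, 2, 3
d = d_endo + d_exo
S = rng.normal(size=(n, d))
A = rng.normal(size=(n, 1))
B = np.zeros((d, 1))
B[:d_endo] = 1.0                      # action only affects endogenous dims
T = 0.9 * np.eye(d)                   # simple stable dynamics
S_next = S @ T + A @ B.T + 0.01 * rng.normal(size=S.shape)

# Least-squares fit of next state on [state, action]; entries of the
# action-coefficient row that are ~0 flag exogenous state dimensions.
X = np.hstack([S, A])
W, *_ = np.linalg.lstsq(X, S_next, rcond=None)
action_effect = np.abs(W[-1])         # |effect of action| per state dim
exo_dims = np.where(action_effect < 0.05)[0]
print(exo_dims)                       # → [2 3 4], the exogenous dims
```

Having identified the exogenous subspace, one could then subtract its contribution to the reward and train only on the lower-variance endogenous component, which is the source of the speedups the abstract describes.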

Speaker Biography

George Trimponias has been an Applied Scientist with Amazon Search in Luxembourg since 2020, focusing on natural language processing for improved information retrieval. He was a Researcher at Huawei Noah’s Ark Lab in Hong Kong from 2015 to early 2020, where he conducted machine learning research for communication networks. He received his PhD in Computer Science and Engineering from the Hong Kong University of Science and Technology. His research interests include machine learning, game theory and optimization. His recent focus is on the design of efficient algorithms that can accelerate reinforcement learning in the presence of exogenous states and rewards.