
Zoom: https://oregonstate.zoom.us/j/91611213801?pwd=Wm9JSkN1eW84RUpiS2JEd0E5T…
Modern machine learning predictors often suffer from overconfidence. I will explain why neural predictors with a softmax output layer exhibit arbitrarily high confidence away from the training data. I will describe a simple modification to the softmax layer that prevents this type of overconfidence. I will also describe how product-of-experts and mixture-of-experts approximations in Bayesian inference can lead to over- or under-confidence. Finally, I will describe a simple interpolation technique to enhance the calibration of predictions in distributed machine learning applications, including federated learning.
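To illustrate the overconfidence phenomenon mentioned in the abstract, here is a minimal sketch (not taken from the talk) assuming a hypothetical linear softmax classifier with weights `W` and bias `b`. Because the logits scale linearly with the input, pushing a point far from the training data drives the softmax output toward a one-hot vector, i.e. arbitrarily high confidence; ReLU networks are piecewise linear, so they behave the same way far from the data.

```python
import numpy as np

def softmax(z):
    z = z - z.max()            # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)

# Hypothetical linear classifier: logits = W x + b (3 classes, 2 features).
W = rng.normal(size=(3, 2))
b = rng.normal(size=3)

x = rng.normal(size=2)         # a direction pointing away from the training data

# Scaling x moves it farther from the data; the logit gap grows without bound,
# so the maximum softmax probability approaches 1.
for alpha in [1, 10, 100, 1000]:
    p = softmax(W @ (alpha * x) + b)
    print(f"scale {alpha:5d}: max softmax confidence = {p.max():.6f}")
```

Running this prints a maximum confidence that climbs toward 1.0 as the scale grows, even though the scaled inputs lie nowhere near any training data. The softmax-layer modification referred to in the abstract is aimed at preventing exactly this behaviour.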
Pascal Poupart is a Professor in the David R. Cheriton School of Computer Science at the University of Waterloo (Canada). He is also a Canada CIFAR AI Chair at the Vector Institute and a member of the Waterloo AI Institute. He serves on the advisory board of the NSF AI Institute for Advances in Optimization (2022-present) at Georgia Tech. He served as Research Director and Principal Research Scientist at the Waterloo Borealis AI Research Lab at the Royal Bank of Canada (2018-2020). He also served as a scientific advisor for ProNavigator (2017-2019), ElementAI (2017-2018) and DialPad (2017-2018). His research focuses on the development of algorithms for Machine Learning with applications to Natural Language Processing and Material Discovery. He is best known for his contributions to the development of Reinforcement Learning algorithms. Notable projects that his research team is currently working on include inverse constraint learning, mean field RL, RL foundation models, Bayesian federated learning, uncertainty quantification, probabilistic deep learning, conversational agents, transcription error correction, sport analytics, adaptive satisfiability, and material discovery for CO2 recycling.