
Thomas G. Dietterich
2067 Kelley Engineering Center
Corvallis, OR 97331
United States
Thomas G. Dietterich (AB Oberlin College 1977; M.S. University of Illinois 1979; Ph.D. Stanford University 1984) is one of the founders of the field of Machine Learning. Among his research contributions were the application of error-correcting output coding to multiclass classification, the formalization of the multiple-instance problem, the MAXQ framework for hierarchical reinforcement learning, and the development of methods for integrating non-parametric regression trees into probabilistic graphical models (including conditional random fields and latent variable models). Among his writings are Chapter XIV (Learning and Inductive Inference) of the Handbook of Artificial Intelligence, the book Readings in Machine Learning (co-edited with Jude Shavlik), and his frequently cited review articles Machine Learning Research: Four Current Directions and Ensemble Methods in Machine Learning.
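The error-correcting output coding idea mentioned above can be sketched in a few lines. This is an illustrative toy (the code matrix and threshold decoding below are made up for the example, not taken from the original papers): each class gets a binary codeword, one binary classifier is trained per bit, and a test point is decoded to the nearest codeword.

```python
import numpy as np

# Error-correcting output coding (ECOC) for multiclass classification:
# each of k classes is assigned a binary codeword; one binary classifier
# is trained per codeword bit, and a test point is assigned the class
# whose codeword is nearest (in Hamming distance) to the predicted bits.

# Toy 4-class code matrix (rows = classes, columns = binary subproblems).
# Minimum Hamming distance between rows is 3, so one flipped bit is correctable.
CODE = np.array([
    [0, 0, 1, 1, 1],
    [0, 1, 0, 1, 0],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 0, 0],
])

def decode(bits):
    """Return the class whose codeword is closest (Hamming) to the predicted bits."""
    dists = (CODE != np.asarray(bits)).sum(axis=1)
    return int(dists.argmin())

# Even when one binary classifier errs, decoding recovers the right class.
print(decode([0, 0, 1, 1, 1]))  # exact codeword of class 0 -> 0
print(decode([0, 0, 1, 1, 0]))  # last bit flipped, still -> 0
```

The error-correcting property comes entirely from keeping the rows of the code matrix far apart in Hamming distance.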
He served as Executive Editor of Machine Learning (1992-98) and helped co-found the Journal of Machine Learning Research. He is currently the editor of the MIT Press series on Adaptive Computation and Machine Learning. He also served as co-editor of the Morgan-Claypool Synthesis Series on Artificial Intelligence and Machine Learning. He has organized several conferences and workshops, including serving as Technical Program Co-Chair of the National Conference on Artificial Intelligence (AAAI-90), Technical Program Chair of Neural Information Processing Systems (NIPS-2000), and General Chair of NIPS-2001. He is a Fellow of the ACM, AAAI, and AAAS. He served as founding President of the International Machine Learning Society, and he is currently a member of the Steering Committee of the Asian Conference on Machine Learning.
Research Description
I am interested in all aspects of machine learning. There are three major strands of my research. First, I am interested in the fundamental questions of artificial intelligence and how machine learning can provide the basis for building integrated intelligent systems. This includes learning for sequential decision making, particularly hierarchical reinforcement learning, and understanding how intelligent systems can detect anomalies and manage both the “known unknowns” and the “unknown unknowns” of the worlds in which they operate.
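One simple way an agent can flag "unknown unknowns" is to score how far a new input lies from everything it saw during training. The nearest-neighbor novelty detector below is a minimal illustrative sketch (the data, threshold, and function names are invented for the example), not a method described here:

```python
import numpy as np

# Toy novelty detector: score a query point by its distance to the
# nearest training example; queries far from all training data are
# flagged as novel -- inputs the system was never trained to handle.

rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=(200, 2))  # "familiar" data

def novelty_score(x):
    """Distance from query x to its nearest training example."""
    return float(np.min(np.linalg.norm(train - x, axis=1)))

def is_anomaly(x, threshold=1.5):
    """Flag x as anomalous if no training point lies within the threshold."""
    return novelty_score(x) > threshold

print(is_anomaly(np.array([0.1, -0.2])))  # near the training cloud -> False
print(is_anomaly(np.array([8.0, 8.0])))   # far from anything seen -> True
```

Real open-category and anomaly-detection methods are far more sophisticated, but they share this core move: model the support of the training distribution and treat distant inputs differently.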
Second, I am interested in ways that people and computers can collaborate to solve challenging problems. How can we create rich interactions between people and computers so that learning can occur very quickly and easily? How can a machine learning system learn “in the wild”, without an engineer intervening to adjust parameters or change features, and when user feedback may be very noisy and indirect? How can we develop and refine the practice of software engineering for adaptive systems, so that BSCS engineers can build effective learning systems? How can an AI system recognize and understand the goals and actions of the user so that it can provide useful assistance?
Third, I am interested in applying machine learning to problems in the ecological sciences and ecosystem management as part of the emerging field of Computational Sustainability. This includes data cleaning and anomaly detection for sensor data, automated insect recognition for biodiversity surveys, computer vision for recognizing and understanding animal behavior, machine learning models of species distributions and migrations, and methods for solving large-scale ecosystem management problems. A related topic is the application of machine learning to model and control office buildings, such as the Kelley Engineering Center.
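At their simplest, species distribution models relate environmental covariates to the probability that a species is present. The sketch below is purely illustrative (the logistic form is a standard baseline; the coefficients are made up, not fit to any real survey data):

```python
import math

# Toy species distribution model: logistic regression mapping a single
# environmental covariate (temperature) to a presence probability.
# B0 and B1 are hypothetical coefficients chosen for illustration.

B0, B1 = -6.0, 0.4  # intercept and temperature coefficient (invented)

def presence_probability(temperature_c):
    """P(species present | temperature) via the logistic function."""
    z = B0 + B1 * temperature_c
    return 1.0 / (1.0 + math.exp(-z))

for t in (5, 15, 25):
    print(f"{t} C -> presence probability {presence_probability(t):.2f}")
# 5 C  -> 0.02, 15 C -> 0.50, 25 C -> 0.98
```

Research-grade models add many covariates, spatial structure, and corrections for observation bias, but the same probabilistic core underlies them.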