AI Seminar: Considerations for More Scalable Trustworthy AI

Event Speaker
Richard Mallah
Director of AI Projects, The Future of Life Institute
Event Type
Artificial Intelligence
Date
Event Location
Zoom: https://oregonstate.zoom.us/j/93591935144?pwd=YjZaSjBYS0NmNUtjQzBEdzhPeDZ5UT09
Event Description

As AI systems become less narrow in their capabilities and more general-purpose, new classes and levels of pitfalls present themselves. The techniques we bring to bear for the safety and ethical alignment of these systems will therefore need to scale in new ways, requiring innovation increasingly on par with the sophistication of the systems' primary learning functions. As the gap between what a system can do and what it should do grows, desiderata we encounter even with narrower AI systems, such as establishing safe bounds, practical interpretability, verification of key models or algorithms, minimizing negative side effects, maintaining operator control, and mitigating unwanted biases, will each need to account for levels of indirection previously unseen.

In this talk, we explore cultivating a security mindset with respect to what more general systems can do wrong, and apply it to the critical evaluation and design of safety techniques with regard to their scalability and amenability to generality. It is notable, for instance, that with increasing generality, modeling context becomes increasingly important, and AI safety and AI ethics increasingly overlap.

Speaker Biography

Richard Mallah is Director of AI Projects at The Future of Life Institute, where he does meta-research, analysis, advocacy, strategy, and field building on the technical, strategic, and policy aspects of transformative AI safety. In December 2015 Richard joined the founding team of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, and he continues to serve on its Executive Committee. He co-chairs the recurring SafeAI and AISafety technical safety workshops at AAAI and IJCAI, and in 2021 became the founding Executive Director of the Consortium on the Landscape of AI Safety, a role he was recruited for on the strength of his AI safety field landscaping and synthesis work at FLI. Richard has served on the Safety and the Labor & Economy Working Groups at the Partnership on AI, has chaired the IEEE GIEAIS committees on AGI and on lethal autonomous weapons, and is an Honorary Senior Fellow at the Foresight Institute.

Mr. Mallah has worked in machine learning and AI in industry for over twenty years, spanning many roles across R&D, including algorithms research, research management, product team management, CTO, chief scientist, and strategy; in total he has worked on over a hundred AI/ML-related technical projects from these different perspectives. Ever focused on innovation yet mindful of managing risks, Richard has regularly aligned applied research drivers with novel research directions in trustworthy AI. Safety-related R&D he has led includes: multiobjective GP agents with safety objectives, debiasing of novel latent spaces, active-learning-enhanced automated ontology refactoring and alignment, explainability-enhanced conditional quasi-metric spaces, uncertainty-aware tight blends of symbolic and subsymbolic knowledge representations, recognition of liability-bearing anomalies, confidence-enhanced Bayesian Monte Carlo analysis of operational risk, more robust multimodal LSTM systems, information-retrieval-enhanced transformer-based systems for more truthful NLG, and AI auditing methods. Richard advises AI safety startups, VC funds, incubators, academics, governments, international multistakeholder bodies, and NGOs on trustworthy AI, scalable AI safety, scalable AI ethics, wide-angle sustainability, ML model risk management, and assurance. Mr. Mallah has given dozens of invited talks globally on long-termist foresight, assurance, robustness, interpretability, and control of advanced AI and autonomous systems. He holds a degree in Intelligent Systems from Columbia University.