"What should be in an explanation and what should they look like?” This is a fundamental question to answer in order for Explainable Artificial Intelligience (XAI) to gain the trust of human assessors. To this end, we conducted a pair of studies investigating generation, content, and form of explanations in the Real-Time Strategy (RTS) domain, specifically StarCraft II. First, we observed expert explainers' (shoutcasters) foraging patterns and speech, as they provide explanations in real-time. Second, we used a lab study to examine how participants investigated agent behavior in the same domain - but without the real-time constraint. By conducting this pair of studies, we are able to study both (1) explanations supplied by experts and (2) explanations demanded by assessors. Throughout our studies, we adopted an Information Foraging Theory (IFT) perspective, which allows us to generalize our results. In this talk, we present what these results tell us about how to explain AI systems.
Jonathan Dodge is a Ph.D. student in EECS, working on Explainable AI.