Title: Compressing Mental Model Spaces and Modeling Human Strategic Intent

Abstract: A wide swath of multidisciplinary areas in autonomous multiagent systems benefits from modeling interacting agents. However, the space of candidate mental models is generally very large and grows disproportionately as the interaction progresses. Application constraints and available data may sometimes limit this space. In this talk, I will present principled and domain-independent ways of compressing model spaces. The general approach is to partition the space by forming equivalence classes of models and retaining a representative from each class. These compression methods include exact ones, which are lossless in that they incur no loss of information for the modeling agent but are computationally intensive, and approximate ones, which are computationally more efficient but lossy. The latter may violate an important epistemic condition on the model space that is seldom considered in the plan recognition literature. While these methods are broadly useful, I will focus on the impact of the compression on the scalability and quality of planning in large, partially observable multiagent settings within the framework of interactive POMDPs.
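
To make the partitioning idea concrete, here is a minimal, hypothetical Python sketch (not the speaker's implementation): candidate models are grouped into equivalence classes by a key summarizing the behavior each model predicts, and one representative is kept per class. The names compress_model_space, policy, and action_distribution are illustrative assumptions.

    from collections import defaultdict

    def compress_model_space(models, behavior_key):
        # Partition candidate models into equivalence classes keyed by
        # the behavior they predict, then keep one representative per
        # class. behavior_key(model) must return a hashable summary of
        # predicted behavior: the full predicted policy for exact
        # (lossless) compression, or a coarsened version of the
        # predicted action distribution for approximate (lossy)
        # compression.
        classes = defaultdict(list)
        for m in models:
            classes[behavior_key(m)].append(m)
        return [members[0] for members in classes.values()]

    # Exact (lossless): models predicting identical policies collapse.
    #   reps = compress_model_space(models, lambda m: tuple(m.policy()))
    # Approximate (lossy): models whose predicted action distributions
    # agree to one decimal place collapse into the same class.
    #   reps = compress_model_space(
    #       models,
    #       lambda m: tuple(round(p, 1) for p in m.action_distribution()))

The coarser the behavior key, the fewer representatives survive, which trades information for tractability; that trade-off is exactly the lossless-versus-lossy distinction drawn in the abstract.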

Human strategic thinking is hobbled by low levels of recursive reasoning (reasoning of the type “I think that you think that I think...”) in general contexts. Recent studies demonstrate that in simple, competitive contexts, strategic reasoning can go deeper than previously thought, up to three levels. I will present a computational model of the behavioral data obtained from these studies, using an interactive POMDP that is appropriately simplified and augmented with well-known models of human learning and decision making. The studies and the psychologically plausible process models provide insight into the specific ways in which humans attribute strategic intent. Such models offer a viable route to computationally modeling strategic behavior in mixed human-agent settings.
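
The process models in the talk are built on the interactive POMDP; purely as a hypothetical illustration of bounded recursive depth, the sketch below implements textbook level-k reasoning in a two-player matrix game, where a level-k player best-responds to a simulated level-(k-1) opponent and a level-0 player randomizes. The function and the example game are illustrative assumptions, not the speaker's model.

    import numpy as np

    def level_k_action(payoff_own, payoff_other, k):
        # payoff_own[i, j]: this player's payoff when playing i against
        # an opponent playing j; payoff_other is oriented the same way
        # from the opponent's point of view. A level-0 player randomizes
        # uniformly; a level-k player best-responds to a simulated
        # level-(k-1) opponent.
        if k == 0:
            return np.random.randint(payoff_own.shape[0])
        # Take the opponent's perspective by swapping the payoff
        # matrices and recursing one level down ("I think that you
        # think that I think...").
        other_action = level_k_action(payoff_other, payoff_own, k - 1)
        return int(np.argmax(payoff_own[:, other_action]))

    # Matching pennies: the row player wins on a match. A level-3 row
    # player reasons three levels deep about a level-2 column player.
    payoff_row = np.array([[1, -1], [-1, 1]])
    payoff_col = -payoff_row.T   # zero-sum game, column player's view
    action = level_k_action(payoff_row, payoff_col, k=3)
    print("level-3 row player's action:", action)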

Bio: Prashant Doshi is an associate professor of computer science at the University of Georgia (UGA), where he directs the THINC lab. He is also a faculty member of the Institute for AI at UGA. He received his doctorate from the University of Illinois at Chicago in 2005. His research interests lie in decision making, specifically decision making under uncertainty in multiagent settings. He is also interested in studying and computationally modeling strategic decision making by humans. Prashant has co-conducted successful tutorials on decision making in multiagent settings at the AAMAS conferences for the last six years, co-organized the past three MSDM workshops, and co-organized two workshops on interactive decision and game theory. He received UGA's Creative Research Medal in 2011 and the 2009 NSF CAREER Award. More details about his research are available at http://thinc.cs.uga.edu.