Learning is a process of persuasion. If we could all "persuade" ourselves to pursue learning, then perhaps we would have stopped at the oldest intelligent tutoring system: the book. The truth of the matter is that not all of us are intrinsically motivated to read a book, yet we need knowledge and skills to function in the modern world. Persuasion goes beyond the initiation of learning; it persists throughout the process of engaging our minds, receiving new information, and retaining it. Reading a book is not always the best way to persuade our minds to learn. The challenge, then, is how to design technologies that turn static book pages into engaging learning experiences that best "persuade" our minds to receive and retain knowledge effectively and efficiently.
As a learning science researcher, my mission is to design technologies that inspire students and transform their learning experiences. This technology can take on the role of a learning companion: a virtual schoolmate; a youngster who is your pupil; a virtual double of you that you can train; or a battalion of soldiers that simulates the effects of your plans and commands. As a researcher in artificial intelligence, my goal is to use lessons learned in designing intelligent tutoring technology to design explainable AI algorithms that make the decision-making behind automatons, such as virtual agents and robots, more transparent to everyday users. My ultimate goal is to use persuasive technologies to foster better human-AI understanding and promote optimal human-AI team performance.
I am interested in innovative training technologies that motivate and inspire people to learn and that transform learning experiences. I am also interested in applying research in Human-Computer Interaction to develop new ways for learners to interface with knowledge in a manner natural to their cognitive processes. Soft skills, and knowledge in ill-defined domains more generally, are challenging to learn through traditional classroom activities. It is a challenge that my research aims to tackle.
One of the biggest challenges for training is how to facilitate learning transfer from inside (real or virtual) classrooms to outside the classroom. Motivation to transfer learning to the job, and the behavioral changes that ultimately lead to organizational change, are two of the key measures researchers aim to improve through intelligent tutoring. This project seeks to improve learning transfer and promote post-training behavioral change by introducing a novel emerging technology, Rapid Avatar Capture and Simulation (RACAS), to enhance two learning paradigms. The first learning paradigm is based on a powerful behavioral alteration intervention in social science called induced hypocrisy. The induced hypocrisy approach creates a state of cognitive dissonance within an individual; as a result, the individual is motivated to alter his or her behavior in order to resolve the dissonance. The second learning paradigm is based on research on a particular type of pedagogical agent called the teachable agent. In this paradigm, the learner teaches an intelligent agent and, in the process, helps himself or herself learn. The RACAS technology, developed at USC/ICT, scans a human subject and creates a fully animatable virtual 3D model of that person at low cost and high speed. Using this digital doppelgänger, we can potentially strengthen the effects of induced hypocrisy and teachable agents. The work builds on decades of research in social science (e.g., social cognition theory and cognitive dissonance theory) and learning science (e.g., learning by teaching) and extends this research with cutting-edge technologies (e.g., RACAS). It is potentially applicable to many training applications, particularly training simulations with a virtual character that looks, behaves, and thinks (e.g., through a teachable agent) like the learner. The work lays the groundwork for how best to utilize such technology to promote learning transfer and behavioral change.
Technological advances offer the promise of robotic systems that work with people to form human-machine teams more capable than their individual members. Unfortunately, the increasing capability of such autonomous systems has often failed to increase the capability of the human-machine team as a whole. One critical aspect of successful human-machine interaction is trust. This project undertakes algorithm development and empirical evaluations to study domain-independent mechanisms for robots to actively maintain proper trust relationships with human teammates. The project aims to deepen understanding of how robots can explicitly reason about establishing and maintaining the trust of their human teammates, a critical step toward maximizing the capabilities of human-robot teams.
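One intuition behind this line of work is that a robot can support trust calibration by exposing the beliefs behind its decisions, so the human teammate can judge when to rely on it. The sketch below is purely illustrative (it is not the project's actual code, and the scenario, function names, and the 0.5 decision threshold are assumptions): a robot recommends an action based on its belief that a threat is present, then explains the confidence behind that recommendation.

```python
# Illustrative sketch of trust calibration through explanation.
# The scenario and threshold are hypothetical, not the project's implementation.
from dataclasses import dataclass


@dataclass
class Recommendation:
    action: str
    confidence: float  # robot's belief that this action is the safer choice


def recommend(belief_threat: float) -> Recommendation:
    """Choose an action from the robot's belief that a threat is present."""
    if belief_threat > 0.5:
        return Recommendation("wait for an armed escort", belief_threat)
    return Recommendation("proceed into the building", 1.0 - belief_threat)


def explain(rec: Recommendation) -> str:
    """Expose the belief behind the decision, so the human teammate can
    decide how much to rely on the robot (trust calibration)."""
    return (f"I recommend that you {rec.action}. "
            f"I am {rec.confidence:.0%} confident this is the safer choice.")


print(explain(recommend(belief_threat=0.8)))
```

The key design point is that the explanation reports the robot's uncertainty rather than only its decision, which gives the human a basis for appropriately trusting or overriding it.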
Funded by NSF, the RALL-E project is a collaboration with Alelo Inc. and RoboKind. The project takes on one of the key challenges in learning a foreign language: anxiety and lack of self-confidence. It addresses this challenge by adapting RoboKind's Zeno R25 robot into a language learning tool. A robot with social intelligence, such as the one this project proposes to develop, can help break down these barriers to learning. Many kids will be motivated to learn foreign languages, such as Chinese, just so they can interact with the robot. The robots will be piloted in school districts in Virginia to help grow and maintain enrollments in language courses.
Built on the PsychSim framework, the Social Interaction Modeling (SIM) project aims to model and simulate human social interactions at both small and large scales. SIM models social entities, whether individuals or groups, as goal-seeking decision-makers that can hold beliefs about other entities. SIM has been used in a range of intelligent training applications. In such applications, the key challenge is to develop a general language and mechanism for describing pedagogy and for ensuring that pedagogy is correctly encoded. Such a mechanism can give scenario authors immediate feedback about their content and bypass the time-consuming play-testing with human players otherwise needed to identify the range of possible outcomes of training scenarios. [pdf1, pdf2, pdf3, pdf4]
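The core idea of a goal-seeking decision-maker with beliefs can be sketched in a few lines. The following is a minimal illustration, not PsychSim's actual API: an agent holds a probability distribution over states of the world (e.g., whether another party is cooperative) and picks the action that maximizes expected utility under that belief. The negotiation scenario and payoff numbers are invented for the example.

```python
# Minimal sketch of a decision-theoretic agent (not PsychSim's actual API).
def expected_utility(action, belief, utility):
    """belief: {state: probability}; utility: (state, action) -> float."""
    return sum(p * utility(state, action) for state, p in belief.items())


def choose(actions, belief, utility):
    """Pick the action with the highest expected utility under the belief."""
    return max(actions, key=lambda a: expected_utility(a, belief, utility))


# Toy negotiation: the agent believes the other party is cooperative with p=0.7.
belief = {"cooperative": 0.7, "hostile": 0.3}
payoff = {("cooperative", "offer"): 5, ("hostile", "offer"): -2,
          ("cooperative", "withhold"): 1, ("hostile", "withhold"): 0}
utility = lambda state, action: payoff[(state, action)]

print(choose(["offer", "withhold"], belief, utility))
# "offer": EU = 0.7*5 + 0.3*(-2) = 2.9, versus 0.7 for "withhold"
```

In a framework like PsychSim, beliefs can also be nested (an agent's beliefs about another agent's beliefs), which is what makes modeling social interaction, rather than just individual decision-making, possible.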
Communication is more effective and persuasive when participants establish rapport. When people interact, their speech prosody, gesture, gaze, posture, and facial expression all contribute to establishing a sense of rapport. The Rapport project uses machine vision and prosody analysis to create virtual humans that can detect and respond in real time to human gestures, facial expressions, and emotional cues, creating a sense of rapport. This research informs the design of agents and avatars in computer-mediated communication. [pdf1, pdf2, pdf3, pdf4]
Emotions influence how we perceive the world, how we make decisions, and how we interact with each other. Research on emotions has expanded across a wide range of disciplines, deepening our understanding of the role emotion plays in human behavior. There has been extensive work on computational models of human emotion, but little work on validating these models. This research uses rigorous empirical studies to assess the human behavioral fidelity of the EMA computational model of emotion. We develop models that allow synthetic characters to derive an emotional response to events in the world and respond with behaviors consistent with that emotional state. [pdf1, pdf2]
The objective of this project is to develop tools that support individualized language learning and to apply them to the acquisition of tactical languages: the subsets of linguistic, gestural, and cultural knowledge and skills necessary to accomplish specific missions. To maximize learner motivation and provide effective practice opportunities, learners first practice vocabulary items and gestures, then apply them in simulated missions where they interact with avatars and virtual characters. The training system enables learners to communicate directly with on-screen characters through a speech recognition interface. The objective is to make the toolset easily applicable to new languages, missions, and training contexts. This project began as a research project funded by DARPA. It has since become a commercial product of Alelo Inc. used by military personnel from the U.S., NATO, and many other countries. [pdf1, pdf2, pdf3]
An interdisciplinary project funded by the National Science Foundation, the Social Intelligence Project aims to develop animated pedagogical agents with well-developed social skills that can be employed to promote learning. These agents can express emotions and attitudes, exhibit empathy, and understand when and how to interact in socially appropriate ways. [pdf1, pdf2, pdf3, pdf4, pdf5]
The goal of the EVG project is to create an experiment platform that allows researchers to systematically explore the factors that elicit emotions and to study realistic, spontaneous emotional responses. The Emotion Evoking Game is a dungeon-crawler computer game that uses a carefully designed series of game events to induce emotions in human players. The events are designed around dimensions from appraisal theories of emotion. Players' facial expressions are captured using high-speed cameras, revealing the dynamics of the facial action units underlying expressions such as anger and disgust. [pdf1, pdf2]
The Virtual Cultural Awareness Trainers (VCAT) are training software developed at Alelo Inc., aimed at helping trainees develop true intercultural competence, which is necessary for conducting successful military operations. Trainees are placed in 3D virtual environments where they can practice critical decision-making in contextualized settings. The collaboration with Alelo focuses on the evaluation of VCAT, a mandatory training course for deploying US forces.
Interactive Foreign Language Teaching. Johnson, W.L., Samtani, P., Valente, A., Vilhjalmsson, H., Wang, N. US 20070015121 A1
Assessing Progress in Mastering Social Skills in Multiple Categories. Johnson, W.L., Vilhjalmsson, H., Valente, A., Samtani, P., Wang, N. US 20070082324 A1
Pynadath, D. V., Barnes, M. J., Wang, N., Chen, J. Y. C. (2018) Transparency Communication for Machine Learning in Human-Automation Interaction. In Zhou, J. and Chen, F. (Eds.) Human and Machine Learning: Visible, Explainable, Trustworthy and Transparent. New York, NY: Springer.
Wang, N., Pynadath, D. V., & Marsella, S. C. (2015). Subjective Perceptions in Wartime Negotiation. IEEE Transactions on Affective Computing, 6(2), 118-126.
Gratch, J., Kang, S., Wang, N. (2014) Using Social Agents to Explore Theories of Rapport and Emotional Resonance. In J. Gratch & S. Marsella (Eds.), Social Emotions in Nature and Artifact. Oxford, New York: Oxford University Press.
Johnson, W. L., Wang, N. (2008). The Role of Politeness in Interactive Educational Software. In C. Hayes and C. Miller (Eds.) Human-Computer Etiquette. New York, NY: Taylor and Francis.
Wang, N., Rizzo, S. (2008). Avatars and Agents, International Encyclopedia of Communication
Wang, N., Johnson, W. L., Mayer, R. E., Rizzo, P., Shaw, E., & Collins, H. (2008). The politeness effect: Pedagogical agents and learning outcomes. International Journal of Human-Computer Studies, 66(2), 98-112. (Most Cited Paper Award).
Pynadath, D.V., Wang, N., & Barnes, M.J. (2018). Transparency Communication for Reinforcement Learning in Human Robot Interactions. In Proceedings of the Workshop on Explainable Artificial Intelligence (XAI) of the 27th International Joint Conference on Artificial Intelligence.
Pynadath, D. V., Wang, N., Rovira, E., Barnes, M. J. (2018). Clustering Behavior to Recognize Subjective Beliefs in Human-Agent Teams. In Proceedings of the 17th International Conference on Autonomous Agents & Multiagent Systems.
Wang, N., Shapiro, A., Feng, A., Zhuang, C., Merchant, C., Schwartz, D., & Goldberg, S. L. (2018). An Analysis of Student Belief and Behavior in Learning by Explaining to a Digital Doppelganger. In Proceedings of the Workshop on Personalization Approaches in Learning Environments (PALE) of the 19th International Conference on Artificial Intelligence in Education.
Pynadath, D.V., Wang, N., & Yang, R. (2018). Simulating Collaborative Learning Through Decision-Theoretic Agents. In Proceedings of the Workshop on Assessment and Intervention during Team Tutoring of the 19th International Conference on Artificial Intelligence in Education.
Wang, N., Shapiro, A., Feng, A., Zhuang, C., Merchant, C., Schwartz, D., & Goldberg, S. L. (2018). Learning by Explaining to a Digital Doppelganger. In Proceedings of the 14th International Conference on Intelligent Tutoring Systems.
Wang, N., Pynadath, D. V., Rovira, E., Barnes, M. J., Hill, S. G. (2018). Is It My Looks? Or Something I Said? The Impact of Explanations, Embodiment, and Expectations on Trust and Performance in Human-Robot Teams. In Proceedings of the 13th International Conference on Persuasive Technology.
Wang, N., Pynadath, D. V., Barnes, M. J., Hill, S. G. (2018). Comparing Two Automatically Generated Explanations on the Perception of a Robot Teammate. In Proceedings of the Workshop on Explainable Robotic Systems of the 13th Annual ACM/IEEE International Conference on Human Robot Interaction.
Pynadath, D. V., Wang, N., Rovira, E., Barnes, M. J. (2018). A Nearest-Neighbor Approach to Recognizing Subjective Beliefs in Human-Robot Interaction. In Proceedings of the AAAI Workshop on Plan, Activity, and Intent Recognition (PAIR).
Rovira, E., Wang, N., Pynadath, D. V. (2017). Human robotic interaction: investigating perceptions of trust. In Proceedings of the International Conference on Applied Human Factors and Ergonomics.
Wang, N., Pynadath, D. V., Hill, S. G., Merchant, C. (2017). The dynamics of human-agent trust with POMDP-generated explanations. In International Conference on Intelligent Virtual Agents (IVA), pp. 459-462.
Wang, N., & Johnson, W. L. (2016). Pilot Study with RALL-E: Robot-Assisted Language Learning in Education. In Proceedings of the 13th International Conference on Intelligent Tutoring Systems. (pp. 514).
Wang, N., Pynadath, D. V., & Hill, S. G. (2016, May). The Impact of POMDP-Generated Explanations on Trust and Performance in Human-Robot Teams. In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems (pp. 997-1005). International Foundation for Autonomous Agents and Multiagent Systems.
Wang, N., Pynadath, D. V., Hill, S. G. (2016). Trust Calibration within a Human-Robot Team: Comparing Automatically Generated Explanations. In Proceedings of the 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI).
Wang, N., Pynadath, D. V., Hill, S. G. (2015). Building Trust in a Human-Robot Team with Automatically Generated Explanations. In Proceedings of the Interservice/Industry Training, Simulation and Education Conference (I/ITSEC).
Pynadath, D. V., Wang, N., Merchant, C. (2015). Toward Acquiring a Human Behavior Model of Competition vs. Cooperation. In Proceedings of the Interservice/Industry Training, Simulation and Education Conference (I/ITSEC).
Wang, N., Pynadath, D. V., Unnikrishnan, K. V., Shankar, S., & Merchant, C. (2015). Intelligent Agents for Virtual Simulation of Human-Robot Interaction. In Proceedings of the 7th International Conference on Virtual, Augmented and Mixed Reality, Held as Part of HCI International (pp. 228-239). Springer International Publishing. (Talk)
Wang, N., Pynadath, D. V., Marsella, S. C. Subjective Perceptions in Wartime Negotiation. In Proceedings of the International Conference on Affective Computing and Intelligent Interaction, 2013 (Nominated for Best Paper Award).
Pynadath, D. V., Wang, N., Marsella, S. C. Are you thinking what I'm thinking? An Evaluation of a Simplified Theory of Mind. In Proceedings of the International Conference on Intelligent Virtual Agents, 2013.
Pynadath, D. V., Wang, N., Marsella, S. C. Computational Models of Human Behavior in Wartime Negotiations. In Proceedings of the 35th annual meeting of the Cognitive Science Society, 2013.
Wang, N., Pynadath, D., Marsella, S. Toward Automatic Verification of Multiagent Systems for Training Simulations. In Proceedings of the 11th International Conference on Intelligent Tutoring Systems, 2012.
Georgila, K., Wang, N., Gratch, J. Cross-domain speech disfluency detection. In Proceedings of the 11th Annual SIGDIAL Meeting on Discourse and Dialogue, 2010.
Wang, N., Gratch, J. Don’t Just Stare at Me. In Proceedings of ACM Conference on Human Factors in Computing Systems (CHI), 2010.
Wang, N., Gratch, J., W. Lewis Johnson. Facial Expressions and Politeness Effect in Foreign Language Training System. In Proceedings of 10th International Conference on Intelligent Tutoring Systems, 2010.
Wang, N., Gratch, J. Rapport and Facial Expression. In Proc. of The International Conference on Affective Computing and Intelligent Interaction, 2009.
Gratch, J., Marsella, S., Wang, N., Stankovic, B. Assessing the validity of appraisal-based models of emotion. In Proceedings of the International Conference on Affective Computing and Intelligent Interaction, 2009. (Best Paper Award).
Marsella, S., Gratch, J., Wang, N., Stankovic, B. Assessing the validity of a computational model of emotional coping. International Conference on Affective Computing and Intelligent Interaction. Amsterdam, IEEE. 2009.
Wang, N., Gratch, J. Can a Virtual Human Build Rapport and Promote Learning? In Proceedings of the 14th International Conference on Artificial Intelligence in Education, 2009.
Wang, N., Johnson, W. L. The Politeness Effect in an Intelligent Foreign Language Tutoring System. In Proceedings of International Conference on Intelligent Tutoring Systems, 2008.
Wang, N., Marsella, S., Hawkins, T. Individual Differences in Expressive Response: A Challenge for ECA Design. In Proceedings of the 7th International Conference on Autonomous Agents and Multiagent Systems, 2008.
Kang, S.H., Gratch, J., Wang, N., Watt, J. Does the Contingency of Agents' Nonverbal Feedback Affect Users' Social Anxiety? In Proceedings of the International Conference on Autonomous Agents and Multiagent Systems, 2008.
Kang, S.H., Gratch, J., Wang, N., Watt, J.H. Agreeable People Like Agreeable Virtual Humans. 8th International Conference on Intelligent Virtual Agents, Tokyo, Japan 2008
Kang, S.H., Watt, J.H., Gratch, J., Wang, N. Associations between interactants’ personality traits and their feelings of rapport in interactions with virtual humans. The 59th Annual Conference of the International Communication Association. Chicago, 2009.
Gratch, J., Wang, N., Gerten, J., Fast, E., Duffy, R. Creating Rapport with Virtual Agents. 7th International Conference on Intelligent Virtual Agents, Paris, France 2007 (Nominated for Best Paper Award).
Wang, N. The Rapport Agent. Gathering of Animated Lifelike Agents at 7th International Conference on Intelligent Virtual Agents, Paris, France 2007 (Finalist for GALA Award).
Johnson, W.L., Wang, N. Experience with serious games for learning foreign languages and cultures. SimTecT 2007, Brisbane, Queensland, Australia.
Gratch, J., Wang, N., Okhmatovskaia, A., Lamothe, F., Marsella, S., Morales, M. Can virtual humans be more engaging than real ones? 12th International Conference on Human-Computer Interaction, 2007.
Wang, N., Marsella, S. Introducing EVG: An Emotion Evoking Game. The 6th International Conference on Intelligent Virtual Agents, 2006.
Wang, N., Johnson, W.L., Mayer, R.E., Rizzo, P., Shaw, E., Collins, H. The politeness effect: Pedagogical agents and learning gains. The 12th International Conference on Artificial Intelligence in Education, 2005 (Best Student Paper Award).
Wang, N., Johnson, W.L., Rizzo, P., Shaw, E., Mayer, R.E. Experimental evaluation of polite interaction tactics for pedagogical agents. International Conference on Intelligent User Interfaces, 2005.
Qu, L., Wang, N., Johnson, W.L., Using Learner Focus of Attention to Detect Learner Motivation Factors. The 10th International Conference on User Modeling, 2005.
Rizzo, P., Lee, H., Shaw, E., Johnson, W.L., Wang, N., Mayer, R.E., A Semi-Automated Wizard of Oz Interface for Modeling Tutorial Strategies. 10th International Conference on User Modeling, 2005.
Johnson, W.L., Rizzo, P., Lee, H., Wang, N., Shaw, E. Modeling Motivational and Social Aspects of Tutorial Dialog, Workshop on Modelling Human Teaching Tactics and Strategies, 2004.
Qu, L., Wang, N., Johnson, W.L. Choosing when to interact with learners. International Conference on Intelligent User Interfaces, 2004.
Overview: This course provides an overview of the field of Artificial Intelligence: foundations of symbolic intelligent systems, search, logic, knowledge representation, planning, and learning.
Lectures: 5:00pm - 6:20pm on Tuesdays and Thursdays in SGM 123
Office Hour: 3:30pm - 4:30pm on Tuesdays before lecture
Let's get in touch!