Current Projects

Foundations of Transfer in Reinforcement Learning for Dialogue Domains
Funded by the ARO. I am the Principal Investigator.
Dialogue systems in a particular domain cannot currently leverage knowledge and experience from other domains. This project aims to apply transfer in reinforcement learning, an approach that has been successful in other research problems, to dialogue. This machine learning approach can allow us to leverage knowledge and experience from existing dialogue domains, so that when we move to a new, related dialogue domain we do not have to start from scratch.
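As a rough illustration of the underlying idea (and not the project's actual method), the Python sketch below warm-starts tabular Q-learning in a new toy dialogue domain with the Q-values learned in an existing one; the domains, action names, and reward values are assumptions made up for the example.

    import random
    from collections import defaultdict

    ACTIONS = ["elicit_info", "confirm", "close"]   # toy dialogue acts (illustrative)

    def make_domain(task_length):
        """A toy dialogue task: the state counts how many pieces of information
        have been elicited; the dialogue succeeds once all of them are collected."""
        def step(state, action):
            if action == "elicit_info" and state < task_length:
                return state + 1, -1.0, False       # small per-turn cost
            if action == "close" and state == task_length:
                return state, 20.0, True            # successful completion
            return state, -2.0, False               # unhelpful action
        return step

    def q_learn(step, q=None, episodes=2000, alpha=0.1, gamma=0.95, eps=0.1):
        """Tabular Q-learning; q can be warm-started with another domain's table."""
        q = q if q is not None else defaultdict(lambda: {a: 0.0 for a in ACTIONS})
        for _ in range(episodes):
            state, done = 0, False
            while not done:
                if random.random() < eps:
                    action = random.choice(ACTIONS)
                else:
                    action = max(q[state], key=q[state].get)
                nxt, reward, done = step(state, action)
                q[state][action] += alpha * (reward + gamma * max(q[nxt].values())
                                             - q[state][action])
                state = nxt
        return q

    # Learn in an existing (smaller) domain, then reuse its Q-table (updated in
    # place here) to warm-start learning in a new, related (larger) domain.
    q_source = q_learn(make_domain(task_length=2))
    q_target = q_learn(make_domain(task_length=4), q=q_source)

Transfer here is nothing more than reusing value estimates across the dialogue states the two domains share; the project itself is concerned with the foundations of such transfer for dialogue.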

Evaluating Speech Synthesis in a Dialogue Context
Funded by the ARO. I am the Principal Investigator.
The current practice in speech synthesis evaluation is to synthesize isolated utterances (out-of-context) using a particular voice, and then ask humans to rate these utterances in terms of a few aspects, usually naturalness and intelligibility. This project aims to develop a novel evaluation framework for speech synthesis in dialogue systems that will take into account the dialogue context.

Reinforcement Learning of Negotiation Dialogue Policies in Socio-Cultural Settings
Funded by the ARO. I am the Principal Investigator.
This project aims to investigate how we can use reinforcement learning to build agents that can negotiate with other agents or humans from various socio-cultural backgrounds, and whose own behavior takes the socio-cultural aspects of negotiation into account.

Reinforcement Learning of Multi-Party Negotiation Dialogue Policies
Funded by the NSF. I am the Principal Investigator.
Natural-language-based dialogue systems have a dialogue policy that determines what the system should do based on the dialogue context. Previous work on statistical natural-language-based dialogue systems has mainly addressed two-party dialogue between one computer agent (system) and one human user. This project aims to explore the use of reinforcement learning for multi-party negotiation dialogue.

New Dimensions in Testimony
In collaboration with the ICT Natural Language Dialogue Group, of which I am a member, the ICT Graphics Lab and the USC Shoah Foundation.
This project aims to develop technologies that allow conversation between a live person and recordings of someone who is not temporally co-present (time-offset interaction). Our current focus is on building systems that enable conversations with Holocaust survivors.

Social Simulation
Funded by the ARO. In collaboration with David Pynadath and Ali Jalal-Kamali.
Multi-agent social simulation requires a computational model of how people incorporate their observations of real-world events into their beliefs about the state of their world. This project aims to develop computational models that can accurately simulate the behavior of human individuals and groups in real-world scenarios.

Natural Language Dialogue for Virtual Humans
Funded by the ARO. In collaboration with the ICT Natural Language Dialogue Group, of which I am a member.
This project aims to provide dialogue capabilities for virtual humans. We investigate all aspects of dialogue: speech recognition, natural language understanding, dialogue management, natural language generation, speech synthesis, authoring tools, different dialogue genres, emotions, grounding, incremental processing, etc.
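As a simplified illustration of how these components fit together, one user turn typically flows through a pipeline like the sketch below; the function names and toy behavior are assumptions for illustration, not the group's actual virtual human architecture.

    from dataclasses import dataclass, field

    @dataclass
    class DialogueState:
        """Minimal dialogue context accumulated across turns."""
        history: list = field(default_factory=list)

    def asr(audio: str) -> str:
        """Speech recognition (stubbed: the 'audio' is already text here)."""
        return audio

    def nlu(text: str) -> dict:
        """Natural language understanding: map text to a crude dialogue act."""
        return {"act": "greet" if "hello" in text.lower() else "inform", "text": text}

    def dialogue_manager(user_act: dict, state: DialogueState) -> dict:
        """Dialogue management: choose the system's next act from the context."""
        return {"act": "greet"} if user_act["act"] == "greet" else {"act": "acknowledge"}

    def nlg(system_act: dict) -> str:
        """Natural language generation: realize the system act as text."""
        return {"greet": "Hello! How can I help?", "acknowledge": "I see."}[system_act["act"]]

    def tts(text: str) -> str:
        """Speech synthesis (stubbed: returns text instead of audio)."""
        return text

    def process_turn(audio: str, state: DialogueState) -> str:
        """One user turn: ASR -> NLU -> dialogue management -> NLG -> TTS."""
        user_act = nlu(asr(audio))
        state.history.append(user_act)
        system_act = dialogue_manager(user_act, state)
        state.history.append(system_act)
        return tts(nlg(system_act))

    print(process_turn("Hello there", DialogueState()))   # -> Hello! How can I help?

In practice each component is far richer, and aspects such as grounding, emotions, and incremental processing cut across the whole pipeline.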

Engage: Promoting Engagement in Virtual Learning Environments
Funded by the ARO. In collaboration with Mark Core, Benjamin Nye, and Daniel Auerbach.
This project seeks to investigate motivation and engagement in game-based, virtual learning experiences. Specifically, the project focuses on how interactions with virtual humans can be made more effective and compelling for learners.


Recent Projects

Reinforcement Learning for Realistic Statistical Spoken Dialogue Systems Beyond Slot-Filling Applications
Funded by the NSF. I am the Principal Investigator and David Traum is the co-Principal Investigator.
Statistical spoken dialogue systems use reinforcement learning to learn a dialogue policy that decides what to do based on the dialogue context. Previous work on this problem has mainly addressed slot-filling dialogue, in which the user presents a complex request (e.g., an appointment booking), and the system tries to fill a set of slots (e.g., date and time) to satisfy the user's request. This project aims to investigate dialogue policy learning for other genres of dialogue, including question-answering and negotiation.
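To make the slot-filling setting concrete, here is a minimal hand-coded policy for the appointment-booking example; in a statistical system this state-to-action mapping is what reinforcement learning learns rather than what a developer writes by hand, and the slot and action names below are illustrative assumptions.

    SLOTS = ["date", "time"]                    # slots for the appointment-booking example

    def policy(state: dict) -> str:
        """Map the dialogue state (which slots are filled and confirmed) to a
        system action; a statistical spoken dialogue system would learn this
        mapping with reinforcement learning from (often simulated) interactions."""
        for slot in SLOTS:
            if state.get(slot) is None:
                return f"request({slot})"
            if not state.get(slot + "_confirmed", False):
                return f"confirm({slot}={state[slot]})"
        return "book_appointment()"

    # A short trajectory through the dialogue state space:
    state = {"date": None, "time": None}
    print(policy(state))                        # -> request(date)
    state["date"] = "Friday"
    print(policy(state))                        # -> confirm(date=Friday)
    state["date_confirmed"] = True
    print(policy(state))                        # -> request(time)

Question-answering and negotiation dialogues do not reduce to filling a fixed set of slots, so their dialogue states and system actions, and hence the policies to be learned, look quite different.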

Robust Speaker-Adaptive Statistical Parametric Speech Synthesis
Funded by the ARO. I am the Principal Investigator.
Virtual humans are artificial conversational agents designed to mimic the behavior of real humans. To simulate real humans successfully, virtual humans need to sound like them: they need to utter sentences intelligibly, express emotions, and give the impression that they are engaged in the conversation. This project aims to investigate how we can build realistic synthetic voices for virtual humans from small amounts of data.

Detection and Computational Analysis of Psychological Signals
Funded by DARPA. In collaboration with the ICT Natural Language Dialogue Group, of which I am a member, the ICT MultiComp Lab, the ICT Integrated Virtual Humans Group and the ICT MedVR Lab.
This project aims to develop innovative tools that can detect depression by analyzing facial expressions, body gestures, and speech. These tools will help assess the psychological state of warfighters, with the goal of improving psychological health awareness and enabling them to seek timely help.

Modeling Cultural Factors in Collaboration and Negotiation
MURI funded by the ARO. In collaboration with the ICT Natural Language Dialogue Group, of which I am a member, and the CMU Robotics Institute (Katia Sycara and Geoff Gordon).
This multidisciplinary research aims to develop validated theories and techniques for descriptive and predictive models of dynamic collaboration and negotiation that consider cultural and social factors.

Advancing Speech Recognition Technology to Support Virtual Human Training
Funded by TATRC. In collaboration with the ICT Natural Language Dialogue Group, of which I am a member, and the USC Signal Analysis and Interpretation Laboratory.
This project aims to achieve better integration and synergy between speech recognition and language understanding.