Special Topic: Recognizing User Affect in Dialogue

Systems that exhibit, detect, or model emotion and other attitudes (e.g., surprise, uncertainty, engagement) are sometimes called "affective". Current work varies widely in which affective aspects are considered, how affect is modeled, and how those models are used in dialogue systems.

Required Readings

  1. Kate Forbes-Riley and Diane Litman. Predicting Emotion in Spoken Dialogue from Multiple Knowledge Sources. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL 2004).
  2. Chul Min Lee and Shrikanth S. Narayanan. Toward Detecting Emotions in Spoken Dialogs. IEEE Transactions on Speech and Audio Processing, Vol. 13, No. 2, March 2005.
  3. J. Grafsgaard, J. Wiggins, K. E. Boyer, E. Wiebe, and J. Lester. Predicting Learning and Affect from Multimodal Data Streams in Task-Oriented Tutorial Dialogue. In Educational Data Mining 2014, July 2014.
  4. Giota Stratou and Louis-Philippe Morency. MultiSense—Context-Aware Nonverbal Behavior Analysis Framework: A Psychological Distress Use Case. IEEE Transactions on Affective Computing, Vol. 8, No. 2, April-June 2017.

Other Readings
  1. Zhihong Zeng, Maja Pantic, Glenn I. Roisman, and Thomas S. Huang. A Survey of Affect Recognition Methods: Audio, Visual and Spontaneous Expressions. In ICMI 2007.
  2. Jeremy Ang, Rajdip Dhillon, Ashley Krupski, Elizabeth Shriberg, and Andreas Stolcke. Prosody-Based Automatic Detection of Annoyance and Frustration in Human-Computer Dialog. In Proc. ICSLP 2002.
  3. T. Vogt, E. André, and N. Bee. EmoVoice – A Framework for Online Recognition of Emotions from Voice. In Proc. Workshop on Perception and Interactive Technologies for Speech-Based Systems, Springer, Kloster Irsee, Germany, June 2008.