Schedule

Start End Activity
13:30 13:35 Opening
13:35 13:55 Ice Breaking
13:55 15:10 Oral Session
15:10 15:30 Break
15:30 16:15 Keynote Speech
16:15 17:40 Group Activity
17:40 17:45 Closing
17:45 17:50 Commemorative Photo
17:50 18:00 Questionnaire

Keynote Speaker


Khiet P. Truong
Assistant Professor
University of Twente

Abstract: Listening to more than words

Since it is becoming more and more common to talk to a device, the need for methods to make this interaction smoother, more enjoyable, and more natural is increasing. Spoken language is more than just words. The way people talk not only reveals information about their age, sex, or the region they are from; it also reveals information about their socio-affective, mental, and physical state. If agents can automatically extract this kind of information from the way the user talks, it will help regulate human-agent interaction and open up opportunities for innovative talking agents.
In this talk, I will present an overview of our work on human-agent interaction and how we endow agents with (social) human-like capabilities such as backchanneling and detecting engagement. Acknowledging that you are listening to someone by nodding and saying "uh-huh" comes naturally to humans, but, as we will see, it is rather challenging for virtual agents. Similarly, detecting engagement in a group of children is challenging for a machine that has to deal with naturalistic observations.

Bio

Khiet Truong is an assistant professor in the Human Media Interaction group at the University of Twente. Her interests lie in the automatic analysis and understanding of verbal and nonverbal (vocal) behaviors in human-human and human-machine interaction, and in the design of socially interactive technology to support human needs. Taking an interdisciplinary approach within the realms of affective computing and social signal processing, she aims to develop socially and affectively intelligent interfaces (e.g. virtual conversational agents, social robots) that can recognize and display social and affective signals, and to study how humans interact with this new kind of technology. Coming from a background in (computational) paralinguistics and speech analysis, her main focus is on analysing the vocal modality of expression, in addition to the visual (e.g. facial expressions, eye gaze) and physiological (e.g. heart rate, galvanic skin response) modalities in social interaction.

Webpages:

    • khiettruong.space
    • hmi.ewi.utwente.nl/Member/khiet_truong
    • Google Scholar

Proceedings

Antonia Eisenkoeck and James Moore. Differences in the Intentionality Bias when Judging Human and Robotic Action.

Hiroyasu Ide and Takashi Okuda. Behavioral Characteristics of Humanoid Robot to Suppress Bullying in School.

Masato Ikami, Yoichi Utsunomiya, and Takashi Okuda. Suggestion and Evaluation of a Model which Express Career Behavior and Social Network using Multi-Agent Simulation.

Saki Nawata and Tatsuo Unemi. Toward a design of human-computer co-drawing system: preliminary experiments on effects of imitation and interference.