Telling Stories to Robots: The Effect of Backchanneling on a Child's Storytelling

While there has been a growing body of work in child-robot interaction, we still have very little knowledge regarding young children's speaking and listening dynamics and how a robot companion should decode these behaviors and encode its own in a way children can understand. In developing a backchannel prediction model based on observed nonverbal behaviors of 4-6 year-old children, we investigate the effects of an attentive listening robot on a child's storytelling. We provide an extensive analysis of young children's nonverbal behavior with respect to how they encode and decode listener responses and speaker cues. Through a collected video corpus of peer-to-peer storytelling interactions, we identify attention-related listener behaviors as well as speaker cues that prompt opportunities for listener backchannels. Based on our findings, we developed a backchannel opportunity prediction (BOP) model that detects four main speaker cue events based on prosodic features in a child's speech. This rule-based model is capable of accurately predicting backchanneling opportunities in our corpora. We further evaluate this model in a human-subjects experiment where children told stories to an audience of two robots, each with a different backchanneling strategy. We find that our BOP model produces contingent backchannel responses that convey an increased perception of an attentive listener, and children prefer telling stories to the BOP model robot.


Bibliographic Details
Main Authors: Park, Hae Won (Author), Gelsomini, Mirko (Author), Lee, Jin Joo (Author), Breazeal, Cynthia Lynn (Author)
Other Authors: Massachusetts Institute of Technology. Media Laboratory (Contributor), Massachusetts Institute of Technology. Personal Robots Group (Contributor)
Format: Article
Language:English
Published: Association for Computing Machinery (ACM), 2020-09-09T13:32:13Z.
Online Access:Get fulltext
LEADER 02355 am a22002173u 4500
001 127208
042 |a dc 
100 1 0 |a Park, Hae Won  |e author 
100 1 0 |a Massachusetts Institute of Technology. Media Laboratory  |e contributor 
100 1 0 |a Massachusetts Institute of Technology. Personal Robots Group  |e contributor 
700 1 0 |a Gelsomini, Mirko  |e author 
700 1 0 |a Lee, Jin Joo  |e author 
700 1 0 |a Breazeal, Cynthia Lynn  |e author 
245 0 0 |a Telling Stories to Robots: The Effect of Backchanneling on a Child's Storytelling 
260 |b Association for Computing Machinery (ACM),   |c 2020-09-09T13:32:13Z. 
856 |z Get fulltext  |u https://hdl.handle.net/1721.1/127208 
520 |a While there has been a growing body of work in child-robot interaction, we still have very little knowledge regarding young children's speaking and listening dynamics and how a robot companion should decode these behaviors and encode its own in a way children can understand. In developing a backchannel prediction model based on observed nonverbal behaviors of 4-6 year-old children, we investigate the effects of an attentive listening robot on a child's storytelling. We provide an extensive analysis of young children's nonverbal behavior with respect to how they encode and decode listener responses and speaker cues. Through a collected video corpus of peer-to-peer storytelling interactions, we identify attention-related listener behaviors as well as speaker cues that prompt opportunities for listener backchannels. Based on our findings, we developed a backchannel opportunity prediction (BOP) model that detects four main speaker cue events based on prosodic features in a child's speech. This rule-based model is capable of accurately predicting backchanneling opportunities in our corpora. We further evaluate this model in a human-subjects experiment where children told stories to an audience of two robots, each with a different backchanneling strategy. We find that our BOP model produces contingent backchannel responses that convey an increased perception of an attentive listener, and children prefer telling stories to the BOP model robot. 
520 |a National Science Foundation (U.S.) (NSF grant IIS-1523118) 
546 |a en 
655 7 |a Article 
773 |t HRI'17: Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction