Improving Team's Consistency of Understanding in Meetings

Upon concluding a meeting, participants can occasionally leave with different understandings of what was discussed. Detecting inconsistencies in understanding is a desired capability for an intelligent system designed to monitor meetings and provide feedback to spur stronger shared understanding. In this paper, we present a computational model for the automatic prediction of consistency among team members' understanding of their group's decisions. The model utilizes dialogue features focused on the dynamics of group decision-making. We trained a hidden Markov model using the AMI meeting corpus and achieved a prediction accuracy of 64.2%, as well as robustness across different meeting phases. We then implemented our model in an intelligent system that participated in human team planning for a hypothetical emergency response mission. The system suggested the topics that the team would benefit most from reviewing with one another. Through an experiment with 30 participants, we evaluated the utility of such a feedback system and observed a statistically significant increase of 17.5% in objective measures of the teams' understanding compared with that obtained using a baseline interactive system.
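The abstract describes training a hidden Markov model on dialogue features to predict whether a team's understanding is consistent. As a minimal illustration of that kind of HMM-based sequence classification, the sketch below scores a discrete observation sequence under two class-conditional HMMs with the forward algorithm and picks the likelier class. All parameters, state counts, and observation symbols here are made up for illustration; the paper's actual features and trained model are not reproduced.

```python
import math

def _logsumexp(xs):
    """Numerically stable log(sum(exp(x) for x in xs))."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def forward_log_likelihood(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm in log space."""
    n = len(start)
    # Initialise with the first observation.
    alpha = [math.log(start[s]) + math.log(emit[s][obs[0]]) for s in range(n)]
    for o in obs[1:]:
        alpha = [
            math.log(emit[s][o])
            + _logsumexp([alpha[p] + math.log(trans[p][s]) for p in range(n)])
            for s in range(n)
        ]
    return _logsumexp(alpha)

# Hypothetical 2-state HMMs, one per class. Observation symbols (invented):
# 0 = agreement cue, 1 = clarification question, 2 = topic shift.
CONSISTENT = dict(
    start=[0.7, 0.3],
    trans=[[0.8, 0.2], [0.3, 0.7]],
    emit=[[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]],
)
INCONSISTENT = dict(
    start=[0.4, 0.6],
    trans=[[0.5, 0.5], [0.4, 0.6]],
    emit=[[0.2, 0.3, 0.5], [0.1, 0.4, 0.5]],
)

def predict(seq):
    """Classify a dialogue-feature sequence by comparing class likelihoods."""
    lc = forward_log_likelihood(seq, **CONSISTENT)
    li = forward_log_likelihood(seq, **INCONSISTENT)
    return "consistent" if lc > li else "inconsistent"
```

With these toy parameters, a sequence dominated by agreement cues scores higher under the "consistent" model, while one dominated by topic shifts scores higher under "inconsistent". In the paper's setting, the parameters would instead be estimated from labeled AMI corpus meetings.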


Bibliographic Details
Main Authors: Kim, Joseph (Contributor), Shah, Julie A (Contributor)
Other Authors: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics (Contributor)
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers (IEEE), 2016-11-17T22:56:05Z.
Online Access: Get fulltext
LEADER 01919 am a22001933u 4500
001 105348
042 |a dc 
100 1 0 |a Kim, Joseph  |e author 
100 1 0 |a Massachusetts Institute of Technology. Department of Aeronautics and Astronautics  |e contributor 
100 1 0 |a Kim, Joseph  |e contributor 
100 1 0 |a Shah, Julie A  |e contributor 
700 1 0 |a Shah, Julie A  |e author 
245 0 0 |a Improving Team's Consistency of Understanding in Meetings 
260 |b Institute of Electrical and Electronics Engineers (IEEE),   |c 2016-11-17T22:56:05Z. 
856 |z Get fulltext  |u http://hdl.handle.net/1721.1/105348 
520 |a Upon concluding a meeting, participants can occasionally leave with different understandings of what was discussed. Detecting inconsistencies in understanding is a desired capability for an intelligent system designed to monitor meetings and provide feedback to spur stronger shared understanding. In this paper, we present a computational model for the automatic prediction of consistency among team members' understanding of their group's decisions. The model utilizes dialogue features focused on the dynamics of group decision-making. We trained a hidden Markov model using the AMI meeting corpus and achieved a prediction accuracy of 64.2%, as well as robustness across different meeting phases. We then implemented our model in an intelligent system that participated in human team planning for a hypothetical emergency response mission. The system suggested the topics that the team would benefit most from reviewing with one another. Through an experiment with 30 participants, we evaluated the utility of such a feedback system and observed a statistically significant increase of 17.5% in objective measures of the teams' understanding compared with that obtained using a baseline interactive system. 
546 |a en_US 
655 7 |a Article 
773 |t IEEE Transactions on Human-Machine Systems