Bilateral Recognition of Facial Expressions for Human-Robot Interaction Applications


Bibliographic Details
Main Authors: Ming-Chieh Tsai, 蔡明傑
Other Authors: 羅仁權
Format: Others
Language: en_US
Published: 2011
Online Access: http://ndltd.ncl.edu.tw/handle/93294377103355627921
Description
Summary: Master's thesis === National Taiwan University === Graduate Institute of Electrical Engineering === Academic year 99 === The growing elderly population raises demand for social welfare, medical care, and other public services. From a technology standpoint, helping the elderly live comfortable, safe, and healthy lives with the assistance of intelligent service robots is an important issue, and intelligent robotics is a high-priority development industry in the 21st century. Human-Robot Interaction (HRI) plays an important role in this field, and we are interested in how robots interact through facial expressions: a robot can take different actions depending on the facial expression it observes. Recognizing emotional information is a key step toward giving robots the ability to interact with humans more naturally and intelligently, so that a robot understands people rather than merely taking orders through a mouse or keyboard. We therefore develop a real-time facial expression recognition system. In this thesis, we use an active shape model to extract feature points, combined with facial color information to improve accuracy, and a DAG support vector machine to classify the expressions; the kernel of the support vector machine is a radial basis function. According to the recognition results, our robot head can make facial expressions to interact with humans. The speech interface is implemented with the Microsoft Speech API (SAPI). All of the systems, user interfaces, software frameworks, and applications proposed in this thesis are implemented in the native C++ programming language with OpenCV. The experiments are conducted on the Young Einstein robot head, an intelligent robot developed by the Intelligent Robotics and Automation (IRA) Laboratory at National Taiwan University.