Gesture-mediated Multimedia Player for Tai Chi Chuan Instruction


Bibliographic Details
Main Authors: Ru-Han Wu, 吳儒涵
Other Authors: 洪一平
Format: Others
Language: en_US
Published: 2014
Online Access: http://ndltd.ncl.edu.tw/handle/54372604980367685080
Summary: Master's === National Taiwan University === Graduate Institute of Computer Science and Information Engineering === 102 === In addition to traditional instruction, multimedia learning materials are widely used in training for many kinds of exercise and dance. With these materials readily accessible, people can train any time and anywhere. Although learning from multimedia (such as watching videos) is convenient, the interaction with a teacher during training is hard to simulate: because video is monotonous and inflexible, learners usually need to adjust the playback position manually and replay it again and again, and it is difficult for them to confirm the correctness and details of the gestures they have learned. To solve these problems, we propose "Follow-Me", a gesture-mediated multimedia player for learning Tai Chi Chuan, built with accelerometer-enabled smart watches and commercial mobile devices. It provides interaction between the user and the multimedia content according to the progress of the user's hand gestures. We apply an incomplete time-series matching method to estimate the progress and completeness of gestures and to perform automatic segmentation; video playback is designed around this segmentation, so that the video content is mediated by the user's changing gestures. In our experiments and user study, we asked users to perform gestures at various speeds to evaluate the relative error time and the percentage error of progress prediction, and the results showed that a low percentage error was achieved. Furthermore, users gave positive feedback on our real-time video feedback system: when asked how much of the time they felt the video adjusted correctly to their gestures, participants reported 71%. These results demonstrate the effectiveness of the gesture-mediated method.
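The core idea in the summary, estimating how far a user has progressed through a gesture by matching a partially observed sensor sequence against a full template, can be illustrated with a small sketch. This is not the thesis's actual algorithm, only a minimal prefix-matching variant of dynamic time warping (DTW) under assumed names; the thesis's incomplete time-series matching method and smart-watch pipeline are more involved.

```python
# Illustrative sketch: estimate gesture progress by aligning a partial
# (incomplete) sensor sequence against a full gesture template with DTW.
# All names here are hypothetical, not taken from the thesis.

def dtw_prefix_progress(partial, template):
    """Align `partial` against every prefix of `template` with DTW and
    return the fraction of the template covered by the best alignment."""
    n, m = len(partial), len(template)
    INF = float("inf")
    # cost[i][j]: DTW cost of aligning partial[:i] with template[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(partial[i - 1] - template[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch partial
                                 cost[i][j - 1],      # stretch template
                                 cost[i - 1][j - 1])  # one-to-one match
    # The partial gesture may end anywhere inside the template; pick the
    # template prefix length with the lowest length-normalized cost.
    best_j = min(range(1, m + 1), key=lambda j: cost[n][j] / (n + j))
    return best_j / m  # progress in [0, 1]

# Example: a one-axis accelerometer template and the first half of a
# performance of the same gesture.
template = [0.0, 0.5, 1.0, 1.5, 2.0, 1.5, 1.0, 0.5, 0.0, -0.5]
partial = [0.0, 0.5, 1.0, 1.5, 2.0]
progress = dtw_prefix_progress(partial, template)  # ~0.5
```

A player built on such an estimate could then set the video position to `progress` times the duration of the current segment, which is the kind of gesture-driven playback control the summary describes.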