Augmented Reality Technology for Promoting the Emotional Expression and Social Skills of Adolescents with Autism

PhD === National Cheng Kung University === Department of Industrial Design === 104 === People with autism spectrum disorders (ASD) have a reduced ability to understand the emotions of other people; this ability involves recognizing facial expressions and other nonverbal social cues. Evidence shows that adolescents with ASD miss out on nonverbal cue...


Bibliographic Details
Main Authors: I-JuiLee, 李易叡
Other Authors: Chien-Hsu Chen
Format: Others
Language: en_US
Published: 2015
Online Access: http://ndltd.ncl.edu.tw/handle/69401322376919443031
id ndltd-TW-104NCKU5038003
record_format oai_dc
collection NDLTD
language en_US
format Others
sources NDLTD
description PhD === National Cheng Kung University === Department of Industrial Design === 104 === People with autism spectrum disorders (ASD) have a reduced ability to understand the emotions of other people; this ability involves recognizing facial expressions and other nonverbal social cues. Evidence shows that adolescents with ASD miss out on nonverbal cues to social interaction, which leads to difficulty in understanding and appropriately responding to other people. Therefore, in this research, we created a visual strategy to improve the focus of adolescents with ASD on the facial expressions and other nonverbal social cues of other people, and to help them understand others’ emotions and intentions so that they can respond appropriately in social situations. This dissertation aims to examine (a) how to attract the attention of adolescents with ASD to the nonverbal cues that help most people socially interact with others; (b) how to train adolescents with ASD to understand their own emotional expressions and those of others to promote their social skills; (c) how to help them construct information from stable visual content and imagine emotional situations so that they can produce appropriate facial expressions; and (d) how to promote their social-emotional judgment. In addition, related work was surveyed and discussed in four studies. In the first study, we retrieved social signals from commercials to create stop-motion videos (SMVs) as training materials that contain sequences of nonverbal social cues to support this visual method. We compared the judgment data of adolescents with ASD (4 boys, 2 girls) after they had viewed two types of advertising videos: Video-Based Advertising (VBA) and SMVs. The results indicated that SMV materials offered structured and specific social signals in close-up images for adolescents with ASD, helping to raise their levels of perceptual judgment and situational comprehension. 
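The dissertation produced its SMVs by hand from commercial footage; as a rough, hypothetical sketch of the underlying idea (the function name and step size are ours, not from the study), holding only every n-th frame of a clip yields the discrete, stable images that a stop-motion video presents:

```python
def stop_motion(frames, step=5):
    """Subsample a frame sequence to mimic a stop-motion video (SMV):
    keeping every `step`-th frame yields held, discrete poses that
    present each social cue as one stable image at a time."""
    return frames[::step]

frames = list(range(20))        # stand-in for 20 video frames
print(stop_motion(frames, 5))   # [0, 5, 10, 15]
```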
Furthermore, we confirmed that the nonverbal social cues in the videos can effectively improve the social-emotional and situational awareness of adolescents with ASD. In the Cliplets-Based Video (CBV) study, we were interested in a mixed dynamic and static visual design that can concentrate the attention of adolescents with ASD on only one part of the image. We found that using static or fragmented images is too limited and not ecologically valid. Dynamic videos are advantageous, but adolescents with ASD have trouble focusing their attention on these materials. Microsoft Cliplets provides a halfway point: it allows the animation of a specific element while all of the surrounding elements remain static and undistracting. Therefore, we used this software to easily create half-static, half-dynamic video materials that attract the attention of adolescents with ASD to nonverbal facial cues and teach them the six basic facial expressions in typical social situations. We recruited six new adolescents with ASD (4 boys, 2 girls) through the Taiwan Autism Association and used a multiple-baseline design across participants. This interventional learning system provided a simple yet effective way for adolescents with ASD to select and focus on the important nonverbal facial cues related to social situations. Furthermore, in the augmented reality (AR)-based video modeling (VM) storybook (ARVMS) study, we hoped to increase the motivation of the adolescents and to strengthen their attention to the story events on the screen. To accomplish this, we recruited new participants (5 boys, 1 girl) and used an ARVMS that created a layer between a tangible book and virtual dynamic video clips. In this study, AR has multiple functions: it extends the social features of the story, and it also restricts attention to the most important parts of the videos. 
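Microsoft Cliplets is an interactive authoring tool rather than a library, but the half-static, half-dynamic compositing it performs can be sketched in a few lines. This is a minimal illustration under our own assumptions (frames as NumPy arrays, a hand-supplied boolean mask marking the animated region; the function name is ours, not the tool’s API):

```python
import numpy as np

def make_cliplet(frames, mask, anchor_index=0):
    """Composite half-static, half-dynamic frames (a cinemagraph).

    Pixels where `mask` is True stay animated (taken from each frame);
    all other pixels are frozen from the anchor frame, so motion is
    confined to the one region the viewer should attend to.
    """
    anchor = frames[anchor_index]
    out = []
    for frame in frames:
        # Broadcast the 2-D mask over the RGB channels of each frame.
        composite = np.where(mask[..., None], frame, anchor)
        out.append(composite)
    return out

# Tiny synthetic example: two 2x2 RGB frames, animate only pixel (0, 0).
frames = [np.full((2, 2, 3), 10, dtype=np.uint8),
          np.full((2, 2, 3), 200, dtype=np.uint8)]
mask = np.zeros((2, 2), dtype=bool)
mask[0, 0] = True

clip = make_cliplet(frames, mask)
```

In the second output frame, only the masked pixel changes; every other pixel keeps its value from the anchor frame.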
After three phases (baseline, intervention, and maintenance) of test data had been collected, the results showed that the ARVMS intervention provided an augmented visual indicator that effectively attracted and maintained the attention of adolescents with ASD to nonverbal social cues and helped them better understand the facial expressions and emotions of the storybook characters. Finally, the Augmented-Reality-Based Self-Facial Modeling (ARSFM) learning system study focused on the link between specific facial expressions and corresponding social events, presented through the participants’ own three-dimensional (3D) facial expressions, so that the participants could learn from their own point of view to mirror the facial expressions of different mood states, develop the appropriate corresponding emotional expressions, and compare them with the expressions of others. The AR system provided 3D animations of the six basic facial expressions overlaid on the participants’ faces; we recruited new participants (2 boys, 1 girl) to practice emotional judgments and social skills. Based on the multiple-baseline design across subjects, we found that the AR intervention improved appropriate recognition of, and responses to, the facial emotional expressions seen in the situational task. The findings of these four descriptive and correlational studies allowed the researcher to understand how to use this visual strategy to help adolescents with ASD construct a visual structure and improve their ability to recognize emotional expressions in social situations. We describe our results in detail in the following chapters. We used this visual strategy to engage adolescents with ASD as our design partners and testers by associating them with the ideas and feelings of the characters in the scenario settings that we created. 
We hypothesized that this would positively affect the ability of adolescents with ASD to take advantage of visual cues in situations that depend on shifting attention to nonverbal social cues. Based on this hypothesis, we verified that the adolescents with ASD improved their social skills.
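The abstract reports multiple-baseline results across baseline, intervention, and maintenance phases but does not name the effect metric. One conventional metric for such single-subject designs, shown here purely as an assumption (the scores below are hypothetical), is the percentage of non-overlapping data (PND):

```python
def pnd(baseline, intervention):
    """Percentage of non-overlapping data (PND) for a single-subject design.

    PND = share of intervention-phase points that exceed the highest
    baseline point; values above roughly 70% are conventionally read
    as evidence of an effective intervention.
    """
    ceiling = max(baseline)
    above = sum(1 for score in intervention if score > ceiling)
    return 100.0 * above / len(intervention)

# Hypothetical emotion-recognition scores (% correct) for one participant.
baseline = [30, 35, 40]
intervention = [55, 60, 70, 65]
print(pnd(baseline, intervention))  # 100.0
```

Here every intervention point exceeds the baseline ceiling of 40, so the PND is 100%.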
author2 Chien-Hsu Chen
author_facet Chien-Hsu Chen
I-JuiLee
李易叡
author I-JuiLee
李易叡
spellingShingle I-JuiLee
李易叡
Augmented Reality Technology for Promoting the Emotional Expression and Social Skills of Adolescents with Autism
author_sort I-JuiLee
title Augmented Reality Technology for Promoting the Emotional Expression and Social Skills of Adolescents with Autism
title_short Augmented Reality Technology for Promoting the Emotional Expression and Social Skills of Adolescents with Autism
title_full Augmented Reality Technology for Promoting the Emotional Expression and Social Skills of Adolescents with Autism
title_fullStr Augmented Reality Technology for Promoting the Emotional Expression and Social Skills of Adolescents with Autism
title_full_unstemmed Augmented Reality Technology for Promoting the Emotional Expression and Social Skills of Adolescents with Autism
title_sort augmented reality technology for promoting the emotional expression and social skills of adolescents with autism
publishDate 2015
url http://ndltd.ncl.edu.tw/handle/69401322376919443031
work_keys_str_mv AT ijuilee augmentedrealitytechnologyforpromotingtheemotionalexpressionandsocialskillsofadolescentswithautism
AT lǐyìruì augmentedrealitytechnologyforpromotingtheemotionalexpressionandsocialskillsofadolescentswithautism
AT ijuilee yùnyòngkuòzēngshíjìngjìshùyúzìbìzhèngqīngshǎoniánqíngxùbiǎoxiànyǔshèjiāojìqiǎoxùnliàn
AT lǐyìruì yùnyòngkuòzēngshíjìngjìshùyúzìbìzhèngqīngshǎoniánqíngxùbiǎoxiànyǔshèjiāojìqiǎoxùnliàn
_version_ 1718541311771934720
spelling ndltd-TW-104NCKU5038003 2017-10-01T04:29:45Z http://ndltd.ncl.edu.tw/handle/69401322376919443031 Augmented Reality Technology for Promoting the Emotional Expression and Social Skills of Adolescents with Autism 運用擴增實境技術於自閉症青少年情緒表現與社交技巧訓練 I-JuiLee 李易叡 Chien-Hsu Chen Ling-Yi Lin 陳建旭 林玲伊 2015 學位論文 ; thesis 125 en_US