Audiovisual speech recognition: A review and forecast
Audiovisual speech recognition is a favorable solution to multimodality human–computer interaction. For a long time, it has been very difficult to develop machines capable of generating or understanding even fragments of natural language; the fusion of sight, smell, touch, and other senses provides machines with possible mediums to perceive and understand. This article presents a detailed review of recent advances in the audiovisual speech recognition area. After explicitly laying out the development phases of audiovisual speech recognition along a timeline, we focus on descriptions of typical audiovisual speech databases, both single-view and multi-view, since general-purpose public databases should be the first concern for audiovisual speech recognition tasks. For the subsequent challenges, which are inseparably related to feature extraction and dynamic audiovisual fusion, the principal usefulness of deep learning-based tools, such as the deep fully convolutional neural network, the bidirectional long short-term memory network, and the 3D convolutional neural network, lies in the fact that they offer relatively simple solutions to such problems. Having conducted principled analyses and comparisons of the computational load, accuracy, and applicability of well-developed audiovisual speech recognition frameworks, we further present our insights into future audiovisual speech recognition architecture design. We argue that end-to-end audiovisual speech recognition models and deep learning-based feature extractors will guide multimodality human–computer interaction directly to a solution.
Main Authors: | Linlin Xia, Gang Chen, Xun Xu, Jiashuo Cui, Yiping Gao |
---|---|
Format: | Article |
Language: | English |
Published: | SAGE Publishing, 2020-12-01 |
Series: | International Journal of Advanced Robotic Systems |
Online Access: | https://doi.org/10.1177/1729881420976082 |
id: doaj-d792100701354f779f8759f679acf45c
Author affiliations: Linlin Xia, Gang Chen, Jiashuo Cui, and Yiping Gao are with the School of Automation Engineering, Jilin, China; Xun Xu is with the Institute for Superconducting and Electronic Materials, University of Wollongong, Wollongong, Australia.
collection: DOAJ
issn: 1729-8814
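The abstract names 3D convolutional front-ends and bidirectional LSTMs as the workhorse tools for audiovisual feature extraction and fusion. The following is a minimal PyTorch sketch of such a pipeline: a 3D CNN over the lip region feeds a visual BiLSTM, an audio BiLSTM consumes acoustic features, and the two streams are fused late by concatenation. All layer sizes, the `AVFusionNet` name, and the word-level classification head are illustrative assumptions, not the architecture of any specific system the review covers.

```python
# Hedged sketch of an audiovisual fusion model of the kind the review
# surveys: 3D CNN visual front-end + per-modality BiLSTMs + late fusion.
# Every dimension below is an assumption chosen for illustration only.
import torch
import torch.nn as nn

class AVFusionNet(nn.Module):
    def __init__(self, n_classes=500, audio_dim=40):
        super().__init__()
        # Visual front-end: 3D convolution over (channels, time, H, W)
        # captures short-range lip motion before temporal modeling.
        self.visual_frontend = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=(5, 7, 7),
                      stride=(1, 2, 2), padding=(2, 3, 3)),
            nn.BatchNorm3d(32),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),  # pool space, keep time axis
        )
        # One bidirectional LSTM per modality, fused late by concatenation.
        self.visual_lstm = nn.LSTM(32, 128, batch_first=True, bidirectional=True)
        self.audio_lstm = nn.LSTM(audio_dim, 128, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(4 * 128, n_classes)

    def forward(self, video, audio):
        # video: (B, 1, T, H, W) mouth crops; audio: (B, T, audio_dim),
        # assumed time-aligned with the video frames.
        v = self.visual_frontend(video)                # (B, 32, T, 1, 1)
        v = v.squeeze(-1).squeeze(-1).transpose(1, 2)  # (B, T, 32)
        v, _ = self.visual_lstm(v)                     # (B, T, 256)
        a, _ = self.audio_lstm(audio)                  # (B, T, 256)
        fused = torch.cat([v[:, -1], a[:, -1]], dim=-1)  # last-step late fusion
        return self.classifier(fused)                  # (B, n_classes) logits

model = AVFusionNet()
logits = model(torch.randn(2, 1, 25, 88, 88), torch.randn(2, 25, 40))
print(logits.shape)  # torch.Size([2, 500])
```

Swapping the concatenation for attention-based fusion, or replacing the word-level classifier with a CTC or attention decoder, would turn this sketch into the kind of end-to-end model the authors forecast.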