Efficient deep learning models based on attention techniques for sign language recognition


Detailed Description

Bibliographic Details
Published in: Intelligent Systems with Applications
Main Authors: Nehal F. Attia, Mohamed T. Faheem Said Ahmed, Mahmoud A.M. Alshewimy
Format: Article
Language: English
Published: Elsevier 2023-11-01
Subjects:
Online Access: http://www.sciencedirect.com/science/article/pii/S2667305323001096
Other Bibliographic Details
Abstract: Communication by speaking prevails among the various ways of self-expression and communication between people. Speech presents a significant challenge for some people with disabilities, such as those who are deaf, hard of hearing, or unable to speak. Therefore, these people rely on sign language to interact with others. Sign language is a system of movements and visual messages that ensures the integration of these individuals into groups that communicate vocally. Conversely, it is necessary for others to understand these individuals' gestures and linguistic semantics. The main objective of this work is to establish a new model that enhances the performance of existing paradigms for sign language recognition. This study developed three improved deep-learning models based on YOLOv5x and attention methods for recognizing the alphabetic and numeric information conveyed by hand gestures. These models were evaluated using the MU HandImages ASL and OkkhorNama: BdSL datasets. The proposed models outperform those found in the literature, reaching accuracies of 98.9 % and 97.6 % on the MU HandImages ASL dataset and the OkkhorNama: BdSL dataset, respectively. The proposed models are lightweight and fast enough to be used in real-time ASL recognition and to be deployed on any edge-based platform.
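The abstract does not specify which attention modules were added to YOLOv5x, but a common lightweight choice for detection backbones is channel attention in the squeeze-and-excitation style: globally pool each feature channel, pass the pooled vector through a small bottleneck, and use the resulting sigmoid gates to reweight the channels. The sketch below is purely illustrative (plain NumPy, randomly initialized weights, hypothetical shapes), not the authors' implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feature_map, w1, w2):
    """Squeeze-and-excitation style channel attention (illustrative).

    feature_map: (C, H, W) activations from a conv layer.
    w1: (C//r, C) bottleneck weights, w2: (C, C//r) expansion weights,
    where r is the reduction ratio.
    """
    # Squeeze: global average pooling per channel -> (C,)
    squeezed = feature_map.mean(axis=(1, 2))
    # Excitation: bottleneck MLP, ReLU then sigmoid gates in (0, 1)
    hidden = np.maximum(0.0, w1 @ squeezed)
    gates = sigmoid(w2 @ hidden)  # (C,) per-channel importance
    # Reweight each channel of the original feature map
    return feature_map * gates[:, None, None]

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2          # hypothetical sizes
fmap = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
out = channel_attention(fmap, w1, w2)
print(out.shape)  # (8, 4, 4) -- same shape as the input, channels rescaled
```

In a detector such as YOLOv5x, a block like this would be inserted after selected convolutional stages so that informative channels (e.g. those responding to hand contours) are amplified before the detection heads.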
ISSN:2667-3053