Multi-Video Models for Classifying Hand Impairment After Stroke Using Egocentric Video


Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Neural Systems and Rehabilitation Engineering
Main Authors: Anne Mei, Meng-Fen Tsai, Jose Zariffa
Format: Article
Language: English
Published: IEEE, 2025-01-01
Subjects:
Online Access: https://ieeexplore.ieee.org/document/11115139/
Additional Bibliographic Details
Abstract: Objectives: After stroke, hand function assessments are used as outcome measures to evaluate new rehabilitation therapies, but they do not reflect true performance in natural environments. Wearable (egocentric) cameras provide a way to capture hand function information during activities of daily living (ADLs). However, while clinical assessments involve observing multiple functional tasks, existing deep learning methods developed to analyze hands in egocentric video can only consider single ADLs. This study presents a novel multi-video architecture that processes multiple task videos to make improved estimations of hand impairment. Methods: An egocentric video dataset of ADLs performed by stroke survivors in a home simulation lab was used to develop single-input and multi-input video models for binary impairment classification. Using SlowFast as a base feature extractor, late fusion (majority voting, fully connected network) and intermediate fusion (concatenation, Markov chain) were investigated for building multi-video architectures. Results: In evaluation with Leave-One-Participant-Out Cross-Validation, intermediate concatenation fusion achieved the best performance of the fusion techniques investigated. The resulting multi-video model for cropped inputs achieved an F1-score of 0.778 ± 0.129 and significantly outperformed its single-video counterpart (F1-score of 0.696 ± 0.102). Similarly, the multi-video model for full-frame inputs (F1-score of 0.796 ± 0.102) significantly outperformed its single-video counterpart (F1-score of 0.708 ± 0.099). Conclusion: Multi-video architectures are beneficial for estimating hand impairment from egocentric video after stroke. Significance: The proposed deep learning solution is the first of its kind in multi-video analysis and opens the door to further applications in automating other multi-observation assessments for clinical use.
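As an illustration of the intermediate concatenation fusion described in the abstract, the following is a minimal PyTorch sketch, not the authors' implementation: a shared backbone (SlowFast in the paper; a stand-in linear layer here) embeds each task video, the embeddings are concatenated, and a small fully connected head outputs a binary impairment logit. All names and dimensions (MultiVideoConcatFusion, feature_dim=400, hidden size 256) are illustrative assumptions.

import torch
import torch.nn as nn


class MultiVideoConcatFusion(nn.Module):
    """Intermediate concatenation fusion over multiple task videos."""

    def __init__(self, backbone: nn.Module, feature_dim: int, num_videos: int):
        super().__init__()
        self.backbone = backbone  # shared per-video feature extractor
        self.classifier = nn.Sequential(
            nn.Linear(feature_dim * num_videos, 256),
            nn.ReLU(),
            nn.Linear(256, 1),  # single logit: impaired vs. unimpaired
        )

    def forward(self, videos: list) -> torch.Tensor:
        # Intermediate fusion: concatenate per-video embeddings, then classify.
        feats = [self.backbone(v) for v in videos]  # each: (batch, feature_dim)
        fused = torch.cat(feats, dim=1)             # (batch, feature_dim * num_videos)
        return self.classifier(fused)


# Usage with a stand-in backbone over hypothetical precomputed 512-d clip descriptors:
backbone = nn.Linear(512, 400)                   # placeholder, not SlowFast
model = MultiVideoConcatFusion(backbone, feature_dim=400, num_videos=3)
clips = [torch.randn(2, 512) for _ in range(3)]  # 3 task videos, batch of 2
logits = model(clips)                            # shape: (2, 1)

Concatenating embeddings before the classifier lets the head weigh evidence across tasks jointly, which is one plausible reason this fusion outperformed late fusion by majority voting in the reported results.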
ISSN: 1534-4320 (print); 1558-0210 (electronic)