Development and Evaluation of Human-Computer Cooperation Sleep Scoring System Based on The Reliability Analysis of Sleep Stage Changes

Bibliographic Details
Main Authors: Peng-Yu Chen, 陳鵬宇
Other Authors: Sheng-Fu Liang
Format: Others
Language: en_US
Published: 2014
Online Access: http://ndltd.ncl.edu.tw/handle/65443768855772828629
Description
Summary: Master's Thesis === National Cheng Kung University === Department of Computer Science and Information Engineering === 102 === Sleep occupies more than one-third of human life, and good sleep contributes significantly to quality of life. Yet not everyone sleeps well; many people are troubled by sleep-related disorders. Clinically, polysomnography (PSG) is therefore used to record sleep physiological signals, which are then scored manually by experts for diagnosis. Because manual scoring is a subjective and time-consuming task, many automatic sleep scoring methods have been proposed. Although these methods achieve good agreement, they provide only the final scoring result: experts cannot see the basis of the automatic decision, so they must re-score whenever they distrust the output. Used this way, an automatic scoring method cannot achieve its design purpose of reducing interpretation time. This study therefore proposes a human-computer cooperation scoring system, based on sleep physiological signals, that provides a reliability measure as a reference. The advantage of this system is that the scoring results are divided into two parts, high reliability and low reliability. A scorer can skip the high-reliability epochs to reduce scoring time, while still trusting the overall scoring results. The system was tested by two scorers: the average agreement between full manual scoring and scoring with the cooperation system reached 88.47%, the kappa coefficient was 0.82, and scoring time was reduced by 56.2%. We hope this human-computer cooperation scoring system can be used in practical clinical applications, providing more reliable scoring results in addition to reducing scoring time.
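The workflow described in the abstract can be sketched in a few lines: an automatic scorer assigns each epoch a sleep stage together with a reliability value, only low-reliability epochs are passed to the expert for re-scoring, and the final labels are compared against full manual scoring using percent agreement and Cohen's kappa. This is a minimal illustration, not the thesis's actual algorithm; the function names, the `(stage, reliability)` representation, and the 0.9 threshold are all assumptions introduced for the example.

```python
from collections import Counter

def cooperative_scoring(auto_results, expert_rescore, threshold=0.9):
    """Hypothetical cooperation loop: auto_results is a list of
    (stage, reliability) pairs, one per 30-second epoch; expert_rescore
    is a callable(epoch_index) -> stage invoked only for low-reliability
    epochs. Returns the final stage sequence and the fraction of epochs
    the expert could skip (the time-saving proxy). The 0.9 threshold is
    illustrative, not a value from the thesis."""
    final, skipped = [], 0
    for i, (stage, reliability) in enumerate(auto_results):
        if reliability >= threshold:      # high reliability: accept as-is
            final.append(stage)
            skipped += 1
        else:                             # low reliability: expert decides
            final.append(expert_rescore(i))
    return final, skipped / len(auto_results)

def cohen_kappa(a, b):
    """Cohen's kappa between two scorers' label sequences:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n            # observed
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[s] * cb[s] for s in set(a) | set(b)) / (n * n)  # chance
    return (po - pe) / (1 - pe)
```

With labels drawn from the usual five stages (W, N1, N2, N3, REM), identical sequences give a kappa of 1.0 and chance-level agreement gives 0.0, which is the sense in which the reported 0.82 indicates strong agreement.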