An Approach to Head Movement Compensation for a Wearable Eye Tracker


Bibliographic Details
Main Authors: Shih-Chen Tseng, 曾士誠
Other Authors: Chi-Wu Huang
Format: Others
Language: zh-TW
Online Access: http://ndltd.ncl.edu.tw/handle/86300946299774178682
Description
Summary: Master's === National Taiwan Normal University === Department of Electrical Engineering === 103 === This thesis proposes an approach that uses a 3-D rotation matrix to compensate for the errors that head movements introduce into 2-D mapping, which maps the glint-pupil difference vector obtained from the eye image onto a screen to estimate the Point of Gaze (POG). With this compensation, accuracy stays within a predefined bound even when the head moves away from the original calibration position, freeing the user from having to keep the head uncomfortably confined in a chin rest during eye tracking.

A review of recent eye-tracking techniques shows that both 2-D polynomial mapping and 3-D modeling fundamentally track the glint, a bright reflection of the light source on the eye surface, together with the rapidly moving pupil, in order to find the POG. 2-D mapping uses selected polynomial functions to compute the POG on screen, as mentioned above, while 3-D modeling measures and computes the 3-D positions of the pupil center and the glint so that the visual axis of the eye can be reconstructed; the POG is then found where the visual axis intersects the screen or any other object in the real world.

Before tracking starts, 2-D mapping requires only a simple calibration procedure that uses several predefined points on the screen to estimate the coefficients of the selected polynomial functions, which are then used during tracking. Calibration for 3-D models is more complicated and depends on the system configuration, such as mono-camera or stereo-vision measurements. It is also expensive, because some models need additional auxiliary wide-angle stereo cameras and a 3-D digitizer for system calibration.

This work uses two PS3 cameras, one for the eye and one for the scene, together with open-source software to build a low-cost (under $100) wearable eye tracker capable of performing eye-controlled typing with quite satisfactory accuracy.
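The pipeline described above — calibrating a polynomial mapping from glint-pupil difference vectors to screen points, then undoing the head rotation before mapping — can be sketched as follows. This is a minimal illustration, not the thesis implementation: the second-order polynomial form, the yaw-pitch-roll parameterization, and all function names are assumptions.

```python
import numpy as np

def fit_polynomial_mapping(diff_vectors, screen_points):
    """Fit a second-order polynomial mapping (assumed form) from
    glint-pupil difference vectors (dx, dy) to screen coordinates,
    using least squares over several predefined calibration points."""
    dx, dy = diff_vectors[:, 0], diff_vectors[:, 1]
    A = np.column_stack([np.ones_like(dx), dx, dy, dx * dy, dx**2, dy**2])
    coeffs, *_ = np.linalg.lstsq(A, screen_points, rcond=None)
    return coeffs  # shape (6, 2): one coefficient column per screen axis

def rotation_matrix(yaw, pitch, roll):
    """3-D head-rotation matrix R = Rz(yaw) @ Ry(pitch) @ Rx(roll)
    (parameterization assumed; the thesis does not specify one here)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def compensate_and_map(diff_vec, head_rotation, coeffs):
    """Rotate the difference vector back to the calibration pose,
    then apply the calibrated polynomial mapping to estimate the POG."""
    v = np.array([diff_vec[0], diff_vec[1], 0.0])   # embed in 3-D
    v_corr = head_rotation.T @ v                     # undo head rotation
    dx, dy = v_corr[0], v_corr[1]
    feats = np.array([1.0, dx, dy, dx * dy, dx**2, dy**2])
    return feats @ coeffs                            # (x, y) on screen
```

With zero head rotation the compensation is the identity, so the mapping simply reproduces the calibrated polynomial; as the head rotates away from the calibration pose, the correction keeps the difference vector expressed in the calibration frame before the 2-D mapping is applied.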
Eye-controlled typing is an important Human-Computer Interface (HCI) application, especially for disabled people. Commercial wearable eye trackers are currently available at prices starting above $10,000. The homemade eye tracker in our laboratory is mainly based on 2-D tracking, with self-developed application software such as Scan-path Trace, Hot-zone Display, Interest-region Search, and Eye-controlled Typing. In addition to modifying 2-D mapping with the rotation matrix, we plan to develop 3-D-based tracking, which will hopefully work in real-world tracking environments rather than on a screen only, enabling wider applications.