Multiresolutional Graph Cuts for Brain Extraction from MR Images

Bibliographic Details
Main Authors: Yi-Ting Wang, 王億婷
Other Authors: Yong-Sheng Chen
Format: Others
Language: en_US
Published: 2010
Online Access: http://ndltd.ncl.edu.tw/handle/65734665324397004425
Description
Summary: Master's Thesis === National Chiao Tung University === Institute of Computer Science and Engineering === 99 === We propose a multiresolutional brain extraction framework that uses the graph cuts technique to classify head magnetic resonance (MR) images into brain and non-brain regions. Our goal is to achieve both high sensitivity and high specificity in brain extraction. Starting from an extracted brain with high sensitivity but low specificity, we refine the segmentation by trimming non-brain regions in a coarse-to-fine manner. The brain extracted at a coarser level is propagated to the finer level to estimate foreground/background seeds that serve as constraints, and this pre-determined foreground from the coarser level reduces the shortcut problem of graph cuts. To account for intensity inhomogeneities, we estimate the intensity distribution locally by partitioning the volume images at each resolution into different numbers of smaller cubes and applying the graph cuts method to each cube individually. The proposed method was compared with four existing methods, the Brain Surface Extractor (BSE), the Brain Extraction Tool (BET), the Hybrid Watershed Algorithm (HWA), and ISTRIP, on four data sets: the first and second IBSR data sets from the Internet Brain Segmentation Repository, BrainWeb phantom images from the Montreal Neurological Institute, and scans of healthy subjects collected at Taipei Veterans General Hospital (VGHTPE). In the performance evaluation of brain extraction, our method outperforms the others on the first and second IBSR data sets and on the BrainWeb phantom data set, and performs comparably with the BET and ISTRIP methods on the VGHTPE data set.
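
The coarse-to-fine refinement described in the summary can be illustrated with a short sketch. The Python code below is not the thesis implementation; it is a minimal sketch assuming the PyMaxflow and SciPy packages, with illustrative assumed parameters (the resolution factors, lam, sigma, and the seed erosion/dilation radii), non-empty seed sets, and the per-cube local intensity modeling omitted for brevity. It performs one graph cut per resolution level, with seed constraints derived from the mask propagated from the coarser level.

import numpy as np
import maxflow                      # PyMaxflow: min-cut/max-flow on grid graphs
from scipy import ndimage

def graph_cut_level(volume, fg_seeds, bg_seeds, lam=1.0, sigma=10.0):
    # Unary terms: negative log-likelihood under Gaussian intensity models
    # estimated from the seed voxels of each class (assumed model choice).
    def neg_log_gauss(x, samples):
        mu, sd = samples.mean(), samples.std() + 1e-6
        return 0.5 * ((x - mu) / sd) ** 2 + np.log(sd)

    d_brain = neg_log_gauss(volume, volume[fg_seeds])   # cost of the "brain" label
    d_nonbr = neg_log_gauss(volume, volume[bg_seeds])   # cost of the "non-brain" label

    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(volume.shape)

    # Pairwise terms: contrast-sensitive smoothness between neighboring voxels.
    grad = ndimage.gaussian_gradient_magnitude(volume.astype(float), sigma=1.0)
    g.add_grid_edges(nodes, weights=lam * np.exp(-grad ** 2 / (2 * sigma ** 2)),
                     symmetric=True)

    # Terminal edges; seeds become hard constraints via very large capacities.
    inf = 1e9
    src = np.where(bg_seeds, inf, d_brain)   # paid when a voxel lands on the brain (sink) side
    snk = np.where(fg_seeds, inf, d_nonbr)   # paid when a voxel lands on the non-brain (source) side
    g.add_grid_tedges(nodes, src, snk)

    g.maxflow()
    return g.get_grid_segments(nodes)        # True = sink segment = brain, under this construction

def multiresolution_brain_extraction(volume, init_mask, factors=(4, 2, 1)):
    # init_mask: an over-inclusive (high-sensitivity, low-specificity) brain mask
    # with the same shape as volume. factors: downsampling factors, coarsest first.
    mask = init_mask
    for f in factors:
        # Downsample by subsampling; the mask is kept at full resolution and
        # subsampled the same way, so the shapes always match.
        vol_l = volume[::f, ::f, ::f]
        mask_l = mask[::f, ::f, ::f]
        # Seeds propagated from the coarser result: an eroded mask gives
        # confident brain seeds, the complement of a dilated mask gives
        # confident non-brain seeds.
        fg = ndimage.binary_erosion(mask_l, iterations=2)
        bg = ~ndimage.binary_dilation(mask_l, iterations=2)
        seg = graph_cut_level(vol_l, fg, bg)
        # Upsample the refined segmentation back to full resolution.
        for axis in range(3):
            seg = np.repeat(seg, f, axis=axis)
        mask = seg[:volume.shape[0], :volume.shape[1], :volume.shape[2]]
    return mask

In this sketch the pre-determined foreground (the eroded coarse-level mask) plays the role of the hard constraint that mitigates the shortcut problem of graph cuts; the thesis additionally applies the cut separately inside local cubes at each resolution so that the intensity models adapt to spatial intensity inhomogeneity.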