Summary: | Master's === National Taiwan University === Graduate Institute of Computer Science and Information Engineering === 107 === Breast cancer is a common and leading cause of cancer death in women worldwide. However, early examination and improved treatment can increase the survival rate. In clinical examination, ultrasound (US) images are usually used to evaluate the malignancy of breast tumors. The Breast Imaging Reporting and Data System (BI-RADS) defines five lexicons describing mass tissue in ultrasound images for assessing the BI-RADS grade, which is used to evaluate tumor malignancy. Hence, we proposed an automatic BI-RADS grading system for tumor diagnosis. First, we adopted a generative adversarial network (GAN)-based segmentation method to separate each US image into different image regions, each providing different image information. Then, we predicted each lexicon with convolutional neural network (CNN) models to assess the BI-RADS grade and evaluate tumor malignancy. A total of 335 biopsy-proven tumors, including 148 benign tumors and 187 malignant tumors, were used to evaluate the proposed system in this study. The final BI-RADS grade assessment of all tumors before biopsy was BI-RADS 3 for 90 cases, BI-RADS 4 for 114 cases, and BI-RADS 5 for 131 cases. The accuracy of the proposed system in BI-RADS grade assessment and tumor diagnosis was 71.64% (240/335) and 85.97% (288/335), respectively. For comparison, we also applied CNN models directly to different input images to assess the BI-RADS grade and evaluate tumor malignancy. The accuracy of the CNN models using the original US images in BI-RADS grade assessment and tumor diagnosis was 60.00% (201/335) and 78.51% (263/335), respectively. In conclusion, the proposed system can provide accurate BI-RADS grades and diagnostic results for radiologists, and it performs better than applying CNN models directly to different input images in both BI-RADS grade assessment and tumor diagnosis.
|
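To illustrate the two-stage pipeline described in the summary (GAN-based segmentation followed by per-lexicon CNN prediction and grade assignment), the following is a minimal PyTorch sketch. The module names (SegmentationGenerator, LexiconCNN), the layer choices, and the lexicon-to-grade rule in grade_from_lexicons are all hypothetical placeholders; the thesis's actual GAN architecture, lexicon classes, and BI-RADS grading rules are not specified in this summary.

```python
# Minimal sketch of the two-stage pipeline, assuming PyTorch and placeholder
# architectures; not the thesis's actual implementation.
import torch
import torch.nn as nn

class SegmentationGenerator(nn.Module):
    """Stand-in for the GAN generator that segments a US image into regions."""
    def __init__(self, n_regions=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_regions, 1),          # per-pixel region scores
        )

    def forward(self, x):
        return self.net(x).softmax(dim=1)         # soft region masks

class LexiconCNN(nn.Module):
    """One small classifier per BI-RADS lexicon (five lexicons in total)."""
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(8, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def grade_from_lexicons(lexicon_preds):
    """Hypothetical rule: more suspicious lexicon findings -> higher grade."""
    suspicious = sum(int(p.argmax()) > 0 for p in lexicon_preds)
    return 3 if suspicious == 0 else (4 if suspicious <= 2 else 5)

# Usage on one grayscale US image (batch of size 1):
us_image = torch.rand(1, 1, 128, 128)
masks = SegmentationGenerator()(us_image)                    # stage 1: region masks
roi = us_image * masks[:, :1]                                # e.g. tumor region
lexicon_models = [LexiconCNN(n_classes=2) for _ in range(5)] # stage 2: lexicons
preds = [m(roi) for m in lexicon_models]
print("BI-RADS grade:", grade_from_lexicons(preds))
```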