Uncovering What Network Sees by Noise Covering


Bibliographic Details
Main Authors: Chien-Chi Liao, 廖建棋
Other Authors: 吳家麟
Format: Others
Language: en_US
Published: 2018
Online Access: http://ndltd.ncl.edu.tw/handle/kc263g
Description
Summary: Master's === National Taiwan University === Graduate Institute of Computer Science and Information Engineering === 106 === The mystery of how deep neural networks (DNNs) make decisions has discouraged us from fully trusting them. This thesis presents an optimization-based method for visualizing the clues that may explain the reasoning behind a DNN's classification of an image. Our method masks the inputs with varying noise to extract the truly effective and recognizable features. We conducted empirical comparisons with related works on an ImageNet-like dataset, and the obtained saliency maps generally provide better visual quality and higher relevance scores. We also gained insights into the recognition processes of three notable CNNs by applying our approach to their intermediate layers. Moreover, our approach does not require any modification of existing models, and the cost function of the optimization process can be easily formulated with modern deep learning libraries.
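
The abstract does not spell out the exact cost function, so the following is only a minimal PyTorch sketch of one common noise-covering formulation: a sparse mask is optimized so that replacing the covered pixels with freshly sampled noise lowers the target class score, and the resulting mask serves as the saliency map. The function name, mask resolution, learning rate, and regularization weight below are illustrative assumptions, not the author's settings.

    import torch
    import torch.nn.functional as F

    def optimize_mask(model, image, target_class, steps=300, lr=0.1, l1_weight=0.05):
        # Hypothetical sketch: learn a mask whose covered pixels, once replaced
        # by fresh noise each step, most reduce the target class probability.
        model.eval()
        mask = torch.zeros(1, 1, image.shape[2], image.shape[3], requires_grad=True)
        optimizer = torch.optim.Adam([mask], lr=lr)
        for _ in range(steps):
            m = mask.sigmoid()                        # keep mask values in [0, 1]
            noise = torch.randn_like(image)           # "varying noise": resampled every iteration
            covered = image * (1 - m) + noise * m     # cover the image where the mask is high
            prob = F.softmax(model(covered), dim=1)[0, target_class]
            loss = prob + l1_weight * m.mean()        # lower the class score while covering as little as possible
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        return mask.sigmoid().detach()                # high values mark decision-relevant pixels

    # Example usage (assumed setup): a pretrained classifier and a normalized
    # 1x3x224x224 input tensor `image`; the returned mask is the saliency map.
    # model = torchvision.models.resnet50(pretrained=True)
    # saliency = optimize_mask(model, image, target_class=int(model(image).argmax()))

Because the mask is an ordinary tensor optimized by gradient descent through an unmodified, frozen classifier, this kind of formulation needs no changes to the existing model, which matches the abstract's claim that the cost function is easy to express in modern deep learning libraries.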