Understanding what a captioning network doesn't know



Bibliographic Details
Main Author: Yip, Richard B., M.Eng., Massachusetts Institute of Technology.
Other Authors: Antonio Torralba.
Format: Others
Language: English
Published: Massachusetts Institute of Technology, 2019
Online Access: https://hdl.handle.net/1721.1/122996
Description
Summary: This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.

Thesis: M.Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (page 29).

While recent years have seen significant advances in the capabilities of image recognition and classification neural networks, we still know little about the relationship between the activations of hidden layers and human-understandable concepts. Recent work in network interpretability has provided a framework for analyzing hidden nodes and layers, showing that in many convolutional architectures there exists a significant correlation between groups of nodes and human-understandable concepts. We use this framework to investigate the encodings of images produced by standard image classification networks, in the context of encoder-decoder image captioning networks. These provide a natural way to observe the effect that perturbing node activations has on the image encoding: the generated captions are inherently understandable by humans, and are therefore convenient and informative to inspect. We also generate and analyze captions of images modified by inserting small sub-images of single, human-interpretable concepts. These modifications and the resulting captions show the existence of training-induced correlations between semantically dissimilar words.

by Richard B. Yip.

M.Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
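The two probes described in the abstract (ablating hidden-node activations in the encoder and observing how the generated caption changes, and pasting a small sub-image of a single concept into the input image before captioning) can be sketched roughly as follows. This is a minimal illustration, not the thesis code: the `encoder` and `decoder.generate` interfaces, the tensor shapes, and the channel-wise ablation are assumptions standing in for whatever captioning model is under study.

```python
# Minimal sketch (not the author's implementation) of the two probes described
# in the abstract. Assumptions: `encoder` maps an image tensor to a feature map
# of shape (1, C, H, W), and `decoder.generate(features)` returns a caption
# string; both are hypothetical stand-ins for the captioning model analyzed.

import torch


def caption_with_ablation(encoder, decoder, image, channel=None):
    """Generate a caption, optionally zeroing one encoder channel first."""
    with torch.no_grad():
        features = encoder(image)          # (1, C, H, W) feature map
        if channel is not None:
            features = features.clone()
            features[:, channel] = 0.0     # ablate the chosen hidden unit
        return decoder.generate(features)  # caption string


def paste_concept_patch(image, patch, top=0, left=0):
    """Insert a small sub-image of a single concept into the input image."""
    image = image.clone()
    _, _, ph, pw = patch.shape
    image[:, :, top:top + ph, left:left + pw] = patch
    return image


# Example probe loop: compare the unperturbed caption against captions
# produced after ablating each encoder channel in turn.
# baseline = caption_with_ablation(encoder, decoder, image)
# for c in range(num_channels):
#     perturbed = caption_with_ablation(encoder, decoder, image, channel=c)
#     if perturbed != baseline:
#         print(c, perturbed)
```

Comparing captions before and after each perturbation gives a human-readable signal of what the ablated unit (or the inserted concept patch) contributed to the image encoding, which is the observation strategy the abstract relies on.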