Interpreting black-box models through sufficient input subsets

Bibliographic Details
Main Author: Carter, Brandon M.
Other Authors: Gifford, David K.
Format: Thesis
Language: English
Published: Massachusetts Institute of Technology, 2019
Online Access: https://hdl.handle.net/1721.1/123008
Description
Summary: This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.

Thesis: M.Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 73-77).

Abstract: Recent progress in machine learning has come at the cost of interpretability, earning the field a reputation for producing opaque, "black-box" models. While deep neural networks often achieve superior predictive accuracy over traditional models, the functions and representations they learn are usually highly nonlinear and difficult to interpret. This lack of interpretability hinders the adoption of deep learning methods in fields such as medicine, where understanding why a model made a decision is crucial. Existing techniques for explaining the decisions of black-box models are often either restricted to a specific type of predictor or undesirably sensitive to factors unrelated to the model's decision-making process. In this thesis, we propose sufficient input subsets: minimal subsets of input features whose values form the basis for a model's decision. Our technique can rationalize decisions made by a black-box function on individual inputs and can also explain the basis for misclassifications. Moreover, general principles that globally govern a model's decision-making can be revealed by searching for clusters of such input patterns across many data points. Our approach is conceptually straightforward, entirely model-agnostic, simple to implement using instance-wise backward selection, and able to produce more concise rationales than existing techniques. We demonstrate the utility of our interpretation method on various neural network models trained on text, genomic, and image data.

by Brandon M. Carter. M.Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science.
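The abstract notes that sufficient input subsets can be computed with instance-wise backward selection on a black-box model. As a rough illustration only, the Python sketch below shows one way such a greedy backward-selection procedure could look for a single input; the helper name find_sufficient_subset, the masking scheme, and the threshold handling are assumptions made for this example and are not taken from the thesis.

```python
import numpy as np

def find_sufficient_subset(f, x, mask_value, threshold):
    """Greedy backward-selection sketch (illustrative only, not the
    thesis's exact SIS algorithm).

    f          : callable mapping a 1-D feature array to a scalar score
    x          : 1-D NumPy array of input features
    mask_value : value used to "remove" a feature (e.g., 0 or a mean)
    threshold  : score the masked input must still reach to count as sufficient

    Returns indices of a small subset of features that, with all other
    features masked, keep f's output at or above `threshold`.
    """
    n = len(x)
    remaining = list(range(n))
    removal_order = []

    # Phase 1: backward selection. Repeatedly mask the feature whose
    # removal hurts the score the least, recording the removal order.
    working = x.astype(float).copy()
    while remaining:
        best_i, best_score = None, -np.inf
        for i in remaining:
            trial = working.copy()
            trial[i] = mask_value
            score = f(trial)
            if score > best_score:
                best_i, best_score = i, score
        working[best_i] = mask_value
        remaining.remove(best_i)
        removal_order.append(best_i)

    # Phase 2: add features back, most important (last removed) first,
    # until the otherwise-masked input clears the threshold.
    masked = np.full(n, mask_value, dtype=float)
    subset = []
    for i in reversed(removal_order):
        masked[i] = x[i]
        subset.append(i)
        if f(masked) >= threshold:
            break
    return sorted(subset)


if __name__ == "__main__":
    # Toy example: the "model" only depends on features 2 and 5,
    # so those two indices form a sufficient input subset.
    rng = np.random.default_rng(0)
    x = rng.normal(size=8)
    f = lambda v: v[2] + v[5]
    print(find_sufficient_subset(f, x, mask_value=0.0, threshold=f(x) - 1e-9))
```

In this toy run the procedure returns [2, 5], since masking every other feature leaves the model's score unchanged; on a real predictor the threshold would typically be set relative to the model's confidence on the full input.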