Disentangled representations in neural models
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016. Cataloged from PDF version of thesis. Includes bibliographical references (pages 57-62).

Representation learning is the foundation for the recent success of neural network models. However, the distributed representations generated by neural networks are far from ideal. Due to their highly entangled nature, they are difficult to reuse and interpret, and they do a poor job of capturing the sparsity which is present in real-world transformations. In this paper, I describe methods for learning disentangled representations in the two domains of graphics and computation. These methods allow neural methods to learn representations which are easy to interpret and reuse, yet they incur little or no penalty to performance. In the Graphics section, I demonstrate the ability of these methods to infer the generating parameters of images and rerender those images under novel conditions. In the Computation section, I describe a model which is able to factorize a multitask learning problem into subtasks and which experiences no catastrophic forgetting. Together these techniques provide the tools to design a wide range of models that learn disentangled representations and better model the factors of variation in the real world.

Main Author: | Whitney, William, M. Eng (William F.) |
---|---|
Other Authors: | Joshua B. Tenenbaum |
Format: | Others |
Language: | English |
Published: | Massachusetts Institute of Technology, 2017 |
Subjects: | Electrical Engineering and Computer Science |
Online Access: | http://hdl.handle.net/1721.1/106449 |
id | ndltd-MIT-oai-dspace.mit.edu-1721.1-106449
record_format | oai_dc
spelling | Disentangled representations in neural models. Whitney, William, M. Eng (William F.), Massachusetts Institute of Technology. Joshua B. Tenenbaum. Department of Electrical Engineering and Computer Science. Thesis: M. Eng., 2016; issued 2017-01-12. 62 pages, application/pdf. OCLC: 967665733. English. http://hdl.handle.net/1721.1/106449. M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission: http://dspace.mit.edu/handle/1721.1/7582
collection | NDLTD
language | English
format | Others
sources | NDLTD
topic | Electrical Engineering and Computer Science.
author2 | Joshua B. Tenenbaum.
author | Whitney, William, M. Eng (William F.) Massachusetts Institute of Technology
title | Disentangled representations in neural models
publisher | Massachusetts Institute of Technology
publishDate | 2017
url | http://hdl.handle.net/1721.1/106449