The Partial Information Decomposition of Generative Neural Network Models

In this work we study the distributed representations learnt by generative neural network models. In particular, we investigate the properties of redundant and synergistic information that groups of hidden neurons contain about the target variable. To this end, we use an emerging branch of information theory called partial information decomposition (PID) and track the informational properties of the neurons through training. We find two differentiated phases during the training process: a first short phase in which the neurons learn redundant information about the target, and a second phase in which neurons start specialising and each of them learns unique information about the target. We also find that in smaller networks individual neurons learn more specific information about certain features of the input, suggesting that learning pressure can encourage disentangled representations.
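The decomposition described in the abstract can be illustrated numerically. The sketch below uses the Williams-Beer I_min redundancy measure, one common PID measure (the paper's exact estimator may differ), to split the information two binary sources carry about a target into redundant, unique, and synergistic parts, recovering the textbook XOR result in which all information is synergistic.

```python
from collections import defaultdict
from math import log2

def mutual_info(joint, xs, ys):
    """I(X;Y) in bits; joint maps outcome tuples to probabilities,
    xs/ys are index tuples selecting the components of each variable."""
    px, py, pxy = defaultdict(float), defaultdict(float), defaultdict(float)
    for k, p in joint.items():
        x = tuple(k[i] for i in xs)
        y = tuple(k[i] for i in ys)
        px[x] += p
        py[y] += p
        pxy[(x, y)] += p
    return sum(p * log2(p / (px[x] * py[y])) for (x, y), p in pxy.items() if p > 0)

def imin_redundancy(joint, sources, t):
    """Williams-Beer I_min: expected minimum specific information that
    each source provides about the target outcome (component index t)."""
    py = defaultdict(float)
    for k, p in joint.items():
        py[k[t]] += p
    red = 0.0
    for y, p_y in py.items():
        specs = []
        for idx in sources:
            px, pxy = defaultdict(float), defaultdict(float)
            for k, p in joint.items():
                x = tuple(k[i] for i in idx)
                px[x] += p
                if k[t] == y:
                    pxy[x] += p
            # specific information I(Y=y; X) = sum_x p(x|y) log2(p(y|x) / p(y))
            specs.append(sum((pxy[x] / p_y) * log2((pxy[x] / px[x]) / p_y)
                             for x in pxy if pxy[x] > 0))
        red += p_y * min(specs)
    return red

# XOR target: each source alone is uninformative; together they determine Y.
joint = {(a, b, a ^ b): 0.25 for a in (0, 1) for b in (0, 1)}
red = imin_redundancy(joint, [(0,), (1,)], 2)
unq1 = mutual_info(joint, (0,), (2,)) - red
unq2 = mutual_info(joint, (1,), (2,)) - red
syn = mutual_info(joint, (0, 1), (2,)) - red - unq1 - unq2
print(red, unq1, unq2, syn)  # 0.0 0.0 0.0 1.0
```

Tracking these four quantities per pair (or group) of hidden neurons across training epochs is, in outline, how the phases described in the abstract can be observed: redundancy dominating early, unique information growing later.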

Bibliographic Details
Main Authors: Tycho M.S. Tax, Pedro A.M. Mediano, Murray Shanahan
Format: Article
Language: English
Published: MDPI AG, 2017-09-01
Series: Entropy
Subjects: partial information decomposition; neural networks; information theory
Online Access: https://www.mdpi.com/1099-4300/19/9/474
DOI: 10.3390/e19090474
ISSN: 1099-4300
Author Affiliations:
Tycho M.S. Tax: Corti, Nørrebrogade 45E 2, 2200 Copenhagen N, Denmark
Pedro A.M. Mediano: Department of Computing, Imperial College London, London SW7 2RH, UK
Murray Shanahan: Department of Computing, Imperial College London, London SW7 2RH, UK