Scaling Properties of Dimensionality Reduction for Neural Populations and Network Models.


Bibliographic Details
Main Authors: Ryan C Williamson, Benjamin R Cowley, Ashok Litwin-Kumar, Brent Doiron, Adam Kohn, Matthew A Smith, Byron M Yu
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2016-12-01
Series: PLoS Computational Biology
Online Access: http://europepmc.org/articles/PMC5142778?pdf=render
id doaj-4674c0bf4f3a419c95d91997e3ec1b21
record_format Article
doi 10.1371/journal.pcbi.1005141
citation PLoS Computational Biology, Vol 12, Iss 12, e1005141 (2016)
collection DOAJ
issn 1553-734X, 1553-7358
description Recent studies have applied dimensionality reduction methods to understand how the multi-dimensional structure of neural population activity gives rise to brain function. It is unclear, however, how the results obtained from dimensionality reduction generalize to recordings with larger numbers of neurons and trials, or how these results relate to the underlying network structure. We address these questions by applying factor analysis to recordings in the visual cortex of non-human primates and to spiking network models that self-generate irregular activity through a balance of excitation and inhibition. We compared the scaling trends of two key outputs of dimensionality reduction, shared dimensionality and percent shared variance, with neuron and trial count. We found that the scaling properties of networks with non-clustered and clustered connectivity differed, and that the in vivo recordings were more consistent with the clustered network. Furthermore, recordings from tens of neurons were sufficient to identify the dominant modes of shared variability that generalize to larger portions of the network. These findings can help guide the interpretation of dimensionality reduction outputs in regimes of limited neuron and trial sampling and help relate these outputs to the underlying network structure.
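The description above centers on two factor-analysis outputs: shared dimensionality and percent shared variance. As a rough illustration of what those quantities mean (not the authors' code; the function names `fit_fa` and `shared_metrics`, the EM fitting procedure, and the 95% shared-variance threshold for dimensionality are all assumptions of this sketch), a minimal NumPy implementation might look like:

```python
import numpy as np

def fit_fa(X, k, n_iter=200, seed=0):
    """Fit a factor analysis model X ~ N(mu, L L^T + diag(psi)) via EM.

    X : (trials, neurons) activity matrix; k : number of latent factors.
    Returns the loading matrix L (neurons, k) and private variances psi.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                      # sample covariance
    L = rng.standard_normal((p, k)) * 0.1  # small random init of loadings
    psi = np.diag(S).copy()                # start with all variance private
    for _ in range(n_iter):
        # E-step: posterior over latents given current parameters
        inv_psi_L = L / psi[:, None]                    # Psi^{-1} L
        G = np.linalg.inv(np.eye(k) + L.T @ inv_psi_L)  # posterior covariance
        Ez = Xc @ inv_psi_L @ G                         # posterior means, (n, k)
        Ezz = n * G + Ez.T @ Ez                         # sum of E[z z^T]
        # M-step: update loadings and private variances
        B = (Xc.T @ Ez) / n
        L = (n * B) @ np.linalg.inv(Ezz)
        psi = np.clip(np.diag(S) - np.einsum('ij,ij->i', L, B), 1e-6, None)
    return L, psi

def shared_metrics(L, psi, thresh=0.95):
    """Shared dimensionality (factors explaining `thresh` of shared variance)
    and percent shared variance (shared / total, averaged over neurons)."""
    shared_cov = L @ L.T
    sv = np.diag(shared_cov)                          # per-neuron shared variance
    pct_shared = 100 * np.mean(sv / (sv + psi))
    evals = np.clip(np.linalg.eigvalsh(shared_cov)[::-1], 0, None)
    cum = np.cumsum(evals) / evals.sum()
    d_shared = int(np.searchsorted(cum, thresh) + 1)
    return d_shared, pct_shared

# Simulate 50 neurons whose shared variability lives in a 3-dim subspace,
# plus independent private noise, then fit FA with more factors than needed.
rng = np.random.default_rng(1)
n_trials, n_neurons, true_dim = 500, 50, 3
L_true = rng.standard_normal((n_neurons, true_dim))
Z = rng.standard_normal((n_trials, true_dim))
X = Z @ L_true.T + rng.standard_normal((n_trials, n_neurons)) * 0.5

L, psi = fit_fa(X, k=10)
d_shared, pct_shared = shared_metrics(L, psi)
```

With this simulated population, `d_shared` recovers a low dimensionality close to the planted 3-dim structure, and `pct_shared` is high because the private noise is small; the paper's analysis studies how exactly these estimates change as neuron and trial counts grow.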