Perception of stochastically undersampled sound waveforms: A model of auditory deafferentation
Auditory deafferentation, or permanent loss of auditory nerve afferent terminals, occurs after noise overexposure and aging and may accompany many forms of hearing loss. It could cause significant auditory impairment but is undetected by regular clinical tests and so its effects on perception are poorly understood.
Main Authors: | Enrique A. Lopez-Poveda, Pablo Barrios |
---|---|
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A., 2013-07-01 |
Series: | Frontiers in Neuroscience |
Subjects: | Audiometry, Pure-Tone; Hearing Loss; Information Theory; Speech Perception; Model; auditory deafferentation |
Online Access: | http://journal.frontiersin.org/Journal/10.3389/fnins.2013.00124/full |
id |
doaj-5998d9000eed4f6aab9fb2c1707b452b |
---|---|
record_format |
Article |
spelling |
doaj-5998d9000eed4f6aab9fb2c1707b452b (updated 2020-11-24T22:57:49Z) | eng | Frontiers Media S.A. | Frontiers in Neuroscience | 1662-453X | 2013-07-01 | vol. 7 | 10.3389/fnins.2013.00124 | 47621 | Perception of stochastically undersampled sound waveforms: A model of auditory deafferentation | Enrique A. Lopez-Poveda (University of Salamanca; Instituto de Investigación Biomédica de Salamanca; Universidad de Salamanca); Pablo Barrios (Hospital Universitario de Salamanca; Instituto de Investigación Biomédica de Salamanca) | abstract as given in the description field | http://journal.frontiersin.org/Journal/10.3389/fnins.2013.00124/full | Audiometry, Pure-Tone; Hearing Loss; Information Theory; Speech Perception; Model; auditory deafferentation |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Enrique A. Lopez-Poveda; Pablo Barrios |
spellingShingle |
Enrique A. Lopez-Poveda; Pablo Barrios | Perception of stochastically undersampled sound waveforms: A model of auditory deafferentation | Frontiers in Neuroscience | Audiometry, Pure-Tone; Hearing Loss; Information Theory; Speech Perception; Model; auditory deafferentation |
author_facet |
Enrique A. Lopez-Poveda; Pablo Barrios |
author_sort |
Enrique A Lopez-Poveda |
title |
Perception of stochastically undersampled sound waveforms: A model of auditory deafferentation |
title_short |
Perception of stochastically undersampled sound waveforms: A model of auditory deafferentation |
title_full |
Perception of stochastically undersampled sound waveforms: A model of auditory deafferentation |
title_fullStr |
Perception of stochastically undersampled sound waveforms: A model of auditory deafferentation |
title_full_unstemmed |
Perception of stochastically undersampled sound waveforms: A model of auditory deafferentation |
title_sort |
perception of stochastically undersampled sound waveforms: a model of auditory deafferentation |
publisher |
Frontiers Media S.A. |
series |
Frontiers in Neuroscience |
issn |
1662-453X |
publishDate |
2013-07-01 |
description |
Auditory deafferentation, or permanent loss of auditory nerve afferent terminals, occurs after noise overexposure and aging and may accompany many forms of hearing loss. It could cause significant auditory impairment but is undetected by regular clinical tests and so its effects on perception are poorly understood. Here, we hypothesize and test a neural mechanism by which deafferentation could deteriorate perception. The basic idea is that the spike train produced by each auditory afferent resembles a stochastically digitized version of the sound waveform and that the quality of the waveform representation in the whole nerve depends on the number of aggregated spike trains or auditory afferents. We reason that because spikes occur stochastically in time with a higher probability for high- than for low-intensity sounds, more afferents would be required for the nerve to faithfully encode high-frequency or low-intensity waveform features than low-frequency or high-intensity features. Deafferentation would thus degrade the encoding of these features. We further reason that due to the stochastic nature of nerve firing, the degradation would be greater in noise than in quiet. This hypothesis is tested using a vocoder. Sounds were filtered through ten adjacent frequency bands. For the signal in each band, multiple stochastically subsampled copies were obtained to roughly mimic different stochastic representations of that signal conveyed by different auditory afferents innervating a given cochlear region. These copies were then aggregated to obtain an acoustic stimulus. Tone detection and speech identification tests were performed by young, normal-hearing listeners using different numbers of stochastic samplers per frequency band in the vocoder. Results support the hypothesis that stochastic undersampling of the sound waveform, inspired by deafferentation, impairs speech perception in noise more than in quiet, consistent with auditory aging effects. |
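The stochastic-undersampling idea in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' vocoder: the sampling rule (keep each waveform sample with probability proportional to its instantaneous intensity) and the aggregation rule (a sample survives if at least one "afferent" fired on it), along with all function names, are assumptions made for illustration. It shows the qualitative effect the abstract predicts: low-intensity samples are kept rarely, so fewer samplers (mimicking deafferentation) leave larger gaps and a worse waveform representation.

```python
import numpy as np

def stochastic_undersample(signal, rng):
    """One hypothetical 'afferent': keep each sample with probability
    proportional to its instantaneous intensity, zero the rest.
    Low-intensity samples (e.g. near zero crossings) are rarely kept."""
    intensity = np.abs(signal) / (np.max(np.abs(signal)) + 1e-12)
    kept = rng.random(signal.shape) < intensity
    return np.where(kept, signal, 0.0)

def aggregate_afferents(signal, n_afferents, rng):
    """Aggregate n independent stochastic samplers: a sample survives if
    at least one afferent 'fired' on it, so more afferents fill in more
    of the waveform (an assumed aggregation rule, for illustration)."""
    kept_any = np.zeros(signal.shape, dtype=bool)
    intensity = np.abs(signal) / (np.max(np.abs(signal)) + 1e-12)
    for _ in range(n_afferents):
        kept_any |= rng.random(signal.shape) < intensity
    return np.where(kept_any, signal, 0.0)

# A 50 ms, 1 kHz tone at a 16 kHz sampling rate (one frequency band).
fs = 16000
t = np.arange(0, 0.05, 1.0 / fs)
tone = np.sin(2 * np.pi * 1000.0 * t)

rng = np.random.default_rng(0)
err_few = np.mean((aggregate_afferents(tone, 2, rng) - tone) ** 2)
err_many = np.mean((aggregate_afferents(tone, 64, rng) - tone) ** 2)
# err_many < err_few: fewer samplers (deafferentation) degrade the encoding.
```

In the study itself this degradation was imposed per band of a ten-band vocoder and evaluated behaviorally with tone-detection and speech-identification tests; the sketch above only demonstrates the sampling principle on a single band's signal.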
topic |
Audiometry, Pure-Tone; Hearing Loss; Information Theory; Speech Perception; Model; auditory deafferentation |
url |
http://journal.frontiersin.org/Journal/10.3389/fnins.2013.00124/full |
work_keys_str_mv |
AT enriquealopezpoveda perceptionofstochasticallyundersampledsoundwaveformsamodelofauditorydeafferentation AT pabloebarrios perceptionofstochasticallyundersampledsoundwaveformsamodelofauditorydeafferentation |
_version_ |
1725649041430151168 |