Suboptimal human multisensory cue combination
Abstract: Information from different sensory modalities can interact, shaping what we think we have seen, heard, or otherwise perceived. Such interactions can enhance the precision of perceptual decisions, relative to those based on information from a single sensory modality. Several computational processes could account for such improvements. Slight improvements could arise if decisions are based on multiple independent sensory estimates, as opposed to just one. Still greater improvements could arise if initially independent estimates are summed to form a single integrated code. This hypothetical process has often been described as optimal when it results in bimodal performance consistent with a summation of unimodal estimates weighted in proportion to the precision of each initially independent sensory code. Here we examine cross-modal cue combination for audio-visual temporal rate and spatial location cues. While suggestive of a cross-modal encoding advantage, the degree of facilitation falls short of that predicted by a precision-weighted summation process. These data accord with other published observations and suggest that precision-weighted combination is not a general property of human cross-modal perception.
Main Authors: Derek H. Arnold, Kirstie Petrie, Cailem Murray, Alan Johnston
Author Affiliations: Derek H. Arnold, Kirstie Petrie, Cailem Murray (School of Psychology, The University of Queensland); Alan Johnston (Experimental Psychology, University of Nottingham)
Format: Article
Language: English
Published: Nature Publishing Group, 2019-03-01
Series: Scientific Reports
ISSN: 2045-2322
Online Access: https://doi.org/10.1038/s41598-018-37888-7
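The abstract compares bimodal performance against a precision-weighted summation of unimodal estimates. As a point of reference only (not code from the paper; the function name, variable names, and example values below are illustrative assumptions), a minimal sketch of that maximum-likelihood prediction is:

```python
# Sketch of the precision-weighted ("optimal") cue-combination prediction.
# Each unimodal estimate is weighted by its precision (inverse variance);
# the predicted bimodal variance is then smaller than either unimodal variance.

def optimal_combination(sigma_a, sigma_v):
    """Return (auditory weight, visual weight, predicted bimodal sigma)
    for unimodal estimates with standard deviations sigma_a and sigma_v."""
    precision_a = 1.0 / sigma_a**2
    precision_v = 1.0 / sigma_v**2
    w_a = precision_a / (precision_a + precision_v)
    w_v = precision_v / (precision_a + precision_v)
    sigma_av = (1.0 / (precision_a + precision_v)) ** 0.5
    return w_a, w_v, sigma_av

# Example: a visual cue twice as precise as the auditory cue.
w_a, w_v, sigma_av = optimal_combination(sigma_a=2.0, sigma_v=1.0)
print(w_a, w_v, sigma_av)  # 0.2, 0.8, ~0.894 -- below both unimodal sigmas
```

The paper's central claim is that observed bimodal precision, while better than unimodal precision, falls short of the `sigma_av` predicted by this weighting scheme.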