Forced fusion in multisensory heading estimation.
It has been shown that the Central Nervous System (CNS) integrates visual and inertial information in heading estimation for congruent multisensory stimuli and stimuli with small discrepancies. Multisensory information should, however, only be integrated when the cues are redundant. Here, we investi...
Main Authors: | Ksander N de Winkel, Mikhail Katliar, Heinrich H Bülthoff |
---|---|
Format: | Article |
Language: | English |
Published: | Public Library of Science (PLoS), 2015-01-01 |
Series: | PLoS ONE |
Online Access: | http://europepmc.org/articles/PMC4418840?pdf=render |
id |
doaj-ecb8cf80c78c402db015a42212f0eaac |
---|---|
record_format |
Article |
spelling |
doaj-ecb8cf80c78c402db015a42212f0eaac 2020-11-24T21:23:43Z eng Public Library of Science (PLoS) PLoS ONE 1932-6203 2015-01-01 10(5): e0127104 10.1371/journal.pone.0127104 Forced fusion in multisensory heading estimation. Ksander N de Winkel, Mikhail Katliar, Heinrich H Bülthoff http://europepmc.org/articles/PMC4418840?pdf=render |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Ksander N de Winkel, Mikhail Katliar, Heinrich H Bülthoff |
spellingShingle |
Ksander N de Winkel, Mikhail Katliar, Heinrich H Bülthoff — Forced fusion in multisensory heading estimation. PLoS ONE |
author_facet |
Ksander N de Winkel, Mikhail Katliar, Heinrich H Bülthoff |
author_sort |
Ksander N de Winkel |
title |
Forced fusion in multisensory heading estimation. |
title_short |
Forced fusion in multisensory heading estimation. |
title_full |
Forced fusion in multisensory heading estimation. |
title_fullStr |
Forced fusion in multisensory heading estimation. |
title_full_unstemmed |
Forced fusion in multisensory heading estimation. |
title_sort |
forced fusion in multisensory heading estimation. |
publisher |
Public Library of Science (PLoS) |
series |
PLoS ONE |
issn |
1932-6203 |
publishDate |
2015-01-01 |
description |
It has been shown that the Central Nervous System (CNS) integrates visual and inertial information in heading estimation for congruent multisensory stimuli and stimuli with small discrepancies. Multisensory information should, however, only be integrated when the cues are redundant. Here, we investigated how the CNS constructs an estimate of heading for combinations of visual and inertial heading stimuli with a wide range of discrepancies. Participants were presented with 2 s visual-only and inertial-only motion stimuli, and combinations thereof. Discrepancies between visual and inertial heading ranging from 0° to 90° were introduced for the combined stimuli. In the unisensory conditions, it was found that visual heading was generally biased towards the fore-aft axis, while inertial heading was biased away from the fore-aft axis. For multisensory stimuli, it was found that five out of nine participants integrated visual and inertial heading information regardless of the size of the discrepancy; for one participant, the data were best described by a model that explicitly performs causal inference. For the remaining three participants, the evidence could not readily distinguish between these models. The finding that multisensory information is integrated is in line with earlier findings, but the finding that even large discrepancies are generally disregarded is surprising. Possibly, people are insensitive to discrepancies in visual-inertial heading angle because such discrepancies are only encountered in artificial environments, making a neural mechanism to account for them otiose. An alternative explanation is that detection of a discrepancy may depend on stimulus duration, where sensitivity to detect discrepancies differs between people. |
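The forced-fusion account in the abstract corresponds to standard maximum-likelihood cue integration: the fused heading is a reliability-weighted average of the visual and inertial cues, with weights proportional to each cue's inverse variance. The sketch below illustrates that scheme only; the function name and all parameter values are made up for illustration and are not taken from the study.

```python
def fused_heading(theta_v, theta_i, sigma_v, sigma_i):
    """Forced-fusion (maximum-likelihood) heading estimate.

    theta_v, theta_i: visual and inertial heading cues (degrees).
    sigma_v, sigma_i: standard deviations (reliabilities) of each cue.
    The weight on the visual cue is its normalized inverse variance.
    Illustrative sketch only, not the authors' fitted model.
    """
    inv_var_v = 1.0 / sigma_v ** 2
    inv_var_i = 1.0 / sigma_i ** 2
    w_v = inv_var_v / (inv_var_v + inv_var_i)
    return w_v * theta_v + (1.0 - w_v) * theta_i

# Hypothetical example: visual cue at 30 deg (sd 5 deg), inertial cue at
# 90 deg (sd 10 deg). A forced-fusion observer averages the cues even for
# this large 60 deg discrepancy, here landing at 42 deg.
est = fused_heading(30.0, 90.0, 5.0, 10.0)
```

A causal-inference model, by contrast, would down-weight or discard the less likely common-cause interpretation as the discrepancy grows, which is the behavior most participants in this study did not show.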
url |
http://europepmc.org/articles/PMC4418840?pdf=render |
work_keys_str_mv |
AT ksanderndewinkel forcedfusioninmultisensoryheadingestimation AT mikhailkatliar forcedfusioninmultisensoryheadingestimation AT heinrichhbulthoff forcedfusioninmultisensoryheadingestimation |
_version_ |
1725991480146788352 |