Semantic-Based Crossmodal Processing During Visual Suppression

To reveal the mechanisms underpinning the influence of auditory input on visual awareness, we examined (1) whether purely semantic-based multisensory integration facilitates access to visual awareness for familiar visual events, and (2) whether crossmodal semantic priming is the mechanism responsible for the semantic auditory influence on visual awareness. Using continuous flash suppression (CFS), we rendered dynamic and familiar visual events (e.g., a video clip of an approaching train) inaccessible to visual awareness. We manipulated the semantic auditory context of the videos by concurrently pairing them with a semantically matching soundtrack (congruent audiovisual condition), a semantically non-matching soundtrack (incongruent audiovisual condition), or no soundtrack (neutral video-only condition). We found that participants identified the suppressed visual events significantly faster (an earlier breakup of suppression) in the congruent audiovisual condition than in the incongruent audiovisual and video-only conditions. However, this facilitatory influence of semantic auditory input was observed only when audiovisual stimulation co-occurred. Our results suggest that the enhanced visual processing with a semantically congruent auditory input occurs due to audiovisual crossmodal processing rather than semantic priming, which may occur even when visual information is not available to visual awareness.


Bibliographic Details
Main Authors: Dustin Cox, Sang Wook Hong (Florida Atlantic University)
Format: Article
Language: English
Published: Frontiers Media S.A., 2015-06-01
Series: Frontiers in Psychology
ISSN: 1664-1078
DOI: 10.3389/fpsyg.2015.00722
Subjects: visual awareness; multisensory integration; semantic priming; semantic processing; continuous flash suppression (CFS)
Online Access: http://journal.frontiersin.org/Journal/10.3389/fpsyg.2015.00722/full