Multisensory Integration of Redundant Stimuli
If a participant's task is to respond in the same way to stimuli of two modalities (e.g. a light and a tone signal), markedly faster reactions are observed when the two stimuli are presented simultaneously than when only one of the two stimuli is presented …
Format: Doctoral Thesis
Language: German
Published: Philipps-Universität Marburg, 2005
Online Access: PDF Full Text
Abstract:
If participants simultaneously receive two target stimuli of different modalities (e.g. auditory and visual), they respond to them faster than would be expected from their reaction times to the single stimuli (redundant target effect, RTE). This speed-up of reaction time indicates that the information of the two sensory channels is integrated at a particular processing stage. The present thesis investigates this reaction time gain in five experiments in which the number of redundant targets, sequence effects, the spatial relationship of the stimulus components, and spatial attention were varied systematically. In addition to the reaction time analyses, event-related potentials were measured in two of the experiments.
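The abstract does not spell out how "faster than would be expected" is quantified. A common benchmark in the redundant-target literature is Miller's race model inequality, which bounds the cumulative reaction time distribution for redundant stimuli by the sum of the distributions for the single stimuli. The sketch below only illustrates that general idea; the simulated reaction times and function names are hypothetical and not taken from the thesis.

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative distribution of reaction times, evaluated on a time grid."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t_grid, side="right") / rts.size

# Hypothetical reaction times in ms; in a real experiment these would be the
# measured RTs for auditory (A), visual (V) and redundant bimodal (AV) targets.
rng = np.random.default_rng(0)
rt_a = rng.normal(260, 40, 200)
rt_v = rng.normal(280, 40, 200)
rt_av = rng.normal(230, 35, 200)      # faster on average: redundancy gain

t = np.linspace(100, 500, 81)         # time grid in ms
bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)   # race model bound
violation = ecdf(rt_av, t) - bound    # positive values violate the bound

if np.any(violation > 0):
    print("Race model inequality violated: evidence for coactivation")
else:
    print("No violation: an independent race between the channels suffices")
```

A violation of the bound at any time point is usually taken as evidence that the two channels are integrated (coactivated) rather than merely racing independently.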
In chapter 1, a new method for investigating redundancy gains in the reaction times to trimodal auditory-visual-tactile stimuli is described. In particular, it is shown that redundancy gains for trimodal stimuli can be explained entirely within the framework of bisensory interactions.
Responses to ipsimodal stimuli (e.g. an auditory stimulus following another auditory stimulus) are faster than responses to crossmodal stimuli (e.g. an auditory stimulus following a visual stimulus). Since the modality of at least one component of a bimodal stimulus always matches the modality of the preceding stimulus, bimodal stimuli are effectively always ipsimodal. In chapter 2, it is demonstrated that this confound can yield spurious redundancy gains, and a method for avoiding the problem is described.
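This confound can be made concrete with a trial-by-trial tabulation: because a bimodal stimulus always shares a modality with the preceding stimulus, a fair unimodal baseline has to distinguish ipsimodal from crossmodal transitions. The thesis's actual correction procedure is not reproduced here; the sketch below, with purely illustrative trial data and column names, only shows how such a tabulation could look.

```python
import pandas as pd

# Illustrative trial log: one row per trial, with stimulus modality and reaction time (ms).
trials = pd.DataFrame({
    "modality": ["A", "V", "A", "AV", "V", "A", "AV", "V", "A", "AV"],
    "rt":       [255, 290, 240, 215, 300, 250, 225, 285, 245, 210],
})

prev = trials["modality"].shift(1)      # modality of the preceding trial

def transition(current, preceding):
    """Label a trial as ipsimodal or crossmodal relative to the preceding trial."""
    if pd.isna(preceding):
        return "n/a"                    # first trial has no predecessor
    if current == "AV":
        # At least one component of a bimodal stimulus always matches the
        # preceding modality -- the confound discussed in chapter 2.
        return "ipsimodal"
    return "ipsimodal" if current in preceding else "crossmodal"

trials["transition"] = [transition(c, p) for c, p in zip(trials["modality"], prev)]

# Mean RT per stimulus type and transition type; one simple remedy (not
# necessarily the one used in the thesis) would be to restrict the unimodal
# baseline to ipsimodal transitions before estimating the redundancy gain.
print(trials.groupby(["modality", "transition"])["rt"].mean())
```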
A frequent approach to studying interactions between the auditory and the visual system with event-related potentials (ERPs) is to measure the ERP to auditory-visual stimuli (AV) and to compare it with the sum of the ERPs to auditory and visual stimuli (A+V). A problem with this comparison is that all three ERPs must be free of common activity: any component present in all three ERPs (e.g. anticipatory potentials) enters the sum A+V twice but the bimodal ERP only once, and therefore does not cancel in the difference. In chapter 3, I describe an alternative comparison that is robust with respect to such common activity.
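To make the problem explicit, the snippet below computes the classic additive-model difference AV − (A + V) on placeholder arrays; the array names and dimensions are illustrative, and the common-activity-robust comparison developed in chapter 3 is not reproduced here.

```python
import numpy as np

# Placeholder ERP averages: electrodes x time samples (random noise standing in
# for real data).
n_channels, n_samples = 32, 500
rng = np.random.default_rng(1)
erp_a = rng.normal(size=(n_channels, n_samples))
erp_v = rng.normal(size=(n_channels, n_samples))
erp_av = rng.normal(size=(n_channels, n_samples))

# Classic additive-model test: a genuine audio-visual interaction shows up as a
# nonzero difference between the bimodal ERP and the sum of the unimodal ERPs.
interaction = erp_av - (erp_a + erp_v)

# The weakness: a component C common to all three ERPs (e.g. anticipatory
# activity) contributes C - (C + C) = -C to the difference and therefore
# mimics an interaction even when none exists.
```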
Does the spatial relationship between the two components of a bimodal stimulus affect the way in which the information of the two sensory channels is integrated? In chapter 4, it is shown that spatially congruent redundant stimuli are processed more efficiently than redundant stimuli whose components are presented at different locations. Moreover, the ERPs to spatially congruent bimodal stimuli differed from the ERPs to spatially incongruent stimuli at parietal recording sites. This indicates that polymodal areas in the parietal cortex might be involved in the processing of bimodal stimuli.
In chapter 5, it is shown that the redundancy gain depends on whether the participant is attending to the location of the bimodal stimulation. This indicates that multisensory integration of redundant stimuli is not a purely stimulus-driven 'bottom-up' process. Rather, it seems to occur at higher levels of processing, as suggested by recent models of visual information processing.