Search Results

Showing 1 - 3 of 3 items for Author or Editor: Alexis Pérez-Bellido

Prior knowledge about the spatial frequency (SF) of upcoming visual targets (Gabor patches) speeds up average reaction times and decreases their standard deviation. This has often been regarded as evidence for multichannel processing of SF in vision. Multisensory research, on the other hand, has often reported sensory interactions between auditory and visual signals. These interactions can enhance visual processing, leading to lower sensory thresholds and/or more precise visual estimates. However, little is known about how multisensory interactions may affect the uncertainty regarding visual SF. We conducted a reaction time study in which we manipulated the uncertainty about the SF of visual targets (SF was blocked or interleaved across trials) and compared visual-only versus audio–visual presentations. Surprisingly, the analysis of the reaction times and their standard deviation revealed that a concurrent sound impaired selective monitoring of the relevant SF channel. Moreover, this impairment was especially pronounced when the relevant channels were high SFs at high visual contrasts. We propose that an accessory sound automatically favours visual processing of low SFs through the magnocellular channels, thereby detracting from the potential benefits of tuning into high-SF psychophysical channels.

In: Seeing and Perceiving
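
The dependent measures in this study can be made concrete with a short sketch. The snippet below quantifies the uncertainty cost on mean reaction time and on RT variability by contrasting a blocked (SF known in advance) and an interleaved (SF unknown) condition; the RT samples, condition names, and parameter values are hypothetical stand-ins, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reaction times in ms (illustrative only, not the paper's data).
rt_blocked = rng.normal(loc=420, scale=45, size=200)      # SF known in advance
rt_interleaved = rng.normal(loc=450, scale=60, size=200)  # SF varies across trials

# Uncertainty cost: slower and more variable responses when SF is unknown.
mean_cost = rt_interleaved.mean() - rt_blocked.mean()
sd_cost = rt_interleaved.std(ddof=1) - rt_blocked.std(ddof=1)

print(f"Mean RT cost of SF uncertainty: {mean_cost:.1f} ms")
print(f"RT SD cost of SF uncertainty:   {sd_cost:.1f} ms")
```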

The level of processing at which different modalities interact to either facilitate or interfere with detection has been a matter of debate for more than half a century. This question has mainly been addressed by means of statistical models (Green) or biologically plausible models (Schnupp et al.). One of the most widely accepted statistical frameworks is signal detection theory (SDT; Green and Swets) because it provides a straightforward way to assess whether two sensory stimuli are judged independently of one another: integration is inferred when the detectability (d′) of the compound stimulus exceeds the Pythagorean sum of the d′ values of its components. Here, we question this logic and propose a different baseline for evaluating integrative effects in multi-stimulus detection tasks, based on probabilistic summation. To this aim, we show how a simple theoretical hypothesis based on probabilistic summation can explain putative multisensory enhancement in an audio-tactile detection task. In addition, we illustrate how to measure integrative effects from multiple stimuli in two experiments: a multisensory audio-tactile detection task (Experiment 1) and a unimodal double-stimulus auditory detection task (Experiment 2). Results from Experiment 1 replicate extant multisensory detection data but refute the hypothesis that auditory and tactile stimuli were integrated into a single percept, leading to enhancement. In Experiment 2, we further support the probabilistic summation model using a unimodal integration detection task.

In: Seeing and Perceiving
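
The two baselines contrasted in this abstract can be sketched numerically. Assuming hypothetical unisensory hit and false-alarm rates (none of the values below are from the paper), the snippet computes the classic SDT criterion, under which integration is inferred only when the compound d′ exceeds the Pythagorean sum of the unisensory d′ values, alongside a probability-summation baseline in which independent detectors are combined by an OR rule. The shared false-alarm rate and the OR-rule treatment of false alarms are simplifying assumptions of this sketch, not necessarily the paper's exact model.

```python
import numpy as np
from scipy.stats import norm

def dprime(hit_rate, fa_rate):
    """Detectability d' = z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical unisensory detection rates (not from the paper).
hits_a, hits_t, fa = 0.69, 0.66, 0.10  # auditory hits, tactile hits, false alarms

d_a = dprime(hits_a, fa)
d_t = dprime(hits_t, fa)

# SDT baseline: integration is inferred only if the compound d'
# exceeds the Pythagorean sum of the unisensory d' values.
d_pythagorean = np.hypot(d_a, d_t)

# Probability-summation baseline: the compound is detected if either
# independent detector fires; false alarms combine the same way
# (assuming each detector false-alarms independently at the same rate).
hits_ps = 1 - (1 - hits_a) * (1 - hits_t)
fa_ps = 1 - (1 - fa) ** 2
d_ps = dprime(hits_ps, fa_ps)

print(f"d'_A = {d_a:.2f}, d'_T = {d_t:.2f}")
print(f"Pythagorean-sum baseline:     d' = {d_pythagorean:.2f}")
print(f"Probability-summation baseline: d' = {d_ps:.2f}")
```

Note how the two baselines make different predictions for the same unisensory performance, which is why the choice of baseline matters when deciding whether compound-stimulus detection reflects genuine integration.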