Author: Charles Spence

Abstract

A wide variety of crossmodal correspondences have been demonstrated in recent years. These are defined as the often surprising connections that people appear to experience between simple features, attributes, or dimensions of experience in different sensory modalities, whether physically present or merely imagined. However, a number of crossmodal correspondences have also been documented between more complex (i.e., multi-component) stimuli, such as pieces of music and paintings. In this review, the extensive evidence supporting the emotional mediation account of the crossmodal correspondences between musical stimuli (mostly pre-recorded short classical music excerpts) and visual stimuli, ranging from colour patches through to, on occasion, paintings, is critically evaluated. According to the emotional mediation account, the emotional associations that people have with stimuli constitute one of the fundamental bases on which crossmodal associations are established. Taken together, the literature published to date supports emotional mediation as one of the key factors underlying the crossmodal correspondences involving emotionally-valenced stimuli, both simple and complex.

In: Multisensory Research

Abstract

Are alternation and co-occurrence of stimuli of different sensory modalities conspicuous? In a novel audio-visual oddball paradigm, the P300 was used as an index of the allocation of attention to investigate stimulus- and task-related interactions between modalities. Specifically, we assessed effects of modality alternation and the salience of conjunct oddball stimuli that were defined by the co-occurrence of both modalities. We presented (a) crossmodal audio-visual oddball sequences, where both oddballs and standards were unimodal, but of a different modality (i.e., visual oddball with auditory standard, or vice versa), and (b) oddball sequences where standards were randomly of either modality while the oddballs were a combination of both modalities (conjunct stimuli). Subjects were instructed to attend to one of the modalities (whether part of a conjunct stimulus or not). In addition, we also tested specific attention to the conjunct stimuli. P300-like responses occurred even when the oddball was of the unattended modality. The pattern of event-related potential (ERP) responses obtained with the two crossmodal oddball sequences switched symmetrically between stimulus modalities when the task modality was switched. Conjunct oddballs elicited no oddball response if only one modality was attended. However, when conjunctness was specifically attended, an oddball response was obtained. Crossmodal oddballs capture sufficient attention even when not attended. Conjunct oddballs, however, are not sufficiently salient to attract attention when the task is unimodal. Even when specifically attended, the processing of conjunctness appears to involve additional steps that delay the oddball response.
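A minimal sketch of how such oddball sequences might be generated (the trial count and 10% oddball rate here are illustrative assumptions, not parameters reported in the abstract):

import random

def make_crossmodal_sequence(n_trials=200, p_oddball=0.1,
                             standard='auditory', oddball='visual'):
    """Paradigm (a): unimodal standards of one modality with rare
    unimodal oddballs of the other modality."""
    return [oddball if random.random() < p_oddball else standard
            for _ in range(n_trials)]

def make_conjunct_sequence(n_trials=200, p_oddball=0.1):
    """Paradigm (b): standards are randomly of either modality;
    rare oddballs are audio-visual conjunct stimuli."""
    return [('auditory', 'visual') if random.random() < p_oddball
            else random.choice(['auditory', 'visual'])
            for _ in range(n_trials)]

Swapping the standard and oddball arguments of make_crossmodal_sequence reproduces the symmetrical modality switch described above.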

In: Multisensory Research

Abstract

In the original double flash illusion, a visual flash (e.g., a sharp-edged disk, or uniformly filled circle) presented with two short auditory tones (beeps) is often followed by an illusory flash. The illusory flash has previously been shown to be triggered by the second auditory beep. The current study extends the double flash illusion by showing that this paradigm can not only create an illusory repeat of an on-off flash, but can also cause an illusory expansion (and, in some cases, a subsequent contraction) induced by the flash of a circular brightness gradient (gradient disk) to replay. The perception of this dynamic double flash illusion further supports the interpretation that the illusory flash (in the double flash illusion) is similar in its spatial and temporal properties to the perception of the real visual flash, likely because it replicates the neural processes underlying the illusory expansion evoked by the real flash. We show further that if a gradient disk (generating an illusory expansion) and a sharp-edged disk are presented simultaneously, side by side, with two sequential beeps, often only one visual stimulus or the other will be perceived to double flash. This indicates selectivity in auditory–visual binding, suggesting the usefulness of this paradigm as a psychophysical tool for investigating crossmodal binding phenomena.

In: Multisensory Research

Abstract

Beats are among the basic units of perceptual experience. Produced by regular, intermittent stimulation, beats are most commonly associated with audition, but the experience of a beat can result from stimulation in other modalities as well. We studied the robustness of visual, vibrotactile, and bimodal signals as sources of beat perception. Subjects attempted to discriminate between pulse trains delivered at 3 Hz or at 6 Hz. To investigate signal robustness, we intentionally degraded signals on two-thirds of the trials using temporal-domain noise. On these trials, inter-pulse intervals (IPIs) were stochastic, perturbed independently from the nominal IPI by random samples from zero-mean Gaussian distributions with different variances. These perturbations produced directional changes in the IPIs, which either increased or decreased the likelihood of confusing the two pulse rates. In addition to affording an assay of signal robustness, this paradigm made it possible to gauge how subjects’ judgments were influenced by successive IPIs. Logistic regression revealed a strong primacy effect: subjects’ decisions were disproportionately influenced by a trial’s initial IPIs. Response times and parameter estimates from drift-diffusion modeling showed that information accumulates more rapidly with bimodal stimulation than with either unimodal stimulus alone. Analysis of error rates within each condition suggested consistently optimal decision making, even with increased IPI variability. Finally, beat information delivered by vibrotactile signals proved just as robust as information conveyed by visual signals, confirming vibrotactile stimulation’s potential as a communication channel.
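As a rough illustration of the temporal-domain noise described above, each nominal inter-pulse interval can be perturbed independently with zero-mean Gaussian noise (a minimal sketch; the 3-Hz rate and noise standard deviation below are illustrative, as the abstract does not report the exact variances used):

import numpy as np

def noisy_ipis(rate_hz, n_pulses, sigma_ms, rng=None):
    """Return the successive inter-pulse intervals (ms) of a degraded
    pulse train: each nominal IPI is perturbed independently by a
    random sample from a zero-mean Gaussian with SD sigma_ms."""
    rng = rng if rng is not None else np.random.default_rng()
    nominal_ipi_ms = 1000.0 / rate_hz   # ~333 ms at 3 Hz, ~167 ms at 6 Hz
    return nominal_ipi_ms + rng.normal(0.0, sigma_ms, n_pulses - 1)

# Example: a degraded 3-Hz pulse train of 7 pulses
print(noisy_ipis(3.0, 7, sigma_ms=30.0))

Larger values of sigma_ms push individual IPIs toward (or away from) those of the competing pulse rate, which is what makes the two rates easier or harder to confuse on a given trial.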

In: Multisensory Research

Abstract

Dual-task performance depends on the modalities involved (e.g., vision, audition, haptics), on the task types (spatial or object-based), and on the order in which different task types are organized. Previous studies on haptic, and especially auditory–haptic, attentional blink (AB) are scarce, and the effects of task type and task order have not been fully explored. In this study, 96 participants, divided into four groups of task-type combinations, identified an auditory or haptic Target 1 (T1) and a haptic Target 2 (T2) in rapid series of sounds and forces. We observed a haptic AB (i.e., the accuracy of identifying T2 increased with increasing stimulus onset asynchrony between T1 and T2) in the spatial, object-based, and object–spatial tasks, but not in the spatial–object task. Changing the modality of an object-based T1 from haptics to audition eliminated the AB, but a similar haptic-to-auditory change of the modality of a spatial T1 had no effect on the AB (if any). Our findings fill a gap in the literature regarding the auditory–haptic AB, and substantiate the importance of modality, task type, task order, and the interactions between them. We explain these findings in terms of how the cerebral cortex is organized for processing spatial and object-based information in different modalities.

In: Multisensory Research

Abstract

Sensory Substitution Devices (SSDs) are typically used to restore the functionality of a sensory modality that has been lost, such as vision for the blind, by recruiting another sensory modality such as touch or audition. Sensory substitution has given rise to many debates in psychology, neuroscience, and philosophy regarding the nature of experience when using SSDs. Questions first arose as to whether the experience of sensory substitution is represented by the substituted information, the substituting information, or a multisensory combination of the two. More recently, parallels have been drawn between sensory substitution and synaesthesia, a rare condition in which individuals involuntarily experience a percept in one sensory or cognitive pathway when another one is stimulated. Here, we explore the usefulness of understanding sensory substitution as a form of 'artificial synaesthesia'. We identify several problems with previous suggestions of a link between these two phenomena, and we find that sensory substitution does not fulfil the essential criteria that characterise synaesthesia. We conclude that sensory substitution and synaesthesia are independent of each other, and that the 'artificial synaesthesia' view of sensory substitution should therefore be rejected.

In: Multisensory Research

Abstract

Research on serial order memory has traditionally used tasks in which participants passively view the items. The few studies that included hand movement showed that such movement interfered with serial order memory. In the present study, comprising three experiments, we investigated whether and how hand movements improve spatial serial order memory. Experiment 1 showed that manual tracing (i.e., hand movements that traced the presentation of stimuli on a modified eCorsi block-tapping task) improved backward recall relative to no manual tracing (the control condition). Experiment 2 showed that this facilitation effect resulted from voluntary hand movements and could not be achieved by passively viewing another person's manual tracing. Experiment 3 showed that it was the temporal, not the spatial, signal within manual tracing that facilitated spatial serial order memory.

In: Multisensory Research

Abstract

We recently showed that auditory illusions of self-motion can be induced in the absence of physically accurate spatial cues (Mursic et al., 2017). The current study aimed to identify which features of this auditory stimulus (the Shepard–Risset glissando) were responsible for this metaphorical auditory vection, as well as to confirm anecdotal reports of motion sickness for this stimulus. Five different types of auditory stimuli were presented to 31 blindfolded, stationary participants through a loudspeaker array: (1) a descending Shepard–Risset glissando; (2) a descending discrete Shepard scale; (3) a descending sweep signal; (4) a phase-scrambled version of (1) (auditory control type 1); and (5) white noise (auditory control type 2). We found that the auditory vection induced by the Shepard–Risset glissando was stronger than that induced by both types of auditory control and by the discrete Shepard scale stimulus. However, vection strength did not differ between the Shepard–Risset glissando and the sweep signal, suggesting that the continuous, gliding structure common to both of these stimuli was integral to the induction of vection. Consistent with anecdotal reports that the Shepard–Risset glissando is also capable of generating motion sickness, the likelihood and severity of sickness (as measured by the Fast Motion Sickness Scale and the Simulator Sickness Questionnaire) were found to increase with the strength of the auditory vection.
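For reference, the Shepard–Risset glissando can be approximated by a standard synthesis recipe: octave-spaced sinusoidal components glide continuously in log frequency under a fixed spectral amplitude envelope, so components fade in at one spectral edge as others fade out at the other, producing a seemingly endless descent (or ascent). A minimal sketch (all parameter values are illustrative assumptions, not those used in the study):

import numpy as np

def shepard_risset(duration=10.0, sr=44100, n_comp=8,
                   f_low=27.5, octaves_per_s=-0.1):
    """Sum of octave-spaced sinusoids gliding continuously in log
    frequency (downward for negative octaves_per_s). A raised-cosine
    envelope over log-frequency position silences each component at
    the spectral edges, masking the wrap-around."""
    t = np.arange(int(duration * sr)) / sr
    out = np.zeros_like(t)
    for k in range(n_comp):
        pos = (k + octaves_per_s * t) % n_comp        # octave position, wrapped
        freq = f_low * 2.0 ** pos                     # instantaneous frequency
        phase = 2 * np.pi * np.cumsum(freq) / sr      # integrate frequency
        amp = 0.5 * (1 - np.cos(2 * np.pi * pos / n_comp))
        out += amp * np.sin(phase)
    return out / n_comp

The discrete Shepard scale used as stimulus (2) corresponds to stepping pos in fixed semitone increments rather than gliding it continuously.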

In: Multisensory Research

Abstract

In dynamic 3D space, it is critical for survival to perceive the size of an object and to rescale it with its distance from the observer. Humans can perceive distance not only via vision but also via audition, which plays an important role in the localization of objects, especially in visually ambiguous environments. However, whether and how auditory distance information contributes to visual size perception is not well understood. To address this issue, we investigated the efficiency of size–distance scaling using auditory distance information conveyed by binaurally recorded auditory stimuli. We examined the effects of the absolute distance information of a single sound sequence (Experiment 1) and the relative distance information between two sound sequences (Experiment 2) on visual size estimation performance in darkened and well-lit environments. We demonstrated that humans can perform size–distance disambiguation using auditory distance information even in darkness. Curiously, relative distance information was more efficient for size–distance scaling than absolute distance information, suggesting a high reliance on relative auditory distance information in our visual spatial experience. The results highlight a benefit of audiovisual interaction for size–distance processing and the calibration of external events under visually degraded conditions.
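The scaling at issue follows the classical size–distance invariance relation: a retinal image subtending visual angle theta is consistent with a small, near object or a large, far one, so an independent distance estimate (here, potentially supplied by audition) is needed to recover physical size. A worked sketch of the textbook geometry, not the specific model tested in the study:

import math

def physical_size_m(visual_angle_deg, distance_m):
    """Size-distance invariance: S = 2 * D * tan(theta / 2)."""
    theta = math.radians(visual_angle_deg)
    return 2.0 * distance_m * math.tan(theta / 2.0)

# The same 1-degree retinal image implies a twice-as-large object
# at twice the distance
print(physical_size_m(1.0, 2.0))   # ~0.035 m
print(physical_size_m(1.0, 4.0))   # ~0.070 m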

In: Multisensory Research