Search Results

Showing 1–10 of 15 items for Author or Editor: Michael Barnett-Cowan

Over the last decade, emerging studies have suggested that older adults integrate multisensory information differently than younger adults, pointing to changes in the central processing of multisensory information (de Dieuleveult et al., 2017). This special issue on Multisensory Processing and Aging focuses on evidence for these central changes in the aging brain and seeks to provide a comprehensive overview of current key findings on multisensory processing and aging.

The central nervous system (CNS) receives information about the environment from all the senses. To safely interact with the environment, the CNS must quickly make sense of this incoming information.

In: Multisensory Research

Multisensory stimuli originating from the same event can be perceived asynchronously due to differential physical and neural delays. The transduction of and physiological responses to vestibular stimulation are extremely fast, suggesting that other stimuli need to be presented prior to vestibular stimulation in order to be perceived as simultaneous. There is, however, a recent and growing body of evidence which indicates that the perceived onset of vestibular stimulation is slow compared to the other senses, such that vestibular stimuli need to be presented prior to other sensory stimuli in order to be perceived synchronously. From a review of this literature it is speculated that this perceived latency of vestibular stimulation may reflect the fact that vestibular stimulation is most often associated with sensory events that occur following head movement, that the vestibular system rarely works alone, that additional computations are required for processing vestibular information, and that the brain prioritizes physiological response to vestibular stimulation over perceptual awareness of stimulation onset. Empirical investigation of these theoretical predictions is encouraged in order to fully understand this surprising result, its implications, and to advance the field.

In: Multisensory Research

Multisensory stimuli originating from the same event can be perceived asynchronously due to differential physical and neural delays. The transduction of and physiological responses to vestibular stimulation are extremely fast, suggesting that other stimuli need to be presented prior to vestibular stimulation in order to be perceived as simultaneous. There is, however, a recent and growing body of evidence which indicates that the perceived onset of vestibular stimulation is slow compared to the other senses, such that vestibular stimuli need to be presented prior to other sensory stimuli in order to be perceived synchronously. Following a review of this literature I will argue that this perceived latency of vestibular stimulation likely reflects the fact that vestibular stimulation is most often associated with sensory events that occur following head movement, that the vestibular system rarely works alone, and that the brain prioritizes physiological response to vestibular stimulation over perceptual awareness of stimulation onset.

In: Seeing and Perceiving

Abstract

Integration of incoming sensory signals from multiple modalities is central to the determination of self-motion perception. With the emergence of consumer virtual reality (VR), it is becoming increasingly common to experience a mismatch in sensory feedback regarding motion when using immersive displays. In this study, we explored whether introducing various discrepancies between vestibular and visual motion would influence the perceived timing of self-motion. Participants performed a series of temporal-order judgements between an auditory tone and a passive whole-body rotation on a motion platform, accompanied by visual feedback from a virtual environment presented through a head-mounted display. Sensory conflict was induced by altering the speed and direction with which the movement of the visual scene updated relative to the observer’s physical rotation. There were no differences in the perceived timing of the rotation without vision, with congruent visual feedback, or when the visual motion updated more slowly than the physical rotation. However, the perceived timing was significantly further from zero when the direction of the visual motion was incongruent with the rotation. These findings demonstrate a potential interaction between visual and vestibular signals in the temporal perception of self-motion. Additionally, we recorded cybersickness ratings and found that sickness severity was significantly greater when visual motion was present and incongruent with the physical motion. This supports previous research on cybersickness and sensory conflict theory, in which a mismatch between visual and vestibular signals may increase the likelihood of sickness symptoms.

In: Multisensory Research

Abstract

Previous studies have found that semantics, the higher-level meaning of stimuli, can impact multisensory integration; however, less is known about the effect of valence, an affective response to stimuli. This study investigated the effects of both semantic congruency and valence of non-speech audiovisual stimuli on multisensory integration via response time (RT) and temporal-order judgement (TOJ) tasks [assessing processing speed (RT), the point of subjective simultaneity (PSS), and the time window within which multisensory stimuli are likely to be perceived as simultaneous (temporal binding window; TBW)]. Through an online study with 40 participants (mean age: 26.25 years; 17 female), we found that both congruency and valence had significant main effects on RT (congruency and positive valence decreased RT), as well as an interaction effect (the congruent/positive condition was significantly faster than all others). For TOJ, there was a significant main effect of valence and a significant interaction effect, whereby positive valence (compared to negative valence) and the congruent/positive condition (compared to all other conditions) required visual stimuli to be presented significantly earlier than auditory stimuli in order to be perceived as simultaneous. A subsequent analysis showed a positive correlation between TBW width and RT (as the TBW widens, RT increases) for the categories whose PSS was furthest from true simultaneity (congruent/positive and incongruent/negative). This study provides new evidence supporting previous research on semantic congruency and presents a novel incorporation of valence into behavioural responses.
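The PSS and TBW above are standard psychometric quantities. As a minimal illustration (not the study's analysis code), they can be estimated by fitting a cumulative Gaussian to TOJ responses as a function of stimulus-onset asynchrony; the SOAs and response proportions below are hypothetical values for demonstration only:

# Minimal sketch: estimating PSS and TBW from TOJ data with a
# cumulative Gaussian fit. All data values here are hypothetical.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Stimulus-onset asynchronies (ms); negative = auditory first.
soa = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300])
# Proportion of "visual first" responses at each SOA (illustration only).
p_visual_first = np.array([0.02, 0.05, 0.20, 0.35, 0.50, 0.70, 0.85, 0.95, 0.98])

def cum_gauss(x, mu, sigma):
    # Cumulative Gaussian psychometric function.
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(cum_gauss, soa, p_visual_first, p0=[0.0, 100.0])

pss = mu  # point of subjective simultaneity (ms)
# One common convention: TBW as the SOA range between the 25% and 75%
# points of the fitted function (other criteria are also in use).
tbw = norm.ppf(0.75, mu, sigma) - norm.ppf(0.25, mu, sigma)
print(f"PSS = {pss:.1f} ms, TBW = {tbw:.1f} ms")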

In: Multisensory Research

The orientation at which objects are most easily recognized — the perceptual upright (PU) — is influenced by body orientation with respect to gravity. To date, the influence of these cues on object recognition has only been measured within the visual system. Here we investigate whether objects explored through touch alone are similarly influenced by body and gravitational information. Using the Oriented CHAracter Recognition Test (OCHART) adapted for haptics, blindfolded right-handed observers indicated whether the symbol ‘p’ presented in various orientations was the letter ‘p’ or ‘d’ following active touch. The average of the ‘p-to-d’ and ‘d-to-p’ transitions was taken as the haptic PU. Sensory information was manipulated by positioning observers in different orientations relative to gravity, with the head, body, and hand aligned. Results show that haptic object recognition is equally influenced by body and gravitational reference frames, but with a constant leftward bias. This leftward bias in the haptic PU resembles leftward biases reported for visual object recognition. The influence of body orientation and gravity on the haptic PU was well predicted by an equally weighted vectorial sum of the directions indicated by these cues. Our results demonstrate that information from different reference frames influences the perceptual upright in haptic object recognition. Taken together with similar investigations in vision, our findings suggest that reliance on body and gravitational frames of reference helps maintain optimal object recognition. Equally relying on body and gravitational information may facilitate haptic exploration with an upright posture, while compensating for poor vestibular sensitivity when tilted.
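A minimal formalization of the equally weighted vectorial-sum prediction described above (the notation is illustrative, not the paper's):

$$
\hat{\theta}_{\mathrm{PU}} = \operatorname{atan2}\!\left(\sin\theta_{b} + \sin\theta_{g},\; \cos\theta_{b} + \cos\theta_{g}\right) + \theta_{\mathrm{bias}},
$$

where $\theta_{b}$ and $\theta_{g}$ are the upright directions indicated by the body and gravity cues (given equal weight), and $\theta_{\mathrm{bias}}$ captures the constant leftward offset.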

In: Seeing and Perceiving

The perception of simultaneity between auditory and vestibular information is crucially important for maintaining a coherent representation of the acoustic environment whenever the head moves. Yet, despite similar transduction latencies, vestibular stimuli are perceived significantly later than auditory stimuli when simultaneously generated (Barnett-Cowan and Harris). However, these studies paired a vestibular stimulation of long duration (∼1 s) and of a continuously changing temporal envelope with brief (10–50 ms) sound pulses. In the present study the stimuli were matched for temporal envelope. Participants judged the temporal order of the onset of an active head movement and of brief (50 ms) or long (1400 ms) sounds with a square or raised-cosine shaped envelope. Consistent with previous reports, head movement onset had to precede the onset of a brief sound by about 73 ms in order to be perceived as simultaneous. Lead times for head movements paired with long square sounds (∼100 ms) were not significantly different from those for brief sounds. Surprisingly, head movements paired with long raised-cosine sounds (∼115 ms) had to be presented even earlier than with brief stimuli. This additional lead time could not be accounted for by differences in the comparison stimulus characteristics (duration and temporal envelope). Rather, differences among sound conditions were found to be attributable to variability in the time for the head movement to reach peak velocity: the head moved faster when paired with a brief sound. The persistent lead time required for vestibular stimulation provides further evidence that the perceptual latency of vestibular stimulation is longer than that of auditory stimuli.

In: Seeing and Perceiving

The restricted operational space of dynamic driving simulators requires motion cueing algorithms that tilt the simulator cabin to reproduce sustained accelerations. To avoid conflicting inertial cues, the tilt rate is kept below drivers’ perceptual thresholds, which are typically derived from classical vestibular research in which additional sensory cues to self-motion are removed. These limits may be too conservative for ecological driving simulation, which provides a variety of complex visual and vestibular cues as well as attentional demands that vary with task difficulty. We measured roll rate detection thresholds in active driving simulation, where visual and vestibular stimuli are provided along with the increased cognitive load of the driving task. Thresholds during active driving were compared with tilt rate detection thresholds reported in the literature (passive thresholds) to assess the effect of the driving task. In a second experiment, these thresholds (active versus passive) were related to driving preferences in a slalom course in order to determine which roll rate values are most appropriate for driving simulators to present the most realistic driving experience. The results show that the detection threshold for roll in an active driving task is significantly higher than the limits currently used in motion cueing algorithms, suggesting that higher tilt limits can be implemented to better exploit the simulator operational space. Supra-threshold roll rates in the slalom task were also rated as more realistic. Overall, our findings indicate that increasing task complexity in driving simulation can decrease motion sensitivity, allowing further expansion of the virtual workspace.

In: Seeing and Perceiving

The ability to successfully integrate simultaneous information relayed across multiple sensory systems is an integral aspect of everyday functioning (Saxon et al., 2010). To date, however, multisensory integration processes have not been thoroughly evaluated, and their relation to important clinical outcomes in healthy and patient populations is not entirely known; this has been recognized in several influential editorials as a major knowledge gap in the field (Meyer and Noppeney, 2011; Wallace, 2012).

One specific population in which multisensory integration research could prove valuable is older adults, given that healthy aging presents many challenges to the central nervous system.

In: Multisensory Research

Abstract

Sensory information provided by the vestibular system is crucial for cognitive processes such as the ability to recognize objects. The orientation at which objects are most easily recognized — the perceptual upright (PU) — is influenced by body orientation with respect to gravity, as detected by the somatosensory and vestibular systems. To date, the influence of these sensory cues on the PU has been measured using a letter recognition task. Here we assessed whether gravitational influences on letter recognition also extend to human face recognition. Thirteen right-handed observers were positioned in four body orientations (upright, left-side-down, right-side-down, supine) and visually discriminated ambiguous characters (‘p’-from-‘d’; ‘i’-from-‘!’) and ambiguous faces used in popular visual illusions (‘young woman’-from-‘old woman’; ‘grinning man’-from-‘frowning man’) in a forced-choice paradigm. The two transition points (e.g., ‘p-to-d’ and ‘d-to-p’; ‘young woman-to-old woman’ and ‘old woman-to-young woman’) were each fit with a sigmoidal psychometric function, and the average of these transitions was taken as the PU for each stimulus category. The results show that both faces and letters are more influenced by body orientation than by gravity. However, faces are recognized best when more closely aligned with the body, whereas letters are relatively more influenced by gravity. Our results indicate that the brain does not utilize a common representation of upright that governs recognition of all object categories. Distinct areas of ventro-temporal cortex that represent faces and letters may weight bodily and gravitational cues differently — possibly to facilitate the specific demands of face and letter recognition.
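A minimal sketch (not the study's analysis code) of the transition-averaging procedure described above, fitting each identity transition with a logistic sigmoid and averaging the two fitted midpoints; the orientations and response proportions are hypothetical:

import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    # Sigmoidal psychometric function; x0 is the transition midpoint (deg).
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Character orientations (deg) and hypothetical proportions of 'd' responses
# around each transition (illustration only, not data from the study).
ori_pd = np.array([60.0, 75.0, 90.0, 105.0, 120.0])     # 'p-to-d' region
resp_pd = np.array([0.05, 0.20, 0.55, 0.80, 0.95])
ori_dp = np.array([240.0, 255.0, 270.0, 285.0, 300.0])  # 'd-to-p' region
resp_dp = np.array([0.95, 0.80, 0.50, 0.20, 0.05])

(x0_pd, _), _ = curve_fit(logistic, ori_pd, resp_pd, p0=[90.0, 0.1])
(x0_dp, _), _ = curve_fit(logistic, ori_dp, resp_dp, p0=[270.0, -0.1])

# The PU is taken as the average of the two fitted transition points
# (modulo the task's orientation convention).
pu = (x0_pd + x0_dp) / 2.0
print(f"p-to-d at {x0_pd:.1f} deg, d-to-p at {x0_dp:.1f} deg, PU = {pu:.1f} deg")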

In: Vestibular Cognition