Search Results

Multisensory stimuli originating from the same event can be perceived asynchronously due to differential physical and neural delays. The transduction of, and physiological responses to, vestibular stimulation are extremely fast, suggesting that other stimuli need to be presented prior to vestibular stimulation in order to be perceived as simultaneous. There is, however, a recent and growing body of evidence that indicates that the perceived onset of vestibular stimulation is slow compared to the other senses, such that vestibular stimuli need to be presented prior to other sensory stimuli in order to be perceived synchronously. A review of this literature suggests that this perceived latency of vestibular stimulation may reflect the fact that vestibular stimulation is most often associated with sensory events that occur following head movement, that the vestibular system rarely works alone, that additional computations are required for processing vestibular information, and that the brain prioritizes physiological responses to vestibular stimulation over perceptual awareness of stimulation onset. Empirical investigation of these theoretical predictions is encouraged in order to fully understand this surprising result and its implications, and to advance the field.

In: Multisensory Research

Multisensory stimuli originating from the same event can be perceived asynchronously due to differential physical and neural delays. The transduction of, and physiological responses to, vestibular stimulation are extremely fast, suggesting that other stimuli need to be presented prior to vestibular stimulation in order to be perceived as simultaneous. There is, however, a recent and growing body of evidence that indicates that the perceived onset of vestibular stimulation is slow compared to the other senses, such that vestibular stimuli need to be presented prior to other sensory stimuli in order to be perceived synchronously. Following a review of this literature I will argue that this perceived latency of vestibular stimulation likely reflects the fact that vestibular stimulation is most often associated with sensory events that occur following head movement, that the vestibular system rarely works alone, and that the brain prioritizes physiological responses to vestibular stimulation over perceptual awareness of stimulation onset.

In: Seeing and Perceiving

Sensory information provided by the vestibular system is crucial in cognitive processes such as the ability to recognize objects. The orientation at which objects are most easily recognized — the perceptual upright (PU) — is influenced by body orientation with respect to gravity as detected by the somatosensory and vestibular systems. To date, the influence of these sensory cues on the PU has been measured using a letter recognition task. Here we assessed whether gravitational influences on letter recognition also extend to human face recognition. Thirteen right-handed observers were positioned in four body orientations (upright, left-side-down, right-side-down, supine) and visually discriminated ambiguous characters (‘p’-from-‘d’; ‘i’-from-‘!’) and ambiguous faces used in popular visual illusions (‘young woman’-from-‘old woman’; ‘grinning man’-from-‘frowning man’) in a forced-choice paradigm. The two transition points (e.g., ‘p-to-d’ and ‘d-to-p’; ‘young woman-to-old woman’ and ‘old woman-to-young woman’) were fit with a sigmoidal psychometric function and the average of these transitions was taken as the PU for each stimulus category. The results show that both faces and letters are more influenced by body orientation than by gravity. However, faces are recognized optimally when closer in alignment with body orientation than letters, which are more influenced by gravity. Our results indicate that the brain does not utilize a common representation of upright that governs recognition of all object categories. Distinct areas of ventro-temporal cortex that represent faces and letters may weight bodily and gravitational cues differently — possibly to facilitate the specific demands of face and letter recognition.

In: Multisensory Research
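The transition-and-average procedure described in the abstract above can be sketched in a few lines. This is a minimal illustration, not the authors' analysis code: the 50% crossing is found by linear interpolation between tested orientations (a simple stand-in for the sigmoidal fit), and the function names and data are hypothetical.

```python
def transition_point(orientations, p_responses):
    """Estimate the orientation (deg) at which the proportion of one
    response (e.g., 'p') crosses 50%, by linear interpolation between
    the two tested orientations that bracket the crossing. A stand-in
    for fitting a sigmoidal psychometric function."""
    pairs = list(zip(orientations, p_responses))
    for (x0, p0), (x1, p1) in zip(pairs, pairs[1:]):
        if (p0 - 0.5) * (p1 - 0.5) <= 0 and p0 != p1:
            return x0 + (0.5 - p0) * (x1 - x0) / (p1 - p0)
    raise ValueError("no 50% crossing in the sampled range")

def perceptual_upright(t_first, t_second):
    """PU as defined in the abstract: the average of the two
    transition points (e.g., 'p-to-d' and 'd-to-p')."""
    return (t_first + t_second) / 2.0
```

With hypothetical data, `transition_point([0, 30, 60, 90], [0.9, 0.7, 0.3, 0.1])` places the crossing at 45°; averaging two such transitions gives the PU for that stimulus category.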

The perception of simultaneity between auditory and vestibular information is crucially important for maintaining a coherent representation of the acoustic environment whenever the head moves. Yet, despite similar transduction latencies, vestibular stimuli are perceived significantly later than auditory stimuli when simultaneously generated (Barnett-Cowan and Harris, 2009, 2011). However, these studies paired a vestibular stimulation of long duration (∼1 s) and of a continuously changing temporal envelope with brief (10–50 ms) sound pulses. In the present study the stimuli were matched for temporal envelope. Participants judged the temporal order of the onset of an active head movement and of brief (50 ms) or long (1400 ms) sounds with a square or raised-cosine shaped envelope. Consistent with previous reports, head movement onset had to precede the onset of a brief sound by about 73 ms in order to be perceived as simultaneous. The lead time for head movements paired with long square-envelope sounds (∼100 ms) did not differ significantly from that for brief sounds. Surprisingly, head movements paired with long raised-cosine sounds (∼115 ms) had to be presented even earlier than brief stimuli. This additional lead time could not be accounted for by differences in the comparison stimulus characteristics (duration and temporal envelope). Rather, differences among sound conditions were found to be attributable to variability in the time for the head movement to reach peak velocity: the head moved faster when paired with a brief sound. The persistent lead time required for vestibular stimulation provides further evidence that the perceptual latency of vestibular stimulation is larger than that of auditory stimuli.

In: Seeing and Perceiving
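The ∼73 ms lead time above is a point of subjective simultaneity (PSS) estimated from temporal-order judgements. A minimal sketch of that estimate, assuming a cumulative-Gaussian psychometric model and a coarse grid search; the data, parameter ranges, and function names are illustrative, not from the study:

```python
import math

def cum_gauss(soa, pss, sigma):
    """Probability of judging 'head moved first' as a function of SOA
    (ms; positive = head movement onset led the sound onset)."""
    return 0.5 * (1.0 + math.erf((soa - pss) / (sigma * math.sqrt(2.0))))

def fit_pss(soas, p_head_first):
    """Grid-search least-squares fit of the cumulative Gaussian; returns
    the PSS (ms), the SOA at which both orders are reported equally often."""
    best_err, best_pss = float("inf"), None
    for pss in range(-200, 201):            # candidate PSS values, ms
        for sigma in range(10, 301, 10):    # candidate JND widths, ms
            err = sum((cum_gauss(s, pss, sigma) - p) ** 2
                      for s, p in zip(soas, p_head_first))
            if err < best_err:
                best_err, best_pss = err, pss
    return best_pss
```

Fed noiseless proportions generated with a PSS of 73 ms, the grid search recovers that value; a positive PSS means the head movement had to lead the sound, as reported above.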

The orientation at which objects are most easily recognized — the perceptual upright (PU) — is influenced by body orientation with respect to gravity. To date, the influence of these cues on object recognition has only been measured within the visual system. Here we investigate whether objects explored through touch alone are similarly influenced by body and gravitational information. Using the Oriented CHAracter Recognition Test (OCHART) adapted for haptics, blindfolded right-handed observers indicated whether the symbol ‘p’ presented in various orientations was the letter ‘p’ or ‘d’ following active touch. The average of ‘p-to-d’ and ‘d-to-p’ transitions was taken as the haptic PU. Sensory information was manipulated by positioning observers in different orientations relative to gravity with the head, body, and hand aligned. Results show that haptic object recognition is equally influenced by body and gravitational reference frames, but with a constant leftward bias. This leftward bias in the haptic PU resembles leftward biases reported for visual object recognition. The influence of body orientation and gravity on the haptic PU was well predicted by an equally weighted vectorial sum of the directions indicated by these cues. Our results demonstrate that information from different reference frames influences the perceptual upright in haptic object recognition. Taken together with similar investigations in vision, our findings suggest that reliance on body and gravitational frames of reference helps maintain optimal object recognition. Equally relying on body and gravitational information may facilitate haptic exploration with an upright posture, while compensating for poor vestibular sensitivity when tilted.

In: Seeing and Perceiving
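The equally weighted vectorial-sum prediction mentioned above can be written out directly. A sketch under stated assumptions: cue directions are angles in degrees, the weights and the constant leftward bias are free parameters, and the function name is hypothetical.

```python
import math

def predicted_pu(body_deg, gravity_deg, w_body=1.0, w_gravity=1.0, bias_deg=0.0):
    """Predict the perceptual upright as the direction of a weighted
    vector sum of the body and gravity cues, plus a constant bias term
    (e.g., the leftward bias reported for the haptic PU)."""
    x = (w_body * math.cos(math.radians(body_deg))
         + w_gravity * math.cos(math.radians(gravity_deg)))
    y = (w_body * math.sin(math.radians(body_deg))
         + w_gravity * math.sin(math.radians(gravity_deg)))
    return math.degrees(math.atan2(y, x)) + bias_deg
```

With equal weights, a body cue at 0° and a gravity cue at 90° predict a PU of 45°, halfway between the two, consistent with the equal weighting reported in the abstract.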

The restricted operational space of dynamic driving simulators requires the implementation of motion cueing algorithms that tilt the simulator cabin to reproduce sustained accelerations. In order to avoid conflicting inertial cues, the tilt rate is limited below drivers’ perceptual thresholds, which are typically derived from the results of classical vestibular research, where additional sensory cues to self-motion are removed. These limits might be too conservative for an ecological driving simulation, which provides a variety of complex visual and vestibular cues as well as demands of attention which vary with task difficulty. We measured roll rate detection thresholds in an active driving simulation, where visual and vestibular stimuli are provided as well as increased cognitive load from the driving task. Here thresholds during active driving are compared with tilt rate detection thresholds found in the literature (passive thresholds) to assess the effect of the driving task. In a second experiment, these thresholds (active versus passive) are related to driving preferences in a slalom driving course in order to determine which roll rate values are most appropriate for driving simulators so as to present the most realistic driving experience. The results show that the detection threshold for roll in an active driving task is significantly higher than the limits currently used in motion cueing algorithms, suggesting that higher tilt limits can be successfully implemented to better optimize simulator operational space. Supra-threshold roll rates in the slalom task are also rated as more realistic. Overall, our findings indicate that increasing task complexity in driving simulation can decrease motion sensitivity, allowing for further expansion of the virtual workspace environment.

In: Seeing and Perceiving
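The tilt-coordination limit that the abstract above targets is, at its core, a rate clamp on cabin roll. A minimal sketch of that step of a motion cueing algorithm, assuming a fixed per-axis threshold; the names and values are illustrative, not any particular simulator's implementation:

```python
def rate_limited_roll(desired_roll_deg, current_roll_deg, dt_s, max_rate_deg_s):
    """Advance the cabin roll toward the angle demanded by tilt
    coordination, but never faster than max_rate_deg_s, so the tilt
    stays below the driver's roll-rate detection threshold."""
    max_step = max_rate_deg_s * dt_s
    step = desired_roll_deg - current_roll_deg
    step = max(-max_step, min(max_step, step))  # clamp the per-step change
    return current_roll_deg + step
```

Raising `max_rate_deg_s` toward the higher active-driving thresholds reported above would free more of the simulator's operational space while keeping the tilt imperceptible.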


Sensory information provided by the vestibular system is crucial in cognitive processes such as the ability to recognize objects. The orientation at which objects are most easily recognized — the perceptual upright (PU) — is influenced by body orientation with respect to gravity as detected by the somatosensory and vestibular systems. To date, the influence of these sensory cues on the PU has been measured using a letter recognition task. Here we assessed whether gravitational influences on letter recognition also extend to human face recognition. Thirteen right-handed observers were positioned in four body orientations (upright, left-side-down, right-side-down, supine) and visually discriminated ambiguous characters (‘p’-from-‘d’; ‘i’-from-‘!’) and ambiguous faces used in popular visual illusions (‘young woman’-from-‘old woman’; ‘grinning man’-from-‘frowning man’) in a forced-choice paradigm. The two transition points (e.g., ‘p-to-d’ and ‘d-to-p’; ‘young woman-to-old woman’ and ‘old woman-to-young woman’) were fit with a sigmoidal psychometric function and the average of these transitions was taken as the PU for each stimulus category. The results show that both faces and letters are more influenced by body orientation than by gravity. However, faces are recognized optimally when closer in alignment with body orientation than letters, which are more influenced by gravity. Our results indicate that the brain does not utilize a common representation of upright that governs recognition of all object categories. Distinct areas of ventro-temporal cortex that represent faces and letters may weight bodily and gravitational cues differently — possibly to facilitate the specific demands of face and letter recognition.

In: Vestibular Cognition

Reaction times (RTs) to purely inertial self-motion stimuli have only infrequently been studied, and comparisons of RTs for translations and rotations, to our knowledge, are nonexistent. We recently proposed a model (Soyka et al., 2011) which describes direction discrimination thresholds for rotational and translational motions based on the dynamics of the vestibular sensory organs (otoliths and semicircular canals). This model also predicts differences in RTs for different motion profiles (e.g., trapezoidal versus triangular acceleration profiles or varying profile durations). In order to assess these predictions we measured RTs in 20 participants for 8 supra-threshold motion profiles (4 translations, 4 rotations). A two-alternative forced-choice task, discriminating leftward from rightward motions, was used and 30 correct responses per condition were evaluated. The results agree with predictions for RT differences between motion profiles as derived from previously identified model parameters from threshold measurements. To describe absolute RT, a constant is added to the predictions, representing both the discrimination process and the time needed to press the response button. This constant is approximately 160 ms shorter for rotations, thus indicating that additional processing time is required for translational motion. As this additional latency cannot be explained by our model based on the dynamics of the sensory organs, we speculate that it originates at a later stage, e.g., during tilt-translation disambiguation. Varying processing latencies for different self-motion stimuli (either translations or rotations) which our model can account for must be considered when assessing the perceived timing of vestibular stimulation in comparison with other senses (Barnett-Cowan and Harris, 2009; Sanders et al., 2011).

In: Seeing and Perceiving
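The additive constant described above (the discrimination process plus the button press) can be estimated as a mean residual between observed and model-predicted latencies. A sketch with hypothetical numbers, not the study's data or model code:

```python
def response_constant_ms(observed_rts_ms, model_latencies_ms):
    """Estimate the additive constant as the mean difference between
    observed reaction times and the latencies predicted by the
    sensory-dynamics model for each motion profile."""
    residuals = [rt - lat for rt, lat in zip(observed_rts_ms, model_latencies_ms)]
    return sum(residuals) / len(residuals)
```

Computing this constant separately for rotation and translation profiles and taking the difference would expose the ∼160 ms rotation advantage reported above.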