Search Results

The restricted operational space of dynamic driving simulators requires the implementation of motion cueing algorithms that tilt the simulator cabin to reproduce sustained accelerations. To avoid conflicting inertial cues, the tilt rate is limited below drivers’ perceptual thresholds, which are typically derived from the results of classical vestibular research, where additional sensory cues to self-motion are removed. These limits might be too conservative for an ecological driving simulation, which provides a variety of complex visual and vestibular cues as well as attentional demands that vary with task difficulty. We measured roll rate detection thresholds in active driving simulation, where visual and vestibular stimuli are provided alongside the increased cognitive load of the driving task. Thresholds measured during active driving are compared with tilt rate detection thresholds reported in the literature (passive thresholds) to assess the effect of the driving task. In a second experiment, these thresholds (active versus passive) are related to driving preferences in a slalom driving course in order to determine which roll rate values are most appropriate for presenting the most realistic driving experience in a simulator. The results show that the detection threshold for roll in an active driving task is significantly higher than the limits currently used in motion cueing algorithms, suggesting that higher tilt limits can be successfully implemented to better exploit the simulator’s operational space. Supra-threshold roll rates in the slalom task are also rated as more realistic. Overall, our findings indicate that increasing task complexity in driving simulation can decrease motion sensitivity, allowing further expansion of the virtual workspace.

In: Seeing and Perceiving
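
The tilt-coordination logic this abstract builds on can be sketched briefly. The following Python snippet is a minimal illustration, not the authors' implementation: the function name, the 3 deg/s rate limit, and the example trajectory are assumptions, though rate limits of that order are commonly cited.

```python
import numpy as np

G = 9.81                 # gravitational acceleration, m/s^2
RATE_LIMIT_DEG_S = 3.0   # assumed tilt-rate limit, deg/s

def tilt_coordination(a_demand, dt, rate_limit_deg_s=RATE_LIMIT_DEG_S):
    """Turn a sustained longitudinal acceleration demand (m/s^2 per sample)
    into a cabin pitch trajectory whose tilt rate never exceeds the limit."""
    rate_limit = np.radians(rate_limit_deg_s)
    theta = 0.0
    angles = np.empty(len(a_demand))
    for i, a in enumerate(a_demand):
        # Pitch angle whose gravity component g*sin(theta) reproduces 'a'.
        theta_target = np.arcsin(np.clip(a / G, -1.0, 1.0))
        # Step toward the target, clamped to the tilt-rate limit.
        step = np.clip(theta_target - theta, -rate_limit * dt, rate_limit * dt)
        theta += step
        angles[i] = theta
    return angles

# Example: a sustained 2 m/s^2 braking cue sampled at 100 Hz for 5 s.
pitch = tilt_coordination(np.full(500, 2.0), dt=0.01)
print(np.degrees(pitch[-1]))  # approaches arcsin(2/9.81), about 11.8 deg
```

The rate limit is what constrains the simulator's usable tilt envelope; raising it, as the results above suggest is perceptually safe during active driving, lets the cabin reach the target angle sooner.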

The simultaneous visuo–tactile stimulation of an individual’s body and a virtual body (avatar) is an experimental method used to investigate the mechanisms of self-experience. Studies using this method have found that it elicits the experience of bodily ownership over the avatar. Moreover, in our own research we found that it also affects the experience of agency and spatial presence, as well as the perception of self-motion, and thus self-localization. However, it has not yet been investigated whether these effects represent distinct categories within conscious experience. We stroked the back of 21 male participants for three minutes while they watched an avatar being synchronously stroked within a virtual city in a head-mounted display setup. Subsequently, we assessed their avatar and spatial presence experience with 23 questionnaire items. Analyzing the responses to all items by means of nonmetric multidimensional scaling yielded a two-dimensional map (stress = 0.151) on which three distinct categories of items could be identified: a cluster (Cronbach’s alpha = 0.89) consisting of all presence items, a cluster (Cronbach’s alpha = 0.88) consisting of agency-related items, and a cluster (Cronbach’s alpha = 0.93) consisting of items related to body ownership as well as self-localization. Spatial presence may have formed a distinct category because body ownership, self-localization, and agency are not reported in relation to space. Body ownership and self-localization belonged to the same category, which we named identification phenomena. Hence, we propose the following three higher-order categories of self-experience: identification, agency, and spatial presence.

In: Seeing and Perceiving
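
The clustering analysis described above combines nonmetric multidimensional scaling with a per-cluster reliability check. The sketch below, in Python with scikit-learn and synthetic stand-in data (the real questionnaire responses are not reproduced here, and the 1 − r dissimilarity is one common choice, not necessarily the authors'), shows the general shape of such a pipeline.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)

# Synthetic stand-in: 21 participants x 23 questionnaire items (7-point scale).
responses = rng.integers(1, 8, size=(21, 23)).astype(float)

# Item-by-item dissimilarity from inter-item correlations (1 - r).
corr = np.corrcoef(responses, rowvar=False)
dissim = 1.0 - corr
np.fill_diagonal(dissim, 0.0)

# Nonmetric MDS onto a two-dimensional map; stress_ quantifies the misfit.
mds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
          random_state=0)
coords = mds.fit_transform(dissim)
print("stress:", mds.stress_)

def cronbach_alpha(items):
    """Cronbach's alpha for an (observations x items) array."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# Internal consistency of a putative cluster (here: the first five items).
print("alpha:", cronbach_alpha(responses[:, :5]))
```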

The perception of simultaneity between auditory and vestibular information is crucially important for maintaining a coherent representation of the acoustic environment whenever the head moves. Yet, despite similar transduction latencies, vestibular stimuli are perceived significantly later than auditory stimuli when generated simultaneously (Barnett-Cowan and Harris). However, these studies paired a vestibular stimulus of long duration (∼1 s) with a continuously changing temporal envelope against brief (10–50 ms) sound pulses. In the present study the stimuli were matched for temporal envelope. Participants judged the temporal order of the onset of an active head movement and of brief (50 ms) or long (1400 ms) sounds with a square or raised-cosine shaped envelope. Consistent with previous reports, head movement onset had to precede the onset of a brief sound by about 73 ms in order to be perceived as simultaneous. The lead time for head movements paired with long square sounds (∼100 ms) did not differ significantly from that for brief sounds. Surprisingly, head movements paired with long raised-cosine sounds had to be presented even earlier (∼115 ms) than brief stimuli. This additional lead time could not be accounted for by differences in the comparison stimulus characteristics (duration and temporal envelope). Rather, differences among sound conditions were found to be attributable to variability in the time for the head movement to reach peak velocity: the head moved faster when paired with a brief sound. The persistent lead time required for vestibular stimulation provides further evidence that the perceptual latency of vestibular stimulation is longer than that of auditory stimuli.

In: Seeing and Perceiving
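
Temporal order judgments of this kind are commonly summarized by fitting a psychometric function and reading off the point of subjective simultaneity (PSS). The sketch below uses synthetic data chosen so the PSS lands near the ∼73 ms head-movement lead reported above; the cumulative-Gaussian model is a standard choice, not necessarily the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Synthetic TOJ data. SOA = sound onset minus head-movement onset (ms):
# negative values mean the sound led, positive values mean the head led.
soa = np.array([-150, -100, -50, 0, 50, 100, 150, 200], dtype=float)
p_sound_first = np.array([0.97, 0.92, 0.82, 0.70, 0.58, 0.40, 0.22, 0.10])

def psychometric(x, pss, sigma):
    # Probability of judging the sound first, falling with SOA;
    # the PSS is the 50% point of the curve.
    return 1.0 - norm.cdf(x, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(psychometric, soa, p_sound_first, p0=(70.0, 80.0))
print(f"PSS = {pss:.1f} ms, sigma = {sigma:.1f} ms")
# A positive PSS means the head movement must start before the sound to be
# judged simultaneous, as with the ~73 ms lead reported in the abstract.
```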

Self-motion through an environment stimulates several sensory systems, including the visual system and the vestibular system. Recent work in heading estimation has demonstrated that visual and vestibular cues are typically integrated in a statistically optimal manner, consistent with Maximum Likelihood Estimation predictions. However, there has been some indication that cue integration may be affected by characteristics of the visual stimulus. The current experiment therefore evaluated whether presenting optic flow stimuli stereoscopically, rather than presenting both eyes with the same image (binocularly), affects combined visual–vestibular heading estimates.

Participants performed a two-interval forced-choice task in which they were asked which of two presented movements was more rightward. They were presented with either visual cues alone, vestibular cues alone or both cues combined. Measures of reliability were obtained for both binocular and stereoscopic conditions.

Group-level analyses demonstrated clear evidence of optimal integration when stereoscopic information was available, but only weaker evidence of cue integration when binocular information alone was available. Exploratory individual analyses showed that 90% of participants exhibited optimal integration in the stereoscopic condition, whereas only 60% did so in the binocular condition. Overall, these findings suggest that stereo vision may be important for self-motion perception, particularly under combined visual–vestibular conditions.

In: Seeing and Perceiving
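
The optimal-integration benchmark referred to above is the standard Maximum Likelihood Estimation prediction for two independent cues: the combined heading estimate is a reliability-weighted average of the single-cue estimates, and its variance falls below that of either cue alone (the subscripts v and vest, for visual and vestibular, are notational choices here):

```latex
\hat{s}_{vv} = w_v \hat{s}_v + w_{vest}\,\hat{s}_{vest}, \qquad
w_v = \frac{\sigma_{vest}^2}{\sigma_v^2 + \sigma_{vest}^2}, \qquad
w_{vest} = 1 - w_v, \qquad
\sigma_{vv}^2 = \frac{\sigma_v^2 \,\sigma_{vest}^2}{\sigma_v^2 + \sigma_{vest}^2}
```

Optimal integration is then assessed by comparing the combined-cue variance measured in the two-interval forced-choice task against this predicted variance.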

Reaction times (RTs) to purely inertial self-motion stimuli have only infrequently been studied, and comparisons of RTs for translations and rotations are, to our knowledge, nonexistent. We recently proposed a model (Soyka et al.) which describes direction discrimination thresholds for rotational and translational motions based on the dynamics of the vestibular sensory organs (otoliths and semicircular canals). This model also predicts differences in RTs between motion profiles (e.g., trapezoidal versus triangular acceleration profiles, or profiles of varying duration). To assess these predictions, we measured RTs in 20 participants for 8 supra-threshold motion profiles (4 translations, 4 rotations). A two-alternative forced-choice task, discriminating leftward from rightward motions, was used, and 30 correct responses per condition were evaluated. The results agree with the RT differences between motion profiles predicted from model parameters previously identified in threshold measurements. To describe absolute RT, a constant is added to the predictions, representing both the discrimination process and the time needed to press the response button. This constant is approximately 160 ms shorter for rotations, indicating that additional processing time is required for translational motion. As this additional latency cannot be explained by our model, which is based on the dynamics of the sensory organs, we speculate that it originates at a later stage, e.g., during tilt-translation disambiguation. The varying processing latencies for different self-motion stimuli (translations versus rotations), which our model can account for, must be considered when assessing the perceived timing of vestibular stimulation in comparison with other senses (Barnett-Cowan and Harris; Sanders et al.).

In: Seeing and Perceiving
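
The model's RT prediction reduces to a threshold-crossing computation on a filtered acceleration signal. The Python sketch below illustrates that logic only: the first-order high-pass filter, its time constant, the detection threshold, and the added constant are illustrative assumptions, not the fitted parameters of Soyka et al.

```python
import numpy as np
from scipy.signal import lti, lsim

dt = 0.001
t = np.arange(0.0, 2.0, dt)

# Trapezoidal acceleration profile: ramp up over 0.25 s, plateau, ramp down.
accel = 0.5 * np.clip(np.minimum(t / 0.25, (1.0 - t) / 0.25), 0.0, 1.0)

# First-order high-pass dynamics as a stand-in for vestibular-organ filtering.
tau = 0.7  # assumed time constant, s
organ = lti([tau, 0.0], [tau, 1.0])   # H(s) = tau*s / (tau*s + 1)
_, response, _ = lsim(organ, U=accel, T=t)

def predicted_rt(signal, times, threshold, constant):
    """First time the filtered signal exceeds threshold, plus a constant
    covering the decision process and the button press."""
    idx = np.argmax(np.abs(signal) >= threshold)
    return times[idx] + constant

# Assumed threshold and offset; the abstract's ~160 ms rotation/translation
# difference would enter through different constants per motion type.
print(predicted_rt(response, t, threshold=0.1, constant=0.30))
```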