statically. This perceptual phenomenon is called visually induced self-motion perception or vection (Fischer and Kornmüller, 1930). Under natural circumstances, human self-motion perception is accomplished through multisensory information, including visual, vestibular, somatosensory, and kinesthetic sensations
of one’s own body motion on the subjective experience of time. In this study, we suggest a new way to study this effect of self-motion. We used immersive virtual reality (VR) technology to achieve high control over stimulus variables while maintaining external validity. We aimed to explore whether
Humans integrate multisensory information to reduce perceptual uncertainty when perceiving the world (Hillis et al.) and the self (Butler et al.; Prsa et al.). It has been shown that two multisensory cues are combined into a single percept only if they are attributed to the same causal event (Koerding et al.; Parise et al.; Shams and Beierholm). A growing body of literature examines the limits of such integration for bodily self-consciousness and the perception of self-location under normal and pathological conditions (Ionta et al.). We extend this research by investigating whether human subjects can learn to integrate two arbitrary visual and vestibular self-motion cues on the basis of their temporal co-occurrence.
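The reliability-weighted combination rule behind such optimal integration can be sketched as follows. This is an illustrative example only, with hypothetical numbers that are not taken from the cited studies:

```python
# Illustrative sketch (numbers hypothetical, not from the cited studies):
# maximum-likelihood integration of two cues. Each cue i yields an estimate
# s_i with noise variance v_i; the statistically optimal combination weights
# each cue by its reliability (the inverse of its variance).

def mle_combine(s1, v1, s2, v2):
    """Reliability-weighted fusion of two cue estimates."""
    w1 = (1 / v1) / (1 / v1 + 1 / v2)      # weight on cue 1
    s_hat = w1 * s1 + (1 - w1) * s2        # combined estimate
    v_hat = (v1 * v2) / (v1 + v2)          # combined variance, below either cue's
    return s_hat, v_hat

# A visual cue (20 deg, variance 4) and a vestibular cue (26 deg, variance 12):
s_hat, v_hat = mle_combine(20.0, 4.0, 26.0, 12.0)
print(s_hat, v_hat)  # approximately 21.5 and 3.0: pulled toward the reliable cue
```

Note that the combined variance is smaller than either single-cue variance; this reduction in uncertainty is the empirical signature of optimal integration.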
We conducted two experiments ( each) in which whole-body rotations served as the vestibular stimulus and optic flow as the visual stimulus. The vestibular stimulus provided a yaw self-rotation cue; the visual stimulus provided a roll (experiment 1) or pitch (experiment 2) rotation cue. Subjects made a relative size comparison between a standard rotation size and a variable test rotation size. Their discrimination performance was fitted with a psychometric function, from which perceptual discrimination thresholds were extracted. We compared the experimentally measured thresholds in the bimodal condition with theoretical predictions derived from the single-cue thresholds.
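The theoretical prediction against which bimodal performance is benchmarked can be computed directly from the single-cue thresholds. A minimal sketch, assuming thresholds are proportional to the underlying noise standard deviations (the example values are hypothetical):

```python
import math

# Hedged sketch of the optimal-integration benchmark: if single-cue thresholds
# scale with the noise SDs, the predicted bimodal threshold is
#   t_bi = sqrt(t_vis^2 * t_vest^2 / (t_vis^2 + t_vest^2)),
# which is always below the smaller of the two single-cue thresholds.

def predicted_bimodal_threshold(t_vis, t_vest):
    """Optimal-integration prediction from two unimodal thresholds."""
    return math.sqrt((t_vis**2 * t_vest**2) / (t_vis**2 + t_vest**2))

# Hypothetical thresholds (deg): visual 3.0, vestibular 4.0
t_bi = predicted_bimodal_threshold(3.0, 4.0)
print(round(t_bi, 2))  # 2.4, lower than either single-cue threshold
```

A measured bimodal threshold matching this prediction, rather than merely matching the better single cue, is what indicates optimal integration.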
Our results show that human subjects can learn to combine and optimally integrate vestibular and visual information when each signals self-motion around a different rotation axis (yaw versus roll or pitch). This finding suggests that experiencing two temporally co-occurring but spatially unrelated self-motion cues leads observers to infer a common cause for these two initially unrelated sources of information about self-motion.
riding in a vehicle, retinal motion arises from self-motion induced by locomotion, from object motion evoked by independently moving objects (other vehicles or pedestrians), and from eye and head movements made to monitor the surrounding traffic (e.g., oncoming vehicles, the lead car), together with accompanying vestibular
Reaction times (RTs) to purely inertial self-motion stimuli have only infrequently been studied, and comparisons of RTs for translations and rotations are, to our knowledge, nonexistent. We recently proposed a model (Soyka et al.) that describes direction-discrimination thresholds for rotational and translational motions based on the dynamics of the vestibular sensory organs (otoliths and semicircular canals). This model also predicts differences in RTs between motion profiles (e.g., trapezoidal versus triangular acceleration profiles, or profiles of varying duration). To assess these predictions, we measured RTs in 20 participants for 8 supra-threshold motion profiles (4 translations, 4 rotations). A two-alternative forced-choice task, discriminating leftward from rightward motions, was used, and 30 correct responses per condition were evaluated. The results agree with the RT differences between motion profiles predicted from model parameters previously identified in threshold measurements. To describe absolute RT, a constant is added to the predictions, representing both the discrimination process and the time needed to press the response button. This constant is approximately 160 ms shorter for rotations, indicating that additional processing time is required for translational motion. As this additional latency cannot be explained by our model, which is based on the dynamics of the sensory organs, we speculate that it originates at a later stage, e.g., during tilt-translation disambiguation. The varying processing latencies for different self-motion stimuli (translations or rotations), which our model can account for, must be considered when assessing the perceived timing of vestibular stimulation relative to other senses (Barnett-Cowan and Harris; Sanders et al.).
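The intuition behind profile-dependent RT predictions can be illustrated with a toy simulation. This is not the Soyka et al. model; it is a hedged sketch with invented parameters in which a first-order filter stands in for vestibular organ dynamics, and the predicted RT is the time at which the filtered stimulus first crosses a fixed detection threshold:

```python
# Illustrative sketch only (all parameters hypothetical, not from Soyka et al.):
# pass two acceleration profiles through a first-order filter standing in for
# vestibular organ dynamics and time when the output crosses a threshold.

DT = 0.001          # simulation step (s)
TAU = 0.05          # filter time constant (s), hypothetical
PEAK = 1.0          # peak acceleration (arbitrary units)
THRESHOLD = 0.5     # detection threshold on the filtered signal

def trapezoid(t, total=1.0, ramp=0.1):
    """Trapezoidal acceleration: quick ramp up, plateau, quick ramp down."""
    if t < ramp:
        return PEAK * t / ramp
    if t < total - ramp:
        return PEAK
    if t < total:
        return PEAK * (total - t) / ramp
    return 0.0

def triangle(t, total=1.0):
    """Triangular acceleration: peak reached only at the midpoint."""
    half = total / 2
    if t < half:
        return PEAK * t / half
    if t < total:
        return PEAK * (total - t) / half
    return 0.0

def predicted_rt(profile, total=1.0):
    """Time at which the filtered stimulus first crosses the threshold."""
    y, t = 0.0, 0.0
    while t < total:
        y += DT / TAU * (profile(t) - y)   # first-order filter update
        if y >= THRESHOLD:
            return t
        t += DT
    return None

rt_trap = predicted_rt(trapezoid)
rt_tri = predicted_rt(triangle)
print(rt_trap, rt_tri)  # the trapezoid, reaching peak sooner, is detected earlier
```

Because the trapezoidal profile reaches high acceleration early while the triangular profile builds up slowly, the same sensor dynamics yield a shorter predicted RT for the trapezoid, which is the qualitative pattern the model is tested against.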
When people observe uniform motion of a visual pattern that occupies the entire area of their visual field, they perceive illusory self-motion in the opposite direction to the visual stimulus. This phenomenon is called visually induced self-motion perception or vection
When we move about in the world, the characteristic pattern of motion generated on our retinae as a result of our self-motion is termed optic flow. When dynamic objects are present in the scene while we are moving, our perceptual system needs to disambiguate the pattern of
Seeing and Perceiving 24 (2011) 203–222, brill.nl/sp
Self-Motion Reproduction Can Be Affected by Associated Auditory Cues
Anna von Hopffgarten* and Frank Bremmer, Department of Neurophysics, Philipps-University Marburg, Karl-von-Frisch-Str. 8a, 35043 Marburg, Germany. Received 25 February 2011.
by the following: mismatches in VR displays between accommodation and vergence cues (e.g., Hoffman et al., 2008; Wann et al., 1995); latencies between self-motion and the response of the visual stimulus to these movements; spatial resolution issues; and mismatches between multimodal sensory
person inside a stationary train observes a train on an adjacent track beginning to move, they are likely to perceive that their own train is moving in the opposite direction, whereas observers standing outside on the platform looking at the train rarely experience any illusion of self-motion