
Vection Latency Is Reduced by Bone-Conducted Vibration and Noisy Galvanic Vestibular Stimulation

In: Multisensory Research
Authors:
Séamas Weech (Department of Psychology, Queen’s University, Kingston, ON, Canada)

and
Nikolaus F. Troje (Department of Psychology, Department of Biology, and School of Computing, Queen’s University, Kingston, ON, Canada)

Open Access

Studies of the illusory sense of self-motion elicited by a moving visual surround (‘vection’) have revealed key insights about how sensory information is integrated. Vection usually occurs after a delay of several seconds following visual motion onset, whereas self-motion in the natural environment is perceived immediately. It has been suggested that this latency relates to the sensory mismatch between visual and vestibular signals at motion onset. Here, we tested three techniques with the potential to reduce sensory mismatch and thereby shorten vection onset latency: noisy galvanic vestibular stimulation (GVS) and bone-conducted vibration (BCV) at the mastoid processes, and body vibration applied to the lower back. In Experiment 1, we examined vection latency for wide-field visual rotations about the roll axis and applied a burst of stimulation at the start of visual motion. Both GVS and BCV reduced vection latency by two seconds compared to the control condition, whereas body vibration had no effect on latency. In Experiment 2, the visual stimulus rotated about the pitch, roll, or yaw axis, and we found a similar facilitation of vection by both BCV and GVS in each case. In a control experiment, we confirmed that air-conducted sound administered through headphones was not sufficient to reduce vection onset latency. Together, the results suggest that noisy vestibular stimulation facilitates vection, likely due to an upweighting of visual information caused by a reduction in vestibular sensory reliability.

1. Introduction

Recent developments in graphics technologies permit real-time rendering of complex three-dimensional environments that approximate the natural world with a high degree of realism (see Scarfe and Glennerster, 2015, for a review). Virtual reality (VR) environments can be rendered to provide stereoscopic views with a veridical centre of projection at high refresh rates and with low motion-to-photon latency. These technological advancements continue at a rapid rate (e.g., Friston et al., 2016; Greer et al., 2016). Developments in this area are likely to be crucial for closing the gap between real and artificial conditions, a gap that can impair participants’ ability to gather information and perform naturally in the virtual world (Riecke, 2011; Slater, 2009). Researchers have taken advantage of this emerging technology in myriad studies of human perception-action coupling in VR that appear to generalize well to real-world situations (Jain and Backus, 2010; Linkenauger et al., 2015; Vignais et al., 2015; also see Hardiess et al., 2015, and Wilson and Soranzo, 2015, for reviews).

Although these studies shed new light on the process by which sensory information guides action, there are a number of physical differences, and resulting perceptual discrepancies, between the real world and the virtual worlds commonly used in studies of human perception and action. The physical differences between natural and VR conditions include the following (not a comprehensive list): mismatches in VR displays between accommodation and vergence cues (e.g., Hoffman et al., 2008; Wann et al., 1995); latencies between self-motion and the response of the visual stimulus to those movements; limited spatial resolution; and mismatches between multimodal sensory cues. As a result, perception in VR differs in many ways from perception of the real world. Examples include: consistent underestimation of egocentric distances in VR settings (Loomis and Knapp, 2003; Willemsen and Gooch, 2002); simulator sickness produced by VR tasks that are not nauseogenic in the real world (Sharples et al., 2008); the deforming effect of display latency on the visual 3D environment (Deering, 1992); and differences between self-motion perception in the virtual world and in the natural environment (McCauley and Sharkey, 1992). Linking the various physical discrepancies between VR and real-world settings to the perceptual or physiological differences observed between these conditions has been the focus of much attention. A number of hypotheses exist, for example, that link sensory mismatch in VR with self-motion perception and simulator sickness (for a review see Shupak and Gordon, 2006).

Given the growth and future potential of VR for studying naturalistic human behaviour, the characteristics of self-motion perception in VR are of particular importance. In the natural environment, active or passive movement of the body through space results in immediate perception of body motion (Dichgans and Brandt, 1978). The most frequently studied case involves visually evoked illusions of self-motion, known as ‘vection’, the study of which emerged from a rich history of research demonstrating the robust link between visual flow and the control of bodily posture. Seminal work by Gibson (1950, 1966) outlined the basis for the specification of body movement through optic flow, and a series of contemporaries went on to provide compelling examples of the laws that Gibson described. These include the elegant demonstration by Lishman and Lee (1973) that a room swaying around a stationary observer could give rise to illusory self-motion and coherent postural responses (also see Lee and Aronson, 1974; Lee and Lishman, 1975). Vection was also documented by Johansson (1977), who called the phenomenon the ‘elevator illusion’ because illusory upward self-motion was evoked by downward optic flow. A similar dependency between visual motion and perceived body movement was shown by Warren (1976), who used optic flow consisting of a simple dot display to show that observers perceived vection that felt similar to real movement.

Real self-motion is associated with cues from senses other than vision, including auditory, haptic, or proprioceptive signals. Self-motion illusions are facilitated by auditory (Väljamäe, 2009) and haptic movement cues (Riecke et al., 2008). Entirely non-visual illusions of self-motion have been demonstrated, including so-called ‘auditory vection’ produced by auditory cues that imply self-motion (Lackner, 1977; Väljamäe et al., 2004). Body stimulation in blindfolded participants is also sufficient to produce self-motion illusions, termed ‘haptokinetic vection’ for tactile stimulation that implies motion (informal observations were recorded by Dichgans and Brandt, 1978) and ‘arthrokinetic vection’ for tonic limb rotation (Brandt et al., 1977). Even ‘vestibular vection’ has been identified in cases where the vestibular organs are stimulated using a caloric method (Fasold et al., 2002).

The immediacy of self-motion perception during real motion is not observed in the case of vection. Latencies between the onset of visual motion and the establishment of a sense of self-motion typically range between one and ten seconds, depending on how the visual stimulus is rendered and presented. For example, vection tends to occur faster and feel stronger for roll rotations than for pitch rotations, and is experienced more quickly for pitch rotations than for yaw rotations, although this pattern can differ depending on the mode of presentation (Tanahashi et al., 2012; Ujike et al., 2004). The latency between visual motion onset and the impression of self-motion has the potential to drastically alter the way in which participants act in tasks involving VR.

Researchers have identified a potential cause of vection onset latency in the mismatch between visual and non-visual sensory cues at the onset of the motion stimulus (e.g., Flanagan et al., 2004; Israël and Warren, 2005; Wong and Frost, 1978, 1981). Perceiving the degree to which one’s own body is moving relies mainly on an integration of visual and vestibular information (Angelaki et al., 2011; Israël and Warren, 2005). A variety of human psychophysics studies and macaque imaging studies, as summarized by Greenlee and colleagues (2016), show that self-motion perception is likely served by interconnected populations of neurons that respond to both visual and vestibular cues in particular.

Behavioural studies often probe this self-motion network by inducing vection in observers. The classic conditions required to induce vection involve the presentation of a large-field visual stimulus that specifies the direction and magnitude of virtual self-motion through optic flow. Given that the participant is stationary in space (often seated in a chair in front of a screen or computer monitor), the vestibular organs do not receive the corroborating activation that would arise if the participant actually started to move. Sensory mismatch occurs at visual motion onset, when the visual stimulus suggests acceleration into self-motion but no acceleration is detected by the vestibular organs (when the visual stimulus moves at constant velocity, no mismatch occurs). Eventually, the observer acquires sufficient evidence from the visual cues that they are likely to be in motion, and the feeling of vection takes hold (Israël and Warren, 2005). The relation between sensory mismatch and vection is supported by research showing that physical rotation of an observer at visual motion onset decreases vection onset latency, although this effect was produced only when visual and vestibular cues were coherent in direction (Brandt et al., 1974; Riecke et al., 2006; Wong and Frost, 1981). Schulte-Pelkum (2007) confirmed the results of Wong and Frost (1981), showing that a vestibular ‘kick’ delivered through body rotation caused a large reduction in latency for linear vection in a virtual environment (VE). On the other hand, conflicting visual and vestibular cues seem to suppress rather than enhance vection (Ash and Palmisano, 2012; Lackner and Teixeira, 1977; Young et al., 1973). These results support the idea that vection onset latency reflects the time required by the nervous system to acquire sufficient visual self-motion information to disregard the sensory mismatch.
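
One way to make this evidence-accumulation account concrete is a toy drift-diffusion-style simulation, in which visual self-motion evidence builds toward a decision threshold and the time to threshold stands in for vection onset latency. The Python sketch below is purely illustrative; the drift, noise, and threshold values are hypothetical choices of ours, not parameters from this study or the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

def vection_latency(drift, threshold=30.0, noise=1.0, dt=0.01, t_max=20.0):
    """Time (s) for noisy visual self-motion evidence to reach a decision
    threshold; returns t_max if the threshold is never crossed."""
    evidence, t = 0.0, 0.0
    while t < t_max:
        # Accumulate a drift term plus Gaussian noise scaled for the time step.
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        if evidence >= threshold:
            return t
        t += dt
    return t_max

# A larger effective visual drift (e.g., after vision is upweighted)
# crosses the threshold sooner, i.e., a shorter simulated onset latency.
for drift in (3.0, 6.0):
    mean_lat = np.mean([vection_latency(drift) for _ in range(200)])
    print(f"drift = {drift:.1f} -> mean latency ~ {mean_lat:.1f} s")
```

With these arbitrary settings the mean latency is roughly threshold/drift (about 10 s versus 5 s), which falls within the one-to-ten-second range typical of vection onset.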

The idea that visual-vestibular mismatch underlies vection onset latency gained further support from studies of patients with vestibular dysfunction. Patients with low vestibular sensitivity to a specific direction of head rotation show decreased vection latency for that direction compared to other directions (Wong and Frost, 1981). A similar negative relationship between vestibular threshold and vection onset latency was identified in a healthy population (Lepecq et al., 1999). This finding highlights the importance of cue uncertainty in guiding the decision about whether the body has moved. Participants with vestibular deficits appear to rely strongly upon visual motion signals to decide whether the body is likely to be in motion, given that they cannot rely on information provided by the vestibular sense.
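
One common way to formalize this role of cue uncertainty is maximum-likelihood cue combination, in which each cue is weighted by its reliability (the inverse of its variance): w_vis = (1/σ²_vis) / (1/σ²_vis + 1/σ²_vest). On that account, degrading the vestibular signal shifts weight toward vision. The short Python sketch below works through the arithmetic under hypothetical noise values; it illustrates only the direction of the effect, not the authors’ model.

```python
def visual_weight(sigma_vis, sigma_vest):
    """Inverse-variance (maximum-likelihood) weight on the visual cue:
    w_vis = (1/sigma_vis**2) / (1/sigma_vis**2 + 1/sigma_vest**2)."""
    r_vis = 1.0 / sigma_vis**2    # visual reliability
    r_vest = 1.0 / sigma_vest**2  # vestibular reliability
    return r_vis / (r_vis + r_vest)

sigma_vis = 1.0  # visual noise held fixed (arbitrary units)
# Assumption: noisy GVS/BCV adds vestibular noise, raising sigma_vest.
for sigma_vest in (1.0, 2.0, 4.0):
    w = visual_weight(sigma_vis, sigma_vest)
    print(f"vestibular sigma = {sigma_vest:.1f} -> visual weight = {w:.2f}")
# Prints weights 0.50, 0.80, 0.94: more vestibular noise, more trust in vision.
```

Under this reading, noisy GVS or BCV would raise σ_vest, increase the weight on the visual motion signal, and thereby shorten the time needed to accumulate enough visual evidence for vection.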

Other research has shown that galvanic vestibular stimulation (GVS) can strongly influence the strength of vection experienced by VR users. This technique involves applying a small direct current to stimulate the vestibular system, normally via electrodes placed at the mastoid processes (Curthoys and MacDougall, 2012; Day and Fitzpatrick, 2005; Swaak and Oosterveld, 1975). Cress and colleagues (1997) showed that GVS can increase vection magnitude if applied during observation of a vection-inducing visual stimulus. In addition, Lepecq and colleagues (2006) provided evidence that directional GVS can modify the perceived direction of illusory self-motion. This research is in line with the studies discussed above showing facilitation of vection when visually consistent body rotation is applied at the onset of visual motion (Brandt et al., 1974; Riecke et al., 2006; Wong and Frost, 1981).

The research discussed above shows that vection latency is shorter when the body is physically moved to corroborate visual motion cues. Likewise, GVS has been shown to influence vection when the stimulation is congruent with the visual motion. On the other hand, several studies have indicated that in some cases sensory mismatch is irrelevant to the experience of vection, and may even enhance the vection sensation (Ash and Palmisano, 2012; Palmisano and Keane, 2004; Palmisano and Kim, 2009; Palmisano et al., 2000, 2011). For example, introducing visually simulated viewpoint jitter into a visual stimulus can enhance vection despite introducing a significant degree of visual-vestibular conflict. These findings are intriguing because they appear to contradict much of the literature cited here, and the effects have been shown to be robust to changes in experimental methodology and instruction (Palmisano and Chan, 2004). It is possible that viewpoint jitter increases the compelling nature of vection by adding ecological validity to the visual signals (see Palmisano et al., 2011, for a discussion). However, viewpoint jitter that does not cause a sensory conflict is associated with stronger vection (Ash and Palmisano, 2012).