Rotating One’s Head Modulates the Perceived Velocity of Motion Aftereffect

In: Multisensory Research

Abstract

As a prominent illusion, the motion aftereffect (MAE) has traditionally been considered a visual phenomenon. Recent neuroimaging work has revealed increased activities in MT+ and decreased activities in vestibular regions during the MAE, supporting the notion of visual–vestibular interaction on the MAE. Since the head had to remain stationary in fMRI experiments, vestibular self-motion signals were absent in those studies. Accordingly, more direct evidence is still lacking in terms of whether and how vestibular signals modulate the MAE. By developing a virtual reality approach, the present study for the first time demonstrates that horizontal head rotation affects the perceived velocity of the MAE. We found that the MAE was predominantly perceived as moving faster when its direction was opposite to the direction of head rotation than when its direction was the same as head rotation. The magnitude of this effect was positively correlated with the velocity of head rotation. Similar result patterns were not observed for the real motion stimuli. Our findings support a ‘cross-modal bias’ hypothesis that after living in a multisensory environment long-term the brain develops a strong association between signals from the visual and vestibular pathways. Consequently, weak biasing visual signals in the associated direction can spontaneously emerge with the input of vestibular signals in the multisensory brain areas, substantially modulating the illusory visual motion represented in those areas as well. The hypothesis can also be used to explain other multisensory integration phenomena.

1. Introduction

The motion aftereffect (MAE) is a well-known visual aftereffect in which exposure to motion in one direction causes illusory motion of a static pattern in the opposite direction (Anstis et al., 1998). It was first reported by Aristotle (about 330 BC), and is also known as the waterfall illusion (Addams, 1834).

As a visual illusion, the MAE has traditionally been thought to result from an imbalance in responsiveness of oppositely tuned motion detectors (Barlow and Hill, 1963; Huk et al., 2001), and to be selective in retinotopic coordinates (Knapen et al., 2009; Wenderoth and Wiese, 2008). However, recent studies argue that the MAE can also be anchored in spatiotopic (Mikellidou et al., 2017; Turi and Burr, 2012) and hand-centered coordinates (Matsumiya and Shioiri, 2014). In the spatiotopic MAE (aka ‘positional MAE’ or ‘PMAE’), adaptation to motion within a window produced an aftereffect at the adapting spatial location after the subjects had made a saccade to a new fixation point. That is, the adapter and test were presented at the same spatial but different retinal locations. The hand-centered MAE was found when the positions of adapter and test were rendered the same relative to a seen hand but non-overlapping on the retina. The finding of a hand-centered MAE implies that visual–proprioceptive integration (Graziano, 1999) might modulate the MAE, and that the MAE should be more than a pure visual phenomenon. Thus, an intriguing question is whether the MAE can also be modulated by interactions between visual signals and other, non-visual sensory signals, such as vestibular signals.

In the past decades, a number of neuroimaging studies have investigated the neural mechanisms underlying the MAE. Several pioneering studies highlighted the role of the human MT+ area in representing the MAE (Culham et al., 1999; He et al., 1998; Taylor et al., 2000; Théoret et al., 2002; Tootell et al., 1995). Although Huk and colleagues argued that MT+ activity during the MAE reflects attention rather than the MAE itself (Huk et al., 2001), Castelo-Branco et al.’s work shows the same activation when attention is not focused on a motion feature (Castelo-Branco et al., 2009). Moreover, Seiffert et al. found that the magnitude of the first-order MAE increased from early to later visual areas, especially MT+, yet a similar pattern was not observed for real motion (Seiffert et al., 2003). Using multivariate pattern classification, Hogendoorn and Verstraten further reported that in area MT+ the MAE is encoded differently from real motion in the same perceived direction (Hogendoorn and Verstraten, 2013). Both of the latter works indicate that area MT+ is somehow unique in representing the MAE (Hogendoorn and Verstraten, 2013; Seiffert et al., 2003).

Area MT+ comprises the middle temporal (MT) area plus other adjacent motion-sensitive areas, including the medial superior temporal (MST) area (Zeki et al., 1991). Area MST receives strong projections from the MT area, which encodes basic motion information (Maunsell and van Essen, 1983). Moreover, visual and vestibular heading signals converge in the dorsal medial superior temporal area (MSTd), an area thought to be involved in self-motion perception (Britten and van Wezel, 1998; Gu et al., 2008; Page and Duffy, 2003). Therefore, the activation of MT+ during the MAE potentially links the MAE with visual–vestibular interaction (possibly in MSTd). Supporting this notion, recent fMRI work reports two important findings: during the MAE, only MST (not MT/V5) is activated, and the vestibular core region OP2 (the human homologue of the macaque PIVC) is deactivated (Rühl et al., 2018). All these findings hint that visual–vestibular interactions are related to the MAE.

Figure 1.

Illustration of the apparatus and stimuli. We developed a virtual reality system (a). The visual stimuli were presented on the goggles’ screens. In the head movement condition, subjects rotated their heads back and forth in the horizontal plane, with the head movement data tracked in real time by a three-space sensor. The graphs in (b) show the stimuli in Experiment 1a. The red and blue arrows indicate the directions of real motion and motion aftereffect, respectively; the arrows themselves were not actually displayed. The head remained still during the initial (30 s) and top-up (10 s each) adaptation phases. Between every two successive adaptation phases was a test phase in which the subject made a single head rotation from one side to the other. Subjects indicated which test grating appeared to move faster at the end of each head rotation. The graphs in (c) show the stimuli in Experiment 1b. On each trial, subjects rotated their heads to one side. At the end of each head rotation, subjects indicated which of the two gratings appeared to move faster. Thereafter, a white noise image flickered to eliminate any residual aftereffect.

Citation: Multisensory Research 33, 2 (2020) ; 10.1163/22134808-20191477

However, it should be noted that Rühl et al.’s (2018) report, like other related neuroimaging literature, can serve only as indirect evidence for the notion of visual–vestibular interactions during the MAE. The major reason is that the subject’s head has to remain stationary in an fMRI experiment. Thus, no vestibular self-motion input signals emerge during the perception of the MAE, leaving the existence of visual–vestibular interactions a pure speculation. Ideally, a more direct test of vestibular modulation of the MAE should involve head movements during the experiment, because this manipulation allows an empirical observation of how vestibular signals affect the MAE. Unfortunately, head movements are usually not permitted in an fMRI experiment. Taking advantage of the slow dynamics of the blood-oxygen-level-dependent (BOLD) signal, Schindler and Bartels (2018) introduced a new method to study the influence of head movements on the processing of real visual motion stimuli. In each trial phase, the subject rotated the head twice away from a center position and back, after which the head was rapidly stabilized by inflatable air cushions. An immediately subsequent acquisition phase was used to measure the delayed hemodynamic responses to the real visual motion stimuli presented during the voluntary head rotations in the trial phase. However, the characteristics of the MAE and the slow timing of their method pose major challenges for investigating the MAE during head movements. For example, the inter-session and inter-individual variations in MAE duration make it difficult to set an appropriate length for the fMRI acquisition phase precisely before scanning starts. Another nuisance is that the relatively long trial phase (e.g., 10 s in their work) leads to de-adaptation, making a top-up adaptation paradigm a necessity. Unfortunately, this would contaminate the BOLD signals from the MAE with those from the top-up adapters. Thus, unless a more revolutionary neuroimaging technique is developed, psychophysical approaches remain preferable for directly examining whether vestibular signals can modulate the MAE.

Accordingly, the present study adopted a recently developed virtual-reality method (Bai et al., 2019). Visual stimuli were presented on a head-mounted display while the subjects rotated their heads back and forth (Fig. 1a). This new method allowed us to measure how head rotation in the horizontal plane affected the perceived velocity of the MAE. The perceived velocity has been considered to reflect the magnitude or strength of the MAE; thus, one neutral test method for measuring the strength of an MAE is a matching procedure used primarily to estimate its speed (Pantle, 1998). If vestibular signals can modulate the MAE, one may expect a different pattern of perceived MAE velocity when the head is rotating than when the head is stationary. Recent work has shown that the perceived velocity of real motion can be affected by self-motion (Hogendoorn et al., 2017a, b). Since the MAE to some extent resembles slow real motion in appearance, we also examined the influences of head rotation on the perceived velocity of real motion, and compared those results with the observations for the MAE.

2. Material and Methods

Experimental procedures in all the experiments of the present study were approved by the Institutional Review Board of the Institute of Psychology, Chinese Academy of Sciences. The work has been carried out in accordance with The Code of Ethics of the World Medical Association (Declaration of Helsinki) for experiments involving humans. Informed consent was obtained from all subjects. All subjects had normal or corrected-to-normal vision.

Stimuli were presented to the subjects on Sony HMZ-T3 (Sony Corp., Tokyo, Japan) head-mounted goggles (49.4° × 27.8° visual angle, 1280 × 720 pixel resolution at 60 Hz) connected to a Dell XPS 8700 (Dell, Round Rock, TX, USA) computer, and programmed in Matlab (The MathWorks, Natick, MA) and Psychtoolbox (Brainard, 1997). A three-space sensor (TSS-WL Sensor, YEI Technology, Portsmouth, OH, USA), which was used to record the subject’s head movement data in real time, was attached on top of the helmet of the goggles (Fig. 1a). Communication with the three-space sensor was realized through a customized computer program that was developed previously. Visual stimuli presented to the subjects were also displayed on an LCD monitor by which the experimenter could see what the subjects were viewing.
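The paper does not describe the customized program that communicated with the three-space sensor. As a minimal illustration (in Python rather than the authors’ Matlab, and assuming only that the sensor reports head orientation as a standard unit quaternion), the horizontal head angle needed for this paradigm can be extracted from such a quaternion as follows:

```python
import math

def yaw_from_quaternion(x, y, z, w):
    """Extract the yaw (horizontal head rotation) angle, in degrees,
    from a unit orientation quaternion (standard convention with the
    vertical axis as z)."""
    # yaw is the rotation about the vertical axis
    return math.degrees(math.atan2(2.0 * (w * z + x * y),
                                   1.0 - 2.0 * (y * y + z * z)))

# a pure rotation about the vertical axis by 30 degrees
theta = math.radians(30.0)
q = (0.0, 0.0, math.sin(theta / 2.0), math.cos(theta / 2.0))
print(round(yaw_from_quaternion(*q), 6))  # → 30.0
```

Polling this angle every frame would give the yaw trace used both for the matched-velocity condition and for the head-rotation statistics reported in the Results.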

2.1. Experiment 1

2.1.1. Participants

Twenty normal adults (eleven females, nine males, age range 19–24 years) participated in Experiments 1a and 1b. Ten (six females, four males, age range 18–24 years) were tested for the matched condition of Experiment 1b, eight of whom also participated in Experiment 1a and other conditions of 1b.

2.1.2. Experiment 1a: MAE

In this experiment, subjects simultaneously adapted to one grating drifting leftward and another drifting rightward while their heads remained stationary. A top-up adaptation paradigm (i.e., each test phase was followed by a re-adaptation phase) was used to avoid fast dissipation of the MAE. During each test phase, subjects made a single head rotation in the horizontal plane; meanwhile they were presented with static test gratings which, owing to the MAE, appeared to move in the direction opposite to the adapting gratings. Therefore, the direction of head rotation coincided with the direction of illusory motion of one test grating and was opposite to that of the other. At the end of each head rotation, the subjects were required to make a binary judgment as to which of the two gratings moved faster.

In everyday life, a head turn usually results in retinal motion in the opposite direction. Therefore, in the present study the MAE or real motion of a grating was referred to as ‘congruent’ if its drifting direction was the same as the direction of head-rotation-induced retinal motion in everyday life. For example, when the head is rotating to the left, a grating drifting rightward is considered ‘congruent’ (defined for both physical and illusory motion), and the one drifting leftward is called ‘incongruent’. If head rotation modulated the perceived velocity of MAE, its influence would likely be direction-specific. Thus we expected the subjects to report one particular test grating moving faster than the other one in a major proportion of trials.
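The congruency rule above can be expressed as a small predicate. This Python sketch (illustrative only, not the authors’ code) encodes the everyday relation that a head turn induces retinal motion in the opposite direction:

```python
def is_congruent(grating_direction, head_rotation_direction):
    """A grating is 'congruent' if it drifts in the direction of the
    retinal motion that such a head turn would normally induce in
    everyday life, i.e., opposite to the head rotation direction."""
    induced_retinal = {'left': 'right', 'right': 'left'}[head_rotation_direction]
    return grating_direction == induced_retinal

print(is_congruent('right', 'left'))  # → True  (head left, grating right)
print(is_congruent('left', 'left'))   # → False (incongruent)
```

The same rule applies to both physical and illusory (MAE) motion, as stated in the text.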

2.1.2.1. Stimuli and Procedure.

A black fixation point (0.46° in diameter) was presented at the center of a mid-gray background. Subjects were told to maintain good fixation throughout an experimental session. During adaptation, subjects were presented with two full-contrast vertical gratings in the upper and lower visual fields (Fig. 1b). One grating was placed 0.39° above the fixation point, and the other 0.39° below it. Both gratings subtended 25° (horizontal) by 7° (vertical), with a spatial frequency of 0.13 cycle/°. They drifted at 11.59°/s in opposite directions. The grating in the upper visual field always drifted leftward, whereas the one in the lower visual field drifted rightward.
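As a rough illustration of this stimulus (a Python sketch rather than the authors’ Matlab/Psychtoolbox code; the pixel count of 648 is an assumption derived from the display’s roughly 26 pixels per degree), one horizontal row of such a full-contrast vertical sine grating could be generated as:

```python
import math

def grating_row(width_deg=25.0, cycles_per_deg=0.13, n_pixels=648, phase=0.0):
    """One row of a vertical sine grating: luminance values in [0, 1]
    around the mid-gray background level (0.5), at full contrast.
    0.13 cycle/deg * 25 deg gives 3.25 cycles across the grating."""
    deg_per_pixel = width_deg / n_pixels
    return [0.5 + 0.5 * math.sin(2.0 * math.pi *
            (cycles_per_deg * i * deg_per_pixel + phase))
            for i in range(n_pixels)]

row = grating_row()
print(len(row), row[0])  # → 648 0.5
```

Drifting at 11.59°/s would correspond to advancing `phase` by 0.13 × 11.59 ≈ 1.51 cycles per second, frame by frame.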

Figure 2.

The mean velocity and angle of head rotation across the subjects in each experiment and condition (a). Error bars represent standard errors of means (SEM). The graphs in (b) show the velocity profile of the subjects’ head rotation in each experiment. Solid lines indicate the grand average value, and the shaded areas indicate 1 SEM. ‘Fst’, ‘Mdm’, ‘Slw’, ‘Mch’, ‘Vol’, and ‘Pas’ denote the conditions of fast real motion, medium-speed real motion, slow real motion, real motion matching the head rotation velocity, voluntary self-motion, and passive self-motion, respectively.


A top-up paradigm was adopted for visual motion adaptation. Specifically, the participants first adapted to the drifting gratings for 30 s (i.e., the initial adaptation phase) before the first test probe appeared. In each subsequent trial, the adapting gratings (i.e., the top-up adaptation) were displayed for 10 s, followed by a test phase. The stimuli in the test phases were the same as in the adaptation phases except that both gratings were stationary and the fixation point was changed from black to red.

There were two experimental conditions: head movement and head still. In the head movement condition (Fig. 1b), subjects’ initial head (yaw) positions were always on the rightmost side. After they pressed the space bar, the adapting gratings were presented for 30 s, during which subjects adapted to the drifting gratings while keeping their heads stationary. Immediately after this initial adaptation phase, the fixation point became red. This marked the start of the first test phase, in which the two gratings (i.e., the test probe) became physically stationary. The subjects were required to immediately make a head (yaw) rotation at a subjectively constant speed. Note that ‘subjectively constant speed’ was part of the instruction, intended to encourage the subjects to keep approximately the same average head-rotation speed across trials. Within each trial, however, the actual speed of head rotation was not constant; the grand average speed profile is shown in Fig. 2b. Because the adapting motion’s direction was leftward (rightward) in the upper (lower) visual field, the MAE was rightward (leftward) in the upper (lower) visual field. The duration of the test phase depended on the head movement. Once their heads had been rotated to the leftmost side, subjects were required to report which of the two test gratings appeared to move faster by pressing the UpArrow or DownArrow key. This keypress terminated the test phase by changing the fixation point back to black, and started the next top-up adaptation phase by resuming the physical drifts of the two gratings. Subjects adapted to the drifting gratings for 10 more seconds while keeping their heads still. Afterwards, the fixation point turned red again. Subjects then rotated their heads in approximately the same way to the rightmost side, and were presented with the test probe.
Once their heads reached the rightmost side, they made a response, which again triggered the next top-up adaptation phase. They repeated this top-up procedure until they finished a total of 30 trials per session.
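The alternation of phases described above can be sketched as a simple schedule (Python for illustration; the authors’ Matlab implementation is not given). Test phases end on the subject’s keypress, so their duration is left open:

```python
def topup_schedule(n_trials=30, initial_adapt_s=30, topup_s=10):
    """Phase list for one Experiment 1a session: a 30-s initial
    adaptation, then alternating test phases (ended by the subject's
    response) and 10-s top-up re-adaptation phases between tests."""
    phases = [('adapt', initial_adapt_s)]
    for trial in range(n_trials):
        phases.append(('test', None))          # head rotation + speed judgment
        if trial < n_trials - 1:
            phases.append(('adapt', topup_s))  # top-up re-adaptation
    return phases

schedule = topup_schedule()
print(len(schedule))  # → 60 (1 initial adapt + 30 tests + 29 top-ups)
```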

In the head-still condition, the stimuli and task were the same as in the head-movement condition, except that subjects kept their heads still all the time. Each subject completed two head-movement sessions and two head-still sessions, with the session sequence counter-balanced both within and between the subjects.

2.1.2.2. Analysis.

For the head-movement condition, we computed the proportion of trials in which the congruent grating was perceived as moving faster than the incongruent grating. This index was referred to as ‘percent congruent’. For the head-still condition, there was no congruent direction because there was no head rotation during the test phases. Therefore, we computed the proportion of trials in which the leftward- or rightward-drifting grating was perceived as moving faster, respectively. The latter proportion (i.e., rightward reported faster) was statistically compared to the chance level (50%). Before the statistical comparison, we conducted a Shapiro–Wilk test to examine the normality of the data, as was also done for the other conditions and experiments. If normality could not be confirmed, a Wilcoxon signed-rank test (two-tailed) would be used for the statistical comparison and the effect size (r) would be assessed. If the comparison between the ‘percent rightward’ and the chance level did not show a significant difference, the chance level would serve as baseline for simplicity, and the percent congruent values in the head-movement condition would be compared to the chance level.
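A minimal sketch of this bookkeeping (in Python, not the authors’ analysis code): given each trial’s head-rotation direction and the grating the subject reported as faster, the ‘percent congruent’ index is:

```python
def percent_congruent(trials):
    """trials: list of (head_direction, reported_faster_direction) pairs.
    A report counts as 'congruent' when the grating judged faster drifts
    opposite to the head rotation, i.e., in the everyday
    head-rotation-induced retinal motion direction."""
    opposite = {'left': 'right', 'right': 'left'}
    hits = sum(1 for head, reported in trials if reported == opposite[head])
    return 100.0 * hits / len(trials)

demo = [('left', 'right'), ('left', 'left'), ('right', 'left'), ('right', 'left')]
print(percent_congruent(demo))  # → 75.0
```

Per-subject values produced this way would then be compared against the 50% chance level (the paper uses two-tailed Wilcoxon signed-rank tests for this comparison).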

2.1.3. Experiment 1b: Real Motion

If we observed a direction-specific effect of head rotation on the perceived velocity of the MAE, one might ask whether a similar modulation could be found for real motion. Thus, Experiment 1b examined the influences of head rotation on the perceived velocity of real motion. No adaptation phase was involved in this experiment (Fig. 1c). Subjects made head rotations back and forth. During each single head rotation, they were presented with two horizontally drifting gratings that moved in opposite directions. As in Experiment 1a, we required the subjects to make a binary judgment as to which of the two gratings moved faster. If head rotation also modulated the perceived velocity of real motion in a direction-specific manner, we would expect the subjects to report one particular test grating moving faster than the other in a major proportion of trials. We also examined whether the effects of head rotation showed a similar pattern for real motion at different velocities.

2.1.3.1. Stimuli and Procedure.

Four velocity conditions were tested: 11.59°/s (fast), 2.32°/s (medium), 0.58°/s (slow), and the same real-time velocity as the head rotation (matched). The fast, medium and slow velocities were determined arbitrarily during a pilot experiment and covered a reasonably wide range, from a velocity approximating the perceived velocity of the MAE (based on one author’s experience) to the velocity of the rapidly moving adapters in Experiment 1a. The matched condition was included because we were interested in whether head movements had any special influence on real motion that had the same real-time velocity as the head rotation. The stimulus parameters were similar to those in Experiment 1a. Each session included 30 trials. Subjects were told to maintain good central fixation throughout a session.

In the head-movement condition, the head position was always at the rightmost side at the beginning of a session. Subjects pressed the space bar to start a trial. Immediately after the keypress, the two gratings started to drift horizontally in opposite directions, and the subjects rotated their heads to the left. The drifting directions of the gratings were fixed within a trial but pseudo-randomized across trials. Once their heads reached the leftmost side, subjects pressed the UpArrow or DownArrow key to report which of the two gratings appeared to move faster. After the keypress, the gratings disappeared, and the display was replaced with a whole-screen white noise image counter-phase flickering at 10 Hz for 5 s to avoid any residual MAE between successive trials. Thereafter, the subjects pressed the space bar to start the second trial. Meanwhile, they rotated their heads from the leftmost side to the right. This procedure was repeated for 30 trials.

In the head-still condition, the stimuli and task were the same as in the head-movement condition, except that subjects kept their heads still all the time. Each subject completed two head-movement sessions and two head-still sessions, with the session sequence counter-balanced both within and between subjects.

The stimulus parameters and procedure for the matched condition were identical to those of the other velocity conditions except for the following. The drifting velocity of the gratings was identical to that of the head rotation in real time. In the head-still condition, the drifting speed of the gratings was the average velocity of head rotation in the preceding head-movement session. Each subject completed five sessions, including three head-movement sessions and two head-still sessions. The session order was HM–HS–(HM)–HS–HM for half the subjects and (HM)–HS–HM–HM–HS for the rest, where HM and HS abbreviate ‘head movement’ and ‘head still’. The HM sessions in parentheses were only used to provide the average head-rotation velocity for their following HS sessions.
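The average head-rotation velocity used to set the drift speed in the head-still matched sessions could be computed along these lines (an illustrative Python sketch; the sample times and the exact averaging method are assumptions, as the paper does not specify them):

```python
def mean_head_speed(yaw_samples_deg, sample_times_s):
    """Average absolute angular velocity (deg/s) of a head-rotation
    yaw trace, e.g., to set the grating drift speed in a following
    head-still session of the matched condition."""
    samples = list(zip(yaw_samples_deg, sample_times_s))
    speeds = [abs((y1 - y0) / (t1 - t0))
              for (y0, t0), (y1, t1) in zip(samples, samples[1:])]
    return sum(speeds) / len(speeds)

# head sweeping from 0 to 60 deg in 2 s at constant speed
print(mean_head_speed([0, 15, 30, 45, 60], [0.0, 0.5, 1.0, 1.5, 2.0]))  # → 30.0
```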

The same analysis was performed as in Experiment 1a.

2.2. Experiment 2

One concern in Experiment 1a was whether the subjects had a response bias during the head rotation. To rule out this alternative explanation, we conducted Experiment 2 where in most trials the test gratings were physically static but the subjects were instructed that the gratings were moving in the same way as in Experiment 1a but extremely slowly. The response bias explanation would predict the same result pattern in this experiment as in Experiment 1a.

2.2.1. Participants

The same group of participants in Experiment 1a participated in Experiment 2.

2.2.1.1. Stimuli and Procedure.

The stimuli and procedure were the same as in Experiment 1b except the following. Each session included 35 trials. In 5 of these trials, the gratings drifted at 0.58°/s. In the other 30 trials, the gratings were in fact stationary, though the subjects were told that in all the trials the grating in the upper visual field drifted extremely slowly to the right, and the grating in the lower visual field drifted extremely slowly to the left. The 5 trials with slow physical motion were used to encourage the subjects to believe the instruction and to engage sufficient attention during the experiment.
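The mixture of static trials and slow catch trials described above can be illustrated with a shuffled trial list (a Python sketch, not the authors’ code; the seeding is an assumption added for reproducibility):

```python
import random

def make_exp2_trials(n_static=30, n_catch=5, seed=None):
    """Trial list for one Experiment 2 session: 30 physically static
    trials plus 5 catch trials with slow (0.58 deg/s) real motion,
    shuffled so the catch trials are unpredictable to the subject."""
    trials = ['static'] * n_static + ['slow'] * n_catch
    rng = random.Random(seed)
    rng.shuffle(trials)
    return trials

session = make_exp2_trials(seed=1)
print(len(session), session.count('slow'))  # → 35 5
```

Only the 30 ‘static’ trials would enter the analysis, as stated below.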

2.2.1.2. Analysis.

The analysis was the same as in Experiment 1a. However, we only analyzed the data for the 30 trials where the gratings were physically static.

2.3. Experiment 3

If voluntary head rotation gave rise to a direction-specific effect on the perceived velocity of the MAE and real motion, was it due to an efference copy signal generated in motor programming (Sperry, 1950; von Holst and Mittelstaedt, 1950)? To address this question, Experiment 3 replicated all the sub-experiments of Experiment 1 and included both voluntary and passive head movement conditions.

In the voluntary condition, subjects sat in a swivel chair and used their feet and legs to rotate the swivel chair back and forth. In this way, their heads rotated in space but kept still relative to their bodies. The voluntary condition was slightly different from the head movement condition in Experiment 1, making it easier to directly compare the voluntary and passive conditions. In the passive condition, the subjects sat still in the swivel chair while an experimenter rotated the chair back and forth. In other words, the subjects’ heads rotated in space passively (i.e., without voluntary motor actions). As the subjects were not allowed to move their feet away from the ground (for a better control of rotation), the angular range of head rotations in space was a bit narrower than that in Experiment 1.

Because the percent congruent did not show a significant difference from the chance level in the head-still condition in Experiment 1 (see the Results section), we did not include a head-still condition in Experiment 3 (otherwise the subjects would be too tired).

2.3.1. Participants

Twenty normal adults (ten females, ten males, age range 18–24 years) participated in Experiment 3. Eight of them had participated in Experiment 1a and the three constant-velocity conditions of Experiment 1b. Ten of them had been tested in the matched condition of Experiment 1b.

2.3.2. Experiment 3a: MAE

2.3.2.1. Analysis.

The chance level served as baseline. The percent congruent values in both the voluntary and passive conditions were compared to the chance level with a Wilcoxon signed-rank test (two-tailed).

Since the voluntary condition in Experiment 3 largely resembled the head movement condition in Experiment 1, we performed a replication Bayes factor analysis (Verhagen and Wagenmakers, 2014) using the ReplicationBF package in RStudio (Harms, 2018; RStudio Team, 2016) to verify whether the findings for the head-movement condition in Experiment 1 were well replicated in Experiment 3.

2.3.3. Experiment 3b: Real Motion

2.3.3.1. Stimuli and Procedure.

Except for the voluntary and passive self-motion conditions, the parameters of the stimuli and procedure were identical to those in Experiment 1b. The same analysis was performed as in Experiment 3a.

3. Results

3.1. Normality Check

We tested the normality of the data using Shapiro–Wilk tests. The results are listed in Table 1. In quite a few conditions, normality was not confirmed (p < 0.05). Therefore, we used Wilcoxon signed-rank tests (one-sample or paired) for statistical comparisons in all the experiments, and calculated Spearman’s rank correlation coefficient (Spearman’s ρ). The t-test and Pearson’s correlation results are reported in the Supplementary Materials.

Table 1.

The results (p-values) of Shapiro–Wilk tests


3.2. Velocity and Angle Range of Head Rotation

The grand average velocity and angular range of head rotation in space are plotted in Fig. 2a. The angular range data for the first subject (in Experiments 1 and 2) or the first two subjects (in the matched condition of Experiment 3) were missing due to a programming error.

3.3. Experiment 1

3.3.1. Experiment 1a: MAE

For the head-still condition, we first computed the proportion of trials in which the leftward- or rightward-drifting grating was perceived as moving faster, respectively. The proportion of rightward being reported faster was then compared to the chance level (50%) by using a Wilcoxon signed-rank test, which did not show a significant difference (Md = 45.83%, Z=1.88, p=0.061, r=0.419; the average proportion for rightward motion was 44.08% ± 13.23%, see Fig. 3a). The non-significant trend was largely due to the subjects’ bias, especially that of subject #11. Data normality was largely restored (Shapiro–Wilk test, p=0.110) after this subject’s data were removed, and the statistics of the comparison remained non-significant (Z=1.61, p=0.107, r=0.370).

Figure 3.

The graph in (a) shows the proportion of trials in which the rightward drifting grating was perceived as moving faster in the head-still condition. In the present study, a grating was considered ‘congruent’ if its drifting direction was consistent with the usual direction of retinal motion induced by head rotation in everyday life. The index ‘percent congruent’ was defined as the proportion of trials in which the congruent grating was perceived as moving faster than the incongruent grating. The graph in (b) shows the grand average percent congruent value in Experiments 1 and 2. Each cross represents a subject. The bars show the grand average data. The asterisks indicate significant differences from the chance (50%) level (p<0.05 for the single asterisk, p<0.01 for the double asterisks). Error bars represent standard errors of means. ‘Fst’, ‘Mdm’, ‘Slw’, and ‘Mch’ denote the conditions of fast real motion, medium-speed real motion, slow real motion, and real motion matching the head rotation velocity, respectively.


Since the response proportion in the head-still condition was not significantly different from the chance level (50%), the percent congruent data in the head-movement condition were then compared to the chance level with a one-sample Wilcoxon signed-rank test. The results showed that in most cases subjects reported seeing faster congruent gratings rather than incongruent gratings (Md = 76.26%, Z=3.92, p<10⁻⁴, r=0.877). On average, the congruent gratings appeared to be faster in 77.21% (SD = 15.32%) of trials (see Fig. 3b).

Statistical results for all the experiments are also listed in Table 2.

Table 2.

The summary of the statistics (non-parametric) for all the experiments


3.3.2. Experiment 1b: Real Motion

For all four real-motion conditions with the head still, the proportion of trials in which rightward motion was reported as faster did not differ significantly from the chance level (50%) according to the t-test comparison (see Fig. 3a and Table 2 for detailed statistics).

Accordingly, the percent congruent data in the head-movement condition were then compared to the chance level with a Wilcoxon signed-rank test. Unlike in Experiment 1a, the results for the fast and medium-speed conditions revealed that in most trials subjects reported seeing faster incongruent rather than congruent gratings, whereas the result for the slow-speed condition showed the same pattern as in Experiment 1a (see Fig. 3b and Table 2 for detailed statistics).

To examine whether the effects for the MAE in Experiment 1a and the slow real-motion condition in Experiment 1b were equally strong, we further compared the percent congruent data between the two conditions. Interestingly, we found that the percent congruent values in Experiment 1a (77.21% ± 15.32%, Md = 76.26%) were significantly higher than those for the slow real-motion condition (63.79% ± 19.18%, Md = 65.00%) (Z = 2.30, p = 0.022, r = 0.363), indicating a stronger effect for the MAE.

3.4. Experiment 2

Because for the head-still condition the proportion of rightward being reported faster did not differ significantly from the chance level (see Fig. 3a and Table 2), the percent ‘congruent’ data in the head-movement condition were then compared to the chance level. However, we did not find a significant difference between them (see Fig. 3a and Table 2 for detailed statistics). On average, the ‘congruent’ gratings appeared to be faster in 54.17% (SD = 12.24%) of trials (see Fig. 3b). Note that the word ‘congruent’ is in quotes because the ‘congruent’ gratings in this experiment were actually stationary. They were defined as ‘congruent’ based on the instruction.

This result suggested that the finding in Experiment 1a was very unlikely to be due to a response bias during head rotation.

3.5. Experiment 3

3.5.1. Experiment 3a: MAE

Similar to the results in Experiment 1a, the congruent MAE was perceived as moving faster in most trials (see Fig. 4 and Table 2 for detailed statistics). Therefore, the finding of Experiment 1a was replicated in both the voluntary and passive conditions. Moreover, the percent congruent data were comparable between the voluntary and passive conditions (Z=0.55, p=0.584, r=0.087).

Figure 4.

The graph in (a) shows the grand average percent congruent values in Experiment 3. The asterisks indicate significant differences from the chance (50%) level (p<0.05 for single asterisk, p<0.01 for double asterisks). The open circles and crosses represent individual data. The error bars represent standard errors of means. Here, ‘Fst’, ‘Mdm’, ‘Slw’, ‘Mch’, ‘Vol’, and ‘Pas’ denote conditions of fast, medium, slow, match, voluntary, and passive, respectively.


3.5.2. Experiment 3b: Real Motion

Similar to the results of Experiment 1b, the subjects predominantly reported perceiving the incongruent gratings as moving faster in the fast and medium-speed real-motion conditions (see Fig. 4 and Table 2). However, the subjects showed no clear predominance for either grating in the slow real-motion condition (see Fig. 4 and Table 2 for detailed statistics). Except in the fast real-motion condition (Z = 2.39, p = 0.017, r = 0.378), there were no significant differences in percent congruent values between the voluntary and passive conditions (all ps > 0.31).

To compare the percent congruent data between Experiment 3a (MAE test) and the slow real-motion condition in Experiment 3b, we performed a 2 (MAE vs. slow motion) by 2 (rotation type: voluntary vs. passive) repeated-measures ANOVA. The analysis revealed a significant main effect of experiment [F(1,76) = 31.13, p < 10⁻⁵]. However, neither the main effect of rotation type [F(1,76) < 1, p = 0.99] nor the interaction [F(1,76) < 1, p = 0.96] reached statistical significance. We then performed paired Wilcoxon signed-rank tests between the two experiments for the voluntary and passive conditions, respectively. In both conditions, the congruent gratings were perceived as moving faster more frequently during the MAE test than during the slow real-motion test (voluntary condition, Z = 3.42, p < 0.001, r = 0.541; passive condition, Z = 3.47, p < 0.001, r = 0.549).

3.6. Replication Bayes Factor Analysis

The results of the replication Bayes factor analysis are listed in Table 3. The value of log10(BFr0) exceeded 2 for Experiment 3a and for the fast-, medium-, and matched-speed real-motion conditions in Experiment 3b, indicating decisive (or extremely strong) support for the replication hypothesis (Kass and Raftery, 1995). By contrast, for the slow real-motion condition of Experiment 3b the value was between 1/2 and 1 in absolute terms [i.e., log10(BF0r) = −log10(BFr0) = 0.81], indicating substantial (or medium) support for the null hypothesis (Kass and Raftery, 1995). Therefore, the results of Experiment 3 replicated the findings of Experiment 1 well, except for the slow real-motion condition.
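The Kass and Raftery (1995) cut-offs used above can be made explicit in a small helper. The function name and the string labels are ours; only the thresholds on log10(BF) follow that paper.

```python
# Hypothetical helper mapping log10(Bayes factor) to the Kass & Raftery (1995)
# evidence categories used in the text. Positive values favour the replication
# hypothesis; negative values favour the null.
def evidence_label(log10_bf):
    strength = abs(log10_bf)
    if strength < 0.5:
        category = "barely worth mentioning"
    elif strength < 1.0:
        category = "substantial"
    elif strength < 2.0:
        category = "strong"
    else:
        category = "decisive"
    favours = "replication" if log10_bf > 0 else "null"
    return category, favours

print(evidence_label(2.3))    # ('decisive', 'replication')
print(evidence_label(-0.81))  # ('substantial', 'null')
```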

Table 3.

Experiment 3 (voluntary condition) as a replication of Experiment 1


3.7. The Relationship Between Head Rotation Velocity and MAE

For all the experiments, we computed the correlation between the subjects’ percent congruent data and average velocities of head rotation. As shown in Fig. 5, a significant correlation was observed only in the MAE conditions (i.e., Experiment 1a and Experiment 3a, see Table 2 for detailed statistics), suggesting that the subjects with faster head rotations showed higher percent congruent values for the MAE but not for the real-motion stimuli.
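This per-subject correlation analysis can be sketched as follows, with synthetic data and SciPy's `spearmanr` standing in for the actual software; the simulated positive relationship between velocity and percent congruent is an assumption for illustration.

```python
# Illustrative sketch (synthetic data): Spearman rank correlation between
# each subject's percent congruent value and average head rotation velocity.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
velocity = rng.uniform(10.0, 40.0, size=16)   # hypothetical mean velocities (deg/s)
percent_congruent = 50.0 + velocity + rng.normal(0.0, 5.0, size=16)

rho, p = spearmanr(velocity, percent_congruent)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```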

Figure 5.

The linear correlation between the percent congruent value and head rotation velocity in each experiment. Each circle represents a subject. Solid lines show the linear fits on the individual data. The red and blue lines represent significant correlations (p<0.01 for the red line, p<0.05 for the blue lines), whereas the black lines represent non-significance. ‘Fst’, ‘Mdm’, ‘Slw’, ‘Mch’, ‘Vol’, and ‘Pas’ denote conditions of fast, medium, slow, matched, voluntary, and passive, respectively.


Two possible explanations are available for the correlation results. First, Fig. 2a shows that within each experiment head rotations covered a similar angular range in most cases. Thus, a faster average head rotation velocity implies stronger acceleration and deceleration of the head movement. Because faster head rotation corresponds to stronger vestibular input signals, and was found here to correlate with stronger modulation of the perceived velocity of the MAE (but not of real motion), we propose that the MAE was modulated by the vestibular signal in a more specific and efficient way than real motion. This explanation agrees with the notion that the MAE is strongly related to visual–vestibular interaction, and differs from real motion with respect to such interaction.

Alternatively, the correlation may reflect variance in test phase duration: slower head rotation corresponded to a longer test phase in the top-up adaptation paradigm, allowing a greater degree of de-adaptation. Note that this explanation also speaks against a common vestibular modulation of the MAE and real-motion signals. Because of the hypothetical effect of de-adaptation, subjects with slower head rotations (hereafter ‘slow’ subjects) would show a weaker MAE. Imagine that the subjects kept their heads stationary yet received exactly the same visual stimulation and test phase durations as in Experiment 1a. Since the perceived velocity of an MAE is thought to reflect its magnitude or strength (Pantle, 1998), de-adaptation would cause the ‘slow’ subjects to perceive a slower MAE than the ‘fast’ subjects. If head rotation affected the MAE and real motion in a common manner, an MAE signal would in theory be equivalent, in terms of vestibular modulation, to some slow real-motion signal. Once the subjects rotated their heads, we would then expect the percent congruent values of the ‘slow’ subjects to exceed those of the ‘fast’ subjects, because the real-motion results in Fig. 3b clearly showed that the percent congruent value increased as motion speed decreased. In other words, one would expect a positive correlation between the percent congruent data and test phase duration, or a negative correlation between the percent congruent data and head rotation velocity. As shown in Fig. 5, however, the opposite was observed. Accordingly, the test-phase-duration account also agrees with the notion that the MAE and real-motion signals are modulated differently by head rotation.

To evaluate which explanation better accounted for the MAE-specific correlation results, we conducted a correlation analysis between the subjects’ percent congruent data and test phase durations. No significant correlation was found in Experiment 1a (Spearman’s ρ = 0.28, p = 0.240). In Experiment 3a, we found a significant negative correlation only in the passive condition (Spearman’s ρ = −0.74, p < 0.001) but not in the voluntary condition (Spearman’s ρ = 0.35, p = 0.130). These results generally indicate that the variation in test phase duration did not produce a decisive de-adaptation effect. Therefore, our correlation results are better accounted for by a specific and efficient visual–vestibular interaction on the MAE. Moreover, the visual–vestibular interaction in the MAE was essentially different from that in real motion, given both the MAE-specific correlation results and the distinct patterns of percent congruent shown in Fig. 3b and Fig. 4.

4. Discussion

The present study investigated how head rotation affects speed perception for illusory (MAE) and physical motion. We found that the MAE was predominantly perceived as moving faster when its direction was opposite to that of head rotation than when the two directions were the same. However, a reversed pattern was observed for physical stimuli moving at fast or medium speeds. For slow real motion, we obtained mixed results: a pattern similar to (though weaker than) that of Experiment 1a was observed in the slow-motion condition of Experiment 1b, but this finding was not replicated in Experiment 3b, where a null effect of head rotation was verified using a replication Bayes factor analysis. Experiment 2 verified that the effect for the MAE was not due to a subjective response bias during head rotation. Furthermore, Experiment 3 indicated that the modulation by head rotation was unlikely to be due to efference copy signals (the only exception being the fast real-motion condition).

For both Experiments 1 and 3, we examined the correlation between the subjects’ percent congruent values and average head rotation velocities. Reliable positive correlations were observed in both Experiments 1a and 3a, where we tested the influence of head rotation on the MAE. However, no significant correlations were found in the other sub-experiments, where physical motion stimuli at different speeds were tested. These MAE-specific correlation results lend credence to the notion that an MAE-specific mechanism underlies the findings in Experiments 1a and 3a. The phenomenon we observed there cannot be accounted for simply by a modulation of speed perception by head movement that applies to both real and illusory motion. Rather, it suggests a distinct vestibular modulation of the MAE.

Since the literature (Hogendoorn and Verstraten, 2013; Rühl et al., 2018; Seiffert et al., 2003) supports the notion that the MAE is represented in area MT+ (especially the MST), velocity processing for the MAE is likely executed there as well. To provide a comprehensive explanation for the present findings, we propose a cross-modal bias hypothesis. The bias essentially reflects the way in which multisensory integration usually works. In everyday life, many retinal signals are caused by self-motion. On the one hand, the neural system suppresses self-motion-induced retinal signals in order to highlight actual object motion in the environment (Miall and Wolpert, 1996; Wallach, 1987), a consequence similar to the P&B-effect and the freezing illusion (Mesland and Wertheim, 1996; Pavard and Berthoz, 1977; Wertheim and Reymond, 2007). This may explain the findings in the fast and medium-speed real-motion conditions of Experiments 1b and 3b. On the other hand, the neural system may learn to develop a natural association between vestibular self-motion signals and self-motion-induced retinal signals. Over a lifetime, this association becomes so strong that signals from one modality can produce a bias signal in the congruent direction for the other modality. The rationale for establishing and expressing the association may be rooted in Hebbian synaptic learning (Hebb, 1949). We hypothesize that the cross-modal bias signal is relatively weak (to avoid substantial interference with feed-forward signals that represent the real world more faithfully) and is produced locally in the MST area. This hypothesis has received some preliminary support from our empirical observations. According to the hypothesis, in the real-motion conditions the influence of the bias signal might be corrected or diluted to some extent by feed-forward motion signals transmitted from earlier processing stages.
By contrast, the MAE signal is believed to arise locally in the MST, and thus has the same origin as the bias signal. Accordingly, the hypothesis predicts that the MAE may be more susceptible to the cross-modal bias signal than real motion; in the extreme case, the bias signal may even participate in forming the MAE. This prediction was supported by the significant correlation between the percent congruent value and head rotation velocity in Experiments 1a and 3a but not in the other sub-experiments. Note that the same correlation was not observed in the slow real-motion condition of Experiment 1b, even though the percent congruent value in that condition was also above the chance level. Furthermore, in the Bayesian framework of multisensory integration (Fetsch et al., 2013; Gu et al., 2008; Knill and Pouget, 2004), the influence of either bias or bottom-up signals depends on their respective reliabilities. Thus, the influence of the bias signal on real motion can be evident only when the veridical feed-forward signals are sufficiently weak relative to the bias (i.e., sufficiently slow motion); when strong veridical feed-forward signals are present, intrinsic neural noise within the perceptual system may cause the weak cross-modal bias signals to be overridden. In practice, however, whether a given slow real motion is sufficiently weak depends on individual differences. This perhaps explains why the finding of the slow real-motion condition in Experiment 1b was not replicated in Experiment 3b, considering that the samples of the two experiments were substantially non-overlapping. Future work should examine the reliability of the real-motion and MAE signals more closely to confirm this hypothesis.
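The reliability dependence can be illustrated with the standard reliability-weighted (maximum-likelihood) cue-combination rule from that literature. This is a minimal sketch with made-up numbers, not a model fitted to our data; the function name is ours.

```python
# Minimal sketch of reliability-weighted (maximum-likelihood) cue combination:
# each cue is weighted by its reliability (inverse variance). A weak bias
# signal shifts the combined estimate appreciably only when the feed-forward
# signal is unreliable. All numbers are illustrative.
def combine(mu_ff, sigma_ff, mu_bias, sigma_bias):
    w_ff = sigma_bias**2 / (sigma_ff**2 + sigma_bias**2)  # weight of feed-forward cue
    return w_ff * mu_ff + (1.0 - w_ff) * mu_bias

# reliable feed-forward motion signal: the bias barely matters
fast = combine(mu_ff=10.0, sigma_ff=1.0, mu_bias=0.0, sigma_bias=5.0)
# unreliable (slow/weak) feed-forward signal: the bias pulls the estimate
slow = combine(mu_ff=10.0, sigma_ff=5.0, mu_bias=0.0, sigma_bias=5.0)
print(fast, slow)  # fast stays near 10; slow is pulled toward the bias
```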

Our MAE-specific correlation results are less consistent with the principle of inverse effectiveness in multisensory integration, proposed in research on audiovisual integration in the cat superior colliculus (Meredith and Stein, 1983; Stein and Stanford, 2008). The principle states that multisensory integration increases as unisensory responses decrease (e.g., under low-intensity stimulation). Evidently, faster head rotation corresponds to stronger vestibular signals. Interestingly, in Experiments 1a and 3a these stronger unisensory responses were associated with more profound consequences of multisensory integration (i.e., higher percent congruent values), a violation of the principle of inverse effectiveness. Nevertheless, the relationship between this principle and the current observations awaits further investigation at both the behavioral and neural levels.

It should be noted that during the passive rotations involuntary micro-movements of the head with respect to the trunk might occasionally have occurred, even though the subjects were instructed to remain stationary relative to the chair. Thus, our findings might reflect the integration of visual signals with both vestibular and proprioceptive signals. However, the head yaw rotations in the experiments were a few orders of magnitude greater than any micro-movements of the head on the trunk, so the potential contribution from proprioceptive signals was presumably minor compared with that from vestibular signals. Future work with more sensors could examine to what extent the present phenomena reflect visual–vestibular and/or visual–proprioceptive interactions, which is beyond the scope of the current work.

Finally, although our work focuses on visual–vestibular modulation of illusory motion, the cross-modal bias hypothesis we propose here can also be extended to explain other interesting phenomena in audiovisual, audiotactile and olfacto–visual integration (Lunghi et al., 2014; Shams et al., 2000; Zhou et al., 2010). As more advanced neuroimaging techniques are developed, the mechanisms underlying the cross-modal bias signal can be explored further.

*These two authors contributed equally to this work.
**To whom correspondence should be addressed. E-mail: baom@psych.ac.cn
Acknowledgements

This research was supported by the National Natural Science Foundation of China (31571112, 31871104, 31271175, 31525011 and 31830037) and the Key Research Program of Chinese Academy of Sciences (XDB02010003 and QYZDB-SSW-SMC030). Authors’ contributions: M.B. conceived the study; J.B., M.B., Y.J. and T.Z. designed the experiments; J.B. and X.H. performed the experiments; X.H., J.B. and M.B. analyzed the data; M.B., X.H., T.Z., and Y.J. wrote the paper.

Conflict of Interest

The authors have no conflict of interest to declare.

References

  • Addams, R. (1834). An account of a peculiar optical phænomenon seen after having looked at a moving body, Lond. Edinb. Philos. Mag. J. Sci. 5, 373–374.

  • Anstis, S., Verstraten, F. A. J. and Mather, G. (1998). The motion aftereffect, Trends Cogn. Sci. 2, 111–117.

  • Bai, J., Bao, M., Zhang, T. and Jiang, Y. (2019). A virtual reality approach identifies flexible inhibition of motion aftereffects induced by head rotation, Behav. Res. Methods 51, 96–107.

  • Barlow, H. B. and Hill, R. M. (1963). Evidence for a physiological explanation of the waterfall phenomenon and figural after-effects, Nature 200, 1345–1347.

  • Britten, K. H. and van Wezel, R. J. A. (1998). Electrical microstimulation of cortical area MST biases heading perception in monkeys, Nat. Neurosci. 1, 59–63.

  • Castelo-Branco, M., Kozak, L. R., Formisano, E., Teixeira, J., Xavier, J. and Goebel, R. (2009). Type of featural attention differentially modulates hMT+ responses to illusory motion aftereffects, J. Neurophysiol. 102, 3016–3025.

  • Culham, J. C., Dukelow, S. P., Vilis, T., Hassard, F. A., Gati, J. S., Menon, R. S. and Goodale, M. A. (1999). Recovery of fMRI activation in motion area MT following storage of the motion aftereffect, J. Neurophysiol. 81, 388–393.

  • Fetsch, C. R., DeAngelis, G. C. and Angelaki, D. E. (2013). Bridging the gap between theories of sensory cue integration and the physiology of multisensory neurons, Nat. Rev. Neurosci. 14, 429–442.

  • Graziano, M. S. A. (1999). Where is my arm? The relative role of vision and proprioception in the neuronal representation of limb position, Proc. Natl Acad. Sci. USA 96, 10418–10421.

  • Gu, Y., Angelaki, D. E. and DeAngelis, G. C. (2008). Neural correlates of multisensory cue integration in macaque MSTd, Nat. Neurosci. 11, 1201–1210.

  • Harms, C. (2018). ReplicationBF — R package to calculate replication Bayes factors. Available at: github.com/neurotroph/ReplicationBF.

  • He, S., Cohen, E. R. and Hu, X. (1998). Close correlation between activity in brain area MT/V5 and the perception of a visual motion aftereffect, Curr. Biol. 8, 1215–1218.

  • Hebb, D. O. (1949). The Organization of Behavior: a Neuropsychological Theory. John Wiley and Sons, New York, NY, USA.

  • Hogendoorn, H. and Verstraten, F. A. (2013). Decoding the motion aftereffect in human visual cortex, NeuroImage 82, 426–432.

  • Hogendoorn, H., Alais, D., Macdougall, H. and Verstraten, F. A. J. (2017a). Velocity perception in a moving observer, Vision Res. 138, 12–17.

  • Hogendoorn, H., Verstraten, F. A. J., Macdougall, H. and Alais, D. (2017b). Vestibular signals of self-motion modulate global motion perception, Vision Res. 130, 22–30.

  • Huk, A. C., Ress, D. and Heeger, D. J. (2001). Neuronal basis of the motion aftereffect reconsidered, Neuron 32, 161–172.

  • Kass, R. E. and Raftery, A. E. (1995). Bayes factors, J. Am. Stat. Assoc. 90, 773–795.

  • Knapen, T., Rolfs, M. and Cavanagh, P. (2009). The reference frame of the motion aftereffect is retinotopic, J. Vis. 9, 16. DOI:10.1167/9.5.16.

  • Knill, D. C. and Pouget, A. (2004). The Bayesian brain: the role of uncertainty in neural coding and computation, Trends Neurosci. 27, 712–719.

  • Lunghi, C., Morrone, M. C. and Alais, D. (2014). Auditory and tactile signals combine to influence vision during binocular rivalry, J. Neurosci. 34, 784–792.

  • Matsumiya, K. and Shioiri, S. (2014). Moving one’s own body part induces a motion aftereffect anchored to the body part, Curr. Biol. 24, 165–169.

  • Maunsell, J. H. and van Essen, D. C. (1983). The connections of the middle temporal visual area (MT) and their relationship to a cortical hierarchy in the macaque monkey, J. Neurosci. 3, 2563–2586.

  • Meredith, M. A. and Stein, B. E. (1983). Interactions among converging sensory inputs in the superior colliculus, Science 221, 389–391.

  • Mesland, B. S. and Wertheim, A. H. (1996). A puzzling percept of stimulus stabilization, Vision Res. 36, 3325–3328.

  • Miall, R. C. and Wolpert, D. M. (1996). Forward models for physiological motor control, Neural Netw. 9, 1265–1279.

  • Mikellidou, K., Turi, M. and Burr, D. C. (2017). Spatiotopic coding during dynamic head tilt, J. Neurophysiol. 117, 808–817.

  • Page, W. K. and Duffy, C. J. (2003). Heading representation in MST: sensory interactions and population encoding, J. Neurophysiol. 89, 1994–2013.

  • Pantle, A. (1998). How do measures of the motion aftereffect measure up?, in: Motion Aftereffect, a Modern Perspective, G. Mather, F. Verstraten and S. Anstis (Eds), pp. 25–39. MIT Press, Cambridge, MA, USA.

  • Pavard, B. and Berthoz, A. (1977). Linear acceleration modifies the perceived velocity of a moving visual scene, Perception 6, 529–540.

  • RStudio Team (2016). RStudio: Integrated Development for R. RStudio, Boston, MA, USA. Available at: http://www.rstudio.com/. Retrieved November 12, 2018.

  • Rühl, R. M., Bauermann, T., Dieterich, M. and zu Eulenburg, P. (2018). Functional correlate and delineated connectivity pattern of human motion aftereffect responses substantiate a subjacent visual–vestibular interaction, NeuroImage 174, 22–34.

  • Schindler, A. and Bartels, A. (2018). Integration of visual and non-visual self-motion cues during voluntary head movements in the human brain, NeuroImage 172, 597–607.

  • Seiffert, A. E., Somers, D. C., Dale, A. M. and Tootell, R. B. H. (2003). Functional MRI studies of human visual motion perception: texture, luminance, attention and after-effects, Cereb. Cortex 13, 340–349.

  • Shams, L., Kamitani, Y. and Shimojo, S. (2000). What you see is what you hear, Nature 408, 788.

  • Sperry, R. W. (1950). Neural basis of the spontaneous optokinetic response produced by visual inversion, J. Comp. Physiol. Psychol. 43, 482–489.

  • Stein, B. E. and Stanford, T. R. (2008). Multisensory integration: current issues from the perspective of the single neuron, Nat. Rev. Neurosci. 9, 255–266.

  • Taylor, J. G., Schmitz, N., Ziemons, K., Grosse-Ruyken, M.-L., Gruber, O., Mueller-Gaertner, H.-W. and Shah, N. J. (2000). The network of brain areas involved in the motion aftereffect, NeuroImage 11, 257–270.

  • Théoret, H., Kobayashi, M., Ganis, G., Di Capua, P. and Pascual-Leone, A. (2002). Repetitive transcranial magnetic stimulation of human area MT/V5 disrupts perception and storage of the motion aftereffect, Neuropsychologia 40, 2280–2287.

  • Tootell, R. B., Reppas, J. B., Dale, A. M., Look, R. B., Sereno, M. I., Malach, R., Brady, T. J. and Rosen, B. R. (1995). Visual motion aftereffect in human cortical area MT revealed by functional magnetic resonance imaging, Nature 375, 139–141.

  • Turi, M. and Burr, D. (2012). Spatiotopic perceptual maps in humans: evidence from motion adaptation, Proc. Biol. Sci. 279, 3091–3097.

  • Verhagen, J. and Wagenmakers, E. J. (2014). Bayesian tests to quantify the result of a replication attempt, J. Exp. Psychol. Gen. 143, 1457–1475.

  • von Holst, E. and Mittelstaedt, H. (1950). Das Reafferenzprinzip. Wechselwirkungen zwischen Zentralnervensystem und Peripherie, Naturwissenschaften 37, 464–476.

  • Wallach, H. (1987). Perceiving a stable environment when one moves, Annu. Rev. Psychol. 38, 1–27.

  • Wenderoth, P. and Wiese, M. (2008). Retinotopic encoding of the direction aftereffect, Vision Res. 48, 1949–1954.

  • Wertheim, A. H. and Reymond, G. (2007). Neural noise distorts perceived motion: the special case of the freezing illusion and the Pavard and Berthoz effect, Exp. Brain Res. 180, 569–576.

  • Zeki, S., Watson, J. D., Lueck, C. J., Friston, K. J., Kennard, C. and Frackowiak, R. S. (1991). A direct demonstration of functional specialization in human visual cortex, J. Neurosci. 11, 641–649.

  • Zhou, W., Jiang, Y., He, S. and Chen, D. (2010). Olfaction modulates visual perception in binocular rivalry, Curr. Biol. 20, 1356–1358.


    • Search Google Scholar
    • Export Citation
  • Pavard, B. and Berthoz, A. (1977). Linear acceleration modifies the perceived velocity of a moving visual scene, Perception 6, 529–540.

    • Search Google Scholar
    • Export Citation
  • Rstudio Team (2016). RStudio: Integrated Development for R. RStudio, Boston, MA, USA. Available at: http://www.rstudio.com/. Retrieved November 12, 2018.

    • Search Google Scholar
    • Export Citation
  • Rühl, R. M., Bauermann, T., Dieterich, M. and zu Eulenburg, P. (2018). Functional correlate and delineated connectivity pattern of human motion aftereffect responses substantiate a subjacent visual–vestibular interaction, NeuroImage 174, 22–34.

  • Schindler, A. and Bartels, A. (2018). Integration of visual and non-visual self-motion cues during voluntary head movements in the human brain, NeuroImage 172, 597–607.

    • Search Google Scholar
    • Export Citation
  • Seiffert, A. E., Somers, D. C., Dale, A. M. and Tootell, R. B. H. (2003). Functional MRI studies of human visual motion perception: texture, luminance, attention and after-effects, Cereb. Cortex 13, 340–349.

    • Search Google Scholar
    • Export Citation
  • Shams, L., Kamitani, Y. and Shimojo, S. (2000). What you see is what you hear, Nature 408, 788.

  • Sperry, R. W. (1950). Neural basis of the spontaneous optokinetic response produced by visual inversion, J. Comp. Physiol. Psychol. 43, 482–489.

    • Search Google Scholar
    • Export Citation
  • Stein, B. E. and Stanford, T. R. (2008). Multisensory integration: current issues from the perspective of the single neuron, Nat. Rev. Neurosci. 9, 255–266.

    • Search Google Scholar
    • Export Citation
  • Taylor, J. G., Schmitz, N., Ziemons, K., Grosse-Ruyken, M.-L., Gruber, O., Mueller-Gaertner, H.-W. and Shah, N. J. (2000). The network of brain areas involved in the motion aftereffect, NeuroImage 11, 257–270.

    • Search Google Scholar
    • Export Citation
  • Théoret, H., Kobayashi, M., Ganis, G., Di Capua, P. and Pascual-Leone, A. (2002). Repetitive transcranial magnetic stimulation of human area MT/V5 disrupts perception and storage of the motion aftereffect, Neuropsychologia 40, 2280–2287.

    • Search Google Scholar
    • Export Citation
  • Tootell, R. B., Reppas, J. B., Dale, A. M., Look, R. B., Sereno, M. I., Malach, R., Brady, T. J. and Rosen, B. R. (1995). Visual motion aftereffect in human cortical area MT revealed by functional magnetic resonance imaging, Nature 375, 139–141.

    • Search Google Scholar
    • Export Citation
  • Turi, M. and Burr, D. (2012). Spatiotopic perceptual maps in humans: evidence from motion adaptation, Proc. Biol. Sci. 279, 3091–3097.

    • Search Google Scholar
    • Export Citation
  • Verhagen, J. and Wagenmakers, E. J. (2014). Bayesian tests to quantify the result of a replication attempt, J. Exp. Psychol. Gen. 143, 1457–1475.

    • Search Google Scholar
    • Export Citation
  • von Holst, E. and Mittelstaedt, H. (1950). Das Reafferenzprinzip. Wechselwirkungen zwischen Zentralnervensystem und Peripherie, Naturwissenschaften 37, 464–476.

    • Search Google Scholar
    • Export Citation
  • Wallach, H. (1987). Perceiving a stable environment when one moves, Annu. Rev. Psychol. 38, 1–27.

  • Wenderoth, P. and Wiese, M. (2008). Retinotopic encoding of the direction aftereffect, Vision Res. 48, 1949–1954.

  • Wertheim, A. H. and Reymond, G. (2007). Neural noise distorts perceived motion: the special case of the freezing illusion and the Pavard and Berthoz effect, Exp. Brain Res. 180, 569–576.

    • Search Google Scholar
    • Export Citation
  • Zeki, S., Watson, J. D., Lueck, C. J., Friston, K. J., Kennard, C. and Frackowiak, R. S. (1991). A direct demonstration of functional specialization in human visual cortex, J. Neurosci. 11, 641–649.

    • Search Google Scholar
    • Export Citation
  • Zhou, W., Jiang, Y., He, S. and Chen, D. (2010). Olfaction modulates visual perception in binocular rivalry, Curr. Biol. 20, 1356–1358.

    • Search Google Scholar
    • Export Citation
    Illustration of the apparatus and stimuli. We developed a virtual reality system (a). The visual stimuli were presented on the goggle screens. In the head movement condition, subjects rotated their heads back and forth in the horizontal plane, with the head movement data tracked in real time by a three-space sensor. The graphs in (b) show the stimuli in Experiment 1a. The red and blue arrows show the directions of real motion and motion aftereffect, respectively, and were not actually presented. The head remained still during the initial (30 s) and top-up (10 s each) adaptation phases. Between every two successive adaptation phases was a test phase in which the subject made a single head rotation from one side to the other. Subjects indicated which test grating appeared to move faster at the end of each head rotation. The graphs in (c) show the stimuli in Experiment 1b. On each trial, subjects rotated their heads to one side. At the end of each head rotation, subjects indicated which of the two gratings appeared to move faster. Thereafter, a white noise image flickered to eliminate any residual aftereffect.

    The mean velocity and angle of head rotation across the subjects in each experiment and condition (a). Error bars represent standard errors of means (SEM). The graphs in (b) show the velocity profile of the subjects’ head rotation in each experiment. Solid lines indicate the grand average value, and the shaded areas indicate 1 SEM. ‘Fst’, ‘Mdm’, ‘Slw’, ‘Mch’, ‘Vol’, and ‘Pas’ denote the conditions of fast real motion, medium-speed real motion, slow real motion, real motion matching the head rotation velocity, voluntary self-motion, and passive self-motion, respectively.

    The results (p-values) of Shapiro–Wilk tests

    The graph in (a) shows the proportion of trials in which the rightward drifting grating was perceived as moving faster in the head-still condition. In the present study, a grating was considered ‘congruent’ if its drifting direction was consistent with the usual direction of retinal motion induced by head rotation in everyday life. The index ‘percent congruent’ was defined as the proportion of trials in which the congruent grating was perceived as moving faster than the incongruent grating. The graph in (b) shows the grand average percent congruent value in Experiments 1 and 2. Each cross represents a subject. The bars show the grand average data. The asterisks indicate significant differences from the chance (50%) level (p<0.05 for the single asterisk, p<0.01 for the double asterisks). Error bars represent standard errors of means. ‘Fst’, ‘Mdm’, ‘Slw’, and ‘Mch’ denote the conditions of fast real motion, medium-speed real motion, slow real motion, and real motion matching the head rotation velocity, respectively.
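The ‘percent congruent’ index described above is a simple proportion over trials. A minimal sketch of that computation (a hypothetical helper for illustration only, not taken from the article’s analysis code, which was written in R):

```python
# Hypothetical sketch of the 'percent congruent' index: the proportion of
# trials (in %) in which the grating congruent with the usual direction of
# head-rotation-induced retinal motion was judged to move faster.
def percent_congruent(judgments):
    """judgments: one response per trial, 'congruent' or 'incongruent'.
    Returns the percentage of 'congruent' responses; 50% is chance."""
    faster = sum(1 for j in judgments if j == "congruent")
    return 100.0 * faster / len(judgments)

# Example: 14 of 20 trials favor the congruent grating.
trials = ["congruent"] * 14 + ["incongruent"] * 6
print(percent_congruent(trials))  # 70.0
```

A value significantly above the 50% chance level (as in the figure) indicates that the congruent grating was predominantly perceived as faster.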

    Summary of the non-parametric statistics for all the experiments

    The graph in (a) shows the grand average percent congruent values in Experiment 3. The asterisks indicate significant differences from the chance (50%) level (p<0.05 for the single asterisk, p<0.01 for the double asterisks). The open circles and crosses represent individual data. The error bars represent standard errors of means. Here, ‘Fst’, ‘Mdm’, ‘Slw’, ‘Mch’, ‘Vol’, and ‘Pas’ denote the fast, medium, slow, match, voluntary, and passive conditions, respectively.

    Experiment 3 (voluntary condition) as a replication of Experiment 1

    The linear correlation between the percent congruent value and head rotation velocity in each experiment. Each circle represents a subject. Solid lines show the linear fits to the individual data. The red and blue lines represent significant correlations (p<0.01 for the red line, p<0.05 for the blue lines), whereas the black lines represent non-significance. ‘Fst’, ‘Mdm’, ‘Slw’, ‘Mch’, ‘Vol’, and ‘Pas’ denote the fast, medium, slow, matched, voluntary, and passive conditions, respectively.
