Multiple Spatial Coordinates Influence the Prediction of Tactile Events Facilitated by Approaching Visual Stimuli.

Interaction with other sensory information is important for the prediction of tactile events. Recent studies have reported that visual information approaching the body facilitates the prediction of subsequent tactile events. However, the processing of tactile events is influenced by multiple spatial coordinates, and it remains unclear how this approach effect operates across different spatial coordinates, i.e., spatial reference frames. We investigated the relationship between the prediction of a tactile stimulus via this approach effect and spatial coordinates by comparing event-related brain potentials (ERPs). Participants placed their arms on a desk and were required to respond to tactile stimuli, which were presented to the left (or right) index finger with a high probability (80%) or to the opposite index finger with a low probability (20%). Before each tactile stimulus, visual stimuli approached sequentially toward the hand to which the high-probability tactile stimulus was presented. In the uncrossed condition, each hand was placed on its corresponding side. In the crossed condition, the hands were crossed so that each was placed on the opposite side, i.e., the left (right) hand on the right (left) side. Thus, the spatial locations of the tactile stimulus and the hand were consistent in the uncrossed condition and inconsistent in the crossed condition. The results showed that N1 amplitudes elicited by high-probability tactile stimuli decreased only in the uncrossed condition. These results suggest that the prediction of a tactile stimulus facilitated by approaching visual information is influenced by multiple spatial coordinates.


Introduction
Interaction with other sensory information is important for tactile processing. Physical contact with dangerous objects often hurts our body. We can avoid dangerous objects and protect our bodies by predicting 'where', 'when', and 'what' tactile events will occur prior to contact; however, predicting these events is difficult in the tactile modality, because tactile sensation is evoked after physical contact has occurred. Therefore, it is thought that the prediction of tactile events is enabled by using other sensory information. In particular, many studies have reported that visual information influences the processing of subsequent tactile sensation (for a review, see Spence, 2010).
In addition, recent electrophysiological studies have reported that prior visual stimuli approaching the body facilitate prediction spatially, temporally, and in terms of type (i.e., 'where', 'when', and 'what') of subsequent tactile events (Kimura and Katayama, 2015, 2017, 2018). In a study by Kimura and Katayama (2015), participants were instructed to respond to a tactile stimulus to the hand; the left (or right) hand usually received the stimulus, and the opposite hand rarely received it. Before the tactile stimuli were presented, visual stimuli either approached the hand where the tactile stimuli were frequently presented or did not move; e.g., if the frequent tactile stimulus was presented to the left hand, visual stimuli were presented from right to left in the approach situation. A comparison of event-related brain potentials (ERPs) to the tactile stimuli showed that the approaching visual stimuli facilitated the prediction of tactile stimuli presented on the approach side and increased the deviant response for the opposite side. This visuo-tactile approach effect also facilitated the prediction of the timing and type of tactile events (Kimura and Katayama, 2017, 2018). These studies suggest that the spatial relationship between external visual information and the body is important for the prediction of tactile events.
These previous studies reported that the approach of visual information facilitates the prediction of subsequent tactile events; however, it is unclear how this approach effect influences tactile events in different spatial coordinates (i.e., spatial reference frames). The processing of tactile events is influenced by multiple spatial coordinates. In particular, the hands relate to body-centered coordinates (i.e., an anatomical sagittal midline-based spatial frame) and body parts-centered coordinates (i.e., a skin-based spatial frame; see Heed et al., 2015 for a review). In the majority of situations, our left (right) hand operates on the left (right) side of the workspace, and the body parts-centered coordinate and the body-centered coordinate are congruent; however, we can move the left (right) hand to the right (left) side (i.e., the opposite side) because the hands are mobile sensory organs. For example, we can use our right hand to pick up an object on the left side if we cannot use our left hand, and we can return a right-side tennis ball with our left backhand. In this situation, the body parts-centered coordinate and the body-centered coordinate are incongruent, and the processing of tactile events is influenced by this spatial coordinate mismatch. For example, many studies have reported that crossing the hands impairs performance on tactile temporal-order judgment (TOJ) tasks (e.g., Heed et al., 2012; Shore et al., 2002; Yamamoto and Kitazawa, 2001). In addition, several studies have reported that this crossing-the-hands effect also occurs between a prior visual cue (or distractor) and a subsequent tactile target (e.g., Holmes et al., 2006; Spence et al., 2004). These crossing-the-hands effects may be explained by theories of touch remapping (see Heed and Azañón, 2014 for a review).
For example, according to the space-to-body projection account, we refer to a body-centered coordinate first and then remap it to a body parts-centered coordinate (Kitazawa, 2002; Yamamoto and Kitazawa, 2001); according to the spatial conflict account, remapping proceeds in the opposite direction (Cadieux et al., 2010; Shore et al., 2002); and according to the spatial integration account, both coordinates are activated at the same time and are influenced by top-down modulation (Badde et al., 2014). These theories suggest that the processing of tactile and visuo-tactile events is influenced by both spatial coordinates. Therefore, it is possible that this crossing-the-hands effect also influences the prediction of tactile events facilitated by the approach of visual information.
To test this hypothesis, we manipulated hand position while participants saw visual stimuli approaching their hands and responded to subsequent tactile stimuli, and we compared the resulting ERPs. Visual stimuli approaching the hands are known to influence the prediction of a subsequent tactile stimulus and the ERPs it elicits (e.g., Kimura and Katayama, 2015, 2017, 2018). We investigated the relationship between the prediction of a tactile stimulus facilitated by this approach effect and the crossing-the-hands effect using the paradigm of these studies. Participants were required to respond to tactile stimuli, which were presented to the left (or right) index finger with a high probability (80%) or to the opposite index finger with a low probability (20%). Before each tactile stimulus, visual stimuli approached sequentially toward the hand to which the high-probability tactile stimulus was presented. In the uncrossed condition, each hand was placed on its usual side. In the crossed condition, each hand was placed on the opposite side, i.e., the left (right) hand on the right (left) side. Thus, the spatial locations of the tactile stimulus and the hand were consistent in the uncrossed condition and inconsistent in the crossed condition. The two conditions were administered in separate blocks. Participants were told before each block the probability with which the tactile stimuli would be presented to each index finger. The approach of the visual stimuli was therefore irrelevant information in this simple reaction time task.
We focused on ERPs, especially N1, N2, P3, and the contingent negative variation (CNV), as indices of the crossing-the-hands effect and of the prediction caused by approaching visual stimuli. These ERPs are known to be highly sensitive to prediction and to deviations from it. N1 is elicited by a tactile stimulus; habituation occurs with repeated presentation of the stimulus, and its amplitude then decreases (e.g., Kekoni et al., 1997). We predicted that the amplitude of N1 to high-probability tactile stimuli would decrease under both conditions if only one spatial coordinate is used, because the visual stimuli approach one hand or one side, and the tactile stimulus is then repeatedly presented (i.e., with high probability) to that hand or space. If both spatial coordinates are used, we predicted that habituation would occur and N1 amplitude would decrease in the uncrossed condition, because the coordinates are congruent and high-probability tactile stimuli are processed as the same stimuli presented at the same location in both coordinates, whereas habituation would not occur and N1 amplitude would not decrease in the crossed condition, because the coordinates are incongruent and high-probability tactile stimuli are not processed as the same stimuli presented at the same location.
Moreover, N2 is elicited by deviation from the stimulus context and reflects pre-attentional processing (e.g., Kekoni et al., 1996, 1997; Kimura and Katayama, 2018). We predicted that N2 amplitude would not differ between the two conditions if only one spatial coordinate is used, because prediction of the high-probability tactile stimulus is facilitated by the approaching visual stimuli, and the low-probability tactile stimulus presented at the opposite hand or space deviates from this context. If both spatial coordinates are used, we predicted that N2 would be elicited by the low-probability stimulus in the uncrossed condition, in which both coordinates are congruent, and that N2 would not be elicited, or its amplitude would decrease, in the crossed condition.
Furthermore, P3 is elicited by deviation from prediction: P3 is elicited by an unpredicted stimulus, and its amplitude reflects the intensity of the deviation from prediction (e.g., Donchin, 1981; Duncan-Johnson and Donchin, 1977; Katayama and Polich, 1996). In addition, when prediction is facilitated by approaching visual stimuli, a stimulus deviating from this prediction increases the P3 amplitude (e.g., Kimura and Katayama, 2015). We predicted that P3 amplitudes would not differ between the two conditions if only one spatial coordinate is used, because prediction of the high-probability tactile stimulus is facilitated by the approaching visual stimuli, and the intensity of deviation for the low-probability tactile stimulus presented at the opposite hand or space is the same. If both spatial coordinates are used, we predicted that P3 would be elicited by the low-probability stimulus in the uncrossed condition, in which both coordinates are congruent, and that P3 would not be elicited, or its amplitude would decrease, in the crossed condition.
Finally, CNV is elicited between the prior stimulus and subsequent stimulus when the timing of the subsequent stimulus can be predicted by the previous stimulus (e.g., Walter et al., 1964). A previous visuo-tactile study reported that CNV was elicited between a prior visual stimulus and a subsequent tactile stimulus (e.g., Kimura and Katayama, 2015). Therefore, if participants are able to predict the timing of tactile stimulus in both conditions, CNV will not differ between conditions.

Methods

Participants
Fifteen undergraduate and graduate students (7 females, 8 males; 19-24 years of age; mean: 21.93 years; standard error: 1.39) participated in the experiment. All participants were right-handed according to their self-report and had normal or corrected-to-normal vision. This experiment was approved by The Institute of Scientific and Industrial Research's Research Ethics Review Board under Osaka University Regulations. Written informed consent was obtained from all participants, and their rights as experimental subjects were protected.

Stimuli

Figure 1 shows the positioning of the tactile and visual stimuli. The participants were seated with their hands and forearms on an obliquely oriented board in front of them. Tactile stimuli were generated by a vibration stimulus generator (FB-2006D, Uchida Denshi Corporation, Tokyo, Japan) and delivered by solenoid vibrators (FB-1005, Uchida Denshi Corporation) attached to the index fingers. The vibration was 250 Hz with a duration of 200 ms. These stimuli were presented to the left (or right) index finger with a high probability (80%) and to the opposite index finger with a low probability (20%). The stimuli were presented in random order from trial to trial, and the location (left or right) of the high- (low-) probability stimulus was counterbalanced across blocks. Three white light-emitting diodes (LEDs) served as visual stimuli. Each LED was a square with 0.8 cm sides. The three LEDs were placed at equal distances (8.0 cm intervals) between the arms on the obliquely oriented board. The visual stimuli were single block pulses of 200 ms in duration with a light intensity of 25 cd.

Procedure
Each trial was composed of three visual stimuli and one tactile stimulus. The stimulus onset asynchrony (SOA) was 1000 ms. The interval between trials was either 1000 or 1200 ms, chosen at random with equal probability. Each block was composed of 84 trials (high-probability tactile stimuli: 64 trials; low-probability tactile stimuli: 16 trials; no tactile stimulus (catch trials): 4 trials), which took approximately 7 min. Two blocks were presented for each condition. The interval between blocks was 2 min, and after the second block, the participants rested for 10 min and then completed the remaining two blocks. The order of conditions was randomized between participants.
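The block composition described above can be sketched as follows. This is an illustrative Python snippet, not the authors' stimulus-control code; the trial labels and dictionary fields are hypothetical names chosen for the example.

```python
import random

def make_block(seed=None):
    """Build one randomized block of 84 trials: 64 high-probability,
    16 low-probability, and 4 catch trials, with an inter-trial
    interval of 1000 or 1200 ms chosen with equal probability.
    (Illustrative sketch; labels are hypothetical, not the authors' code.)
    """
    rng = random.Random(seed)
    trials = ["high"] * 64 + ["low"] * 16 + ["catch"] * 4
    rng.shuffle(trials)  # random order from trial to trial
    return [{"stimulus": t, "iti_ms": rng.choice([1000, 1200])} for t in trials]

block = make_block(seed=1)
```

Each participant completed two such blocks per condition, so the high-probability finger received 128 stimuli per condition in total.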
The two conditions were distinguished by the position of the hands, and this position was administered in separate blocks. Figure 2 shows the procedure for each condition. In the uncrossed condition, each hand was placed on the same side, i.e., left (right) hand placed on the left (right) side. LEDs flashed sequentially toward the hand where the high-probability tactile stimulus was presented (i.e., if the high-probability tactile stimulus was set at the left index finger, the LED flashed sequentially right, center, and left), and the subsequent tactile stimulus was presented to the left (or right) index finger. In the crossed condition, each hand was crossed and placed on the opposite side, i.e., left (right) hand placed on the right (left) side. The spatial locations of the tactile stimulus and the hand were inconsistent in the crossed condition; thus, if the high-probability tactile stimulus was set at the right index finger, the LED flashed sequentially toward the right hand placed on the left side (i.e., LED flashed sequentially right, center, and left).
In the experimental room, the participants were asked to sit in a chair and to place their hands and forearms on an obliquely oriented board in front of them. The distance between their hands was 32.0 cm. In addition, they were required to gaze at the center LED, in order to control their eye movements, and not to move their eyes and bodies more than necessary in each condition. Moreover, the participants were instructed to respond by pressing a button with the left (or right) foot whenever the tactile stimuli were presented, and to not respond when tactile stimuli were not presented (i.e., the catch trials). Half of the participants used the left foot and the other half used the right foot. Finally, they were told at the start of each block which hand would be presented with the high-(low-) probability stimuli.
Data analysis

To analyze the EEG data, the EEGLAB toolbox (Delorme and Makeig, 2004) and the ERPLAB toolbox (Lopez-Calderon and Luck, 2014) on MATLAB (MathWorks Inc., Natick, MA, USA) were used. The data were digitally bandpass filtered at 0.01-30 Hz (6 dB/octave) using an IIR Butterworth filter. Artifacts derived from eye movements and eye blinks were rejected using an automatic EEG artifact detector based on the joint use of spatial and temporal features (ADJUST; Mognon et al., 2011) in the EEGLAB toolbox. To extract N1, N2, and P3, the EEG epoch was set at 1000 ms (including a 200 ms prestimulus baseline). Epochs in which the EEG signal variation exceeded ±100 μV were excluded from averaging. Additionally, trials with RTs shorter than 200 ms or longer than 1500 ms and trials with incorrect responses were discarded from the analysis. After artifact rejection, the number of remaining trials ranged from 118 to 128 (0-7.2% of trials rejected) for the high-probability stimulus and from 28 to 32 (0-12.5% rejected) for the low-probability stimulus. The time ranges of N1, N2, and P3 were defined as follows: N1, 70-150 ms; N2, 180-230 ms; P3, 260-380 ms. These ranges were determined from the peak latencies of the grand-averaged waveforms across all conditions used in the analysis.
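The epoching and artifact-rejection steps can be illustrated with a minimal NumPy sketch. This stands in for the EEGLAB/ERPLAB pipeline described above and assumes a single channel sampled at 1000 Hz with values in microvolts; the function name and arguments are hypothetical.

```python
import numpy as np

def average_epochs(eeg, events, sfreq=1000, tmin=-0.2, tmax=0.8, reject_uv=100.0):
    """Cut epochs around stimulus onsets, subtract the 200 ms prestimulus
    baseline, drop epochs whose absolute amplitude exceeds 100 microvolts,
    and average the remainder (a simplified stand-in for the EEGLAB/ERPLAB
    steps in the text; single channel, values in microvolts)."""
    n_pre, n_post = int(-tmin * sfreq), int(tmax * sfreq)
    kept = []
    for onset in events:
        if onset < n_pre or onset + n_post > eeg.size:
            continue  # event too close to the edge of the recording
        epoch = eeg[onset - n_pre:onset + n_post].astype(float)
        epoch -= epoch[:n_pre].mean()  # baseline correction (-200 to 0 ms)
        if np.abs(epoch).max() > reject_uv:
            continue  # artifact rejection (+/-100 uV criterion)
        kept.append(epoch)
    if not kept:
        raise ValueError("all epochs were rejected")
    return np.mean(kept, axis=0), len(kept)
```

The mean amplitude in a component window (e.g., 70-150 ms for N1) is then simply the average of the returned waveform over the corresponding samples.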
In addition, to investigate CNV, the EEG epoch was set at 1200 ms (the baseline was a 200-0 ms prestimulus of the third visual stimulus, and the onset of the tactile stimulus occurred at 1000 ms). Epochs in which the EEG signal variation exceeded ±100 μV and trials with errors were excluded from averaging. After artifact rejection, the number of remaining trials was 155-160 (0-3.2% rejected). The mean CNV amplitude was obtained from a latency window of 500-1000 ms. The appropriate latency window was defined based on observation of the resultant ERP waveforms.
Two-way repeated measures analysis of variance (ANOVA) of reaction times (RTs) to the tactile stimuli was conducted with two conditions (uncrossed and crossed) × two stimulus probabilities [high probability (80%) and low probability (20%)]. The N1, N2, and P3 mean amplitudes were assessed with a three-way repeated measures ANOVA [2 conditions × 2 stimulus probabilities × 3 electrodes (Fz, Cz, and Pz)]. These electrodes were chosen to check the distribution of N1, N2, and P3 amplitudes at the midline. Greenhouse-Geisser corrections were applied to the degrees of freedom (Greenhouse and Geisser, 1959) when Mauchly's sphericity test was significant. Effect sizes are reported as partial eta squared (ηp²). Post-hoc comparisons were made using Shaffer's modified sequentially rejective multiple test procedure, which extends Bonferroni t tests in a stepwise fashion (Shaffer, 1986). In addition, the mean CNV amplitudes at Cz, where the CNV was elicited at maximum amplitude, were compared between conditions by a paired t test, with effect size computed as Cohen's d (Cohen, 1988). The significance level was set at p < 0.05 for all statistical analyses.

Results

Table 1 shows the mean RTs of all participants. The results of the ANOVA are summarized in Table 2 and revealed a significant interaction (p < 0.05). Post-hoc comparisons indicated that the RT to the low-probability stimulus was longer than the RT to the high-probability stimulus in the uncrossed condition (p < 0.05), and that the RT to the low-probability stimulus in the uncrossed condition was longer than the RT to the low-probability stimulus in the crossed condition (p < 0.05). In addition, the main effect of stimulus probabilities was significant (p < 0.05): the RT to the low-probability stimulus was longer than the RT to the high-probability stimulus. The main effect of conditions was not significant.

Figure 3 shows the grand averages of the ERPs elicited by the high-probability (black lines) and low-probability (red lines) tactile stimuli at Fz, Cz, and Pz. The first negative deflection peaked at about 110 ms (N1), the second negative deflection to the low-probability stimulus occurred at about 180-230 ms (N2), and the last positive deflection peaked at about 320 ms (P3).

N1

Figure 4 illustrates (a) the topographic map in the time range of N1 (70-150 ms) and (b) the N1 mean amplitude for both conditions and probabilities. In the N1 analysis, a smaller (more negative) amplitude means a larger N1 response. The results of the ANOVA are summarized in Table 3 and revealed that the main effect of stimulus probabilities was significant (p < 0.05): the N1 amplitude of the low-probability stimulus was larger than that of the high-probability stimulus. Moreover, the main effect of electrodes was significant (p < 0.05); post-hoc comparisons showed that the N1 amplitudes at Fz and Cz were larger than at Pz (ps < 0.05). Furthermore, the interaction of conditions and stimulus probabilities was significant (p < 0.05); post-hoc comparisons showed that the N1 amplitude of the high-probability stimulus in the uncrossed condition was smaller than that in the crossed condition (p < 0.05), and that the N1 amplitude of the high-probability stimulus was smaller than that of the low-probability stimulus in the uncrossed condition (p < 0.05). In addition, the interaction of stimulus probabilities and electrodes was significant. Post-hoc comparisons showed that the N1 amplitude of the low-probability stimulus was larger than that of the high-probability stimulus at Fz and Cz (ps < 0.05); for the high-probability stimulus, the largest N1 amplitude was at Fz, followed in order by Cz and Pz, and for the low-probability stimulus, the N1 amplitudes at Fz and Cz were larger than at Pz (ps < 0.05).

In addition, the interaction of conditions, stimulus probabilities, and electrodes was significant. Table 4 shows the simple interaction effects and simple-simple main effects.
In the factor of uncrossed condition, the simple-simple main effect of stimulus probabilities was significant (p < 0.05), and the N1 amplitude of the high-probability stimulus was smaller than the N1 amplitude of the low-probability stimulus. Moreover, the simple-simple main effect of electrodes was significant (p < 0.05), and the largest N1 amplitude was at Fz, followed in order by Cz and Pz (ps < 0.05). In the factor of crossed condition, the simple interaction was significant (p < 0.05). Post-hoc comparisons indicated that the high-probability stimulus at Fz elicited smaller N1 amplitudes than the low-probability stimulus at Fz (p < 0.05). Moreover, N1 amplitudes at Fz and Cz were larger than at Pz for both stimulus probabilities (ps < 0.05). Furthermore, the simple-simple main effect of electrodes was significant (p < 0.05).
Post-hoc comparisons showed the same result of simple interaction of stimulus probabilities and electrodes in the crossed condition.
In the factor of high-probability stimulus, the simple-simple main effect of conditions was significant (p < 0.05), and the N1 amplitude in the uncrossed condition was smaller than in the crossed condition. Moreover, the simple-simple main effect of electrodes was significant (p < 0.05), and the largest N1 amplitude was at Fz, followed in order by Cz and Pz (ps < 0.05). In the factor of low-probability stimulus, the simple-simple main effect of electrodes was significant (p < 0.05), and the N1 amplitudes at Fz and Cz were larger than at Pz (ps < 0.05).
In the factor of Fz, the simple-simple main effect of stimulus probabilities was significant (p < 0.05), and the N1 amplitude of the high-probability stimulus was smaller than that of the low-probability stimulus. In the factor of Cz, the simple interaction was significant (p < 0.05). Post-hoc comparisons showed that the N1 amplitude of the high-probability stimulus in the uncrossed condition was smaller than that of the high-probability stimulus in the crossed condition (p < 0.05), and that the N1 amplitude of the high-probability stimulus was smaller than that of the low-probability stimulus in the uncrossed condition (p < 0.05). In the factor of Pz, the simple interaction was significant (p < 0.05). Post-hoc comparisons showed that the N1 amplitude of the high-probability stimulus in the uncrossed condition was smaller than that of the high-probability stimulus in the crossed condition (p < 0.05), and that the N1 amplitude of the high-probability stimulus was smaller than that of the low-probability stimulus in the uncrossed condition (p < 0.05). In summary, the N1 amplitude of the high-probability stimulus was smaller than that of the low-probability stimulus in both conditions, and the amplitudes of N1 at the frontal and central electrodes were larger than at the parietal electrode. In addition, the N1 amplitude of the high-probability stimulus in the uncrossed condition was smaller than in the crossed condition.

N2

Figure 5 shows (a) the topographic map in the time range of N2 (180-230 ms) and (b) the N2 mean amplitude for both conditions and probabilities. In the N2 analysis, a smaller (more negative) amplitude means a larger N2 response. The results of the ANOVA are summarized in Table 5 and revealed that the main effect of stimulus probabilities was significant (p < 0.05), and that the N2 amplitude of the low-probability stimulus was larger than that of the high-probability stimulus. Moreover, the main effect of electrodes was significant (p < 0.05).
Post-hoc comparisons showed that the N2 amplitudes at Fz and Pz were larger than at Cz (ps < 0.05).

P3

Figure 6 shows (a) the topographic map in the time range of P3 (260-380 ms) and (b) the P3 mean amplitude for both conditions and probabilities. In the P3 analysis, a larger amplitude means a larger P3 response. The results of the ANOVA are summarized in Table 6 and revealed that the main effect of stimulus probabilities was significant (p < 0.05), and that the P3 amplitude of the low-probability stimulus was larger than that of the high-probability stimulus. Moreover, the main effect of electrodes was significant (p < 0.05). Post-hoc comparisons showed that the P3 amplitudes at Cz and Pz were larger than at Fz (ps < 0.05).
In addition, the interaction of conditions, stimulus probabilities, and electrodes was significant (p < 0.05). Table 7 shows the simple interaction effects and simple-simple main effects.
In the factor of uncrossed condition, the simple-simple main effect of stimulus probabilities was significant (p < 0.05), and the P3 amplitude of the low-probability stimulus was larger than that of the high-probability stimulus. In the factor of crossed condition, the simple interaction was significant (p < 0.05). Post-hoc comparisons indicated that the P3 amplitude of the low-probability stimulus was larger than that of the high-probability stimulus at all electrodes (ps < 0.05). Moreover, the P3 amplitudes at Cz and Pz were larger than at Fz for the low-probability stimulus (ps < 0.05). Furthermore, the simple-simple main effects of stimulus probability and electrodes were significant (p < 0.05). Post-hoc comparisons showed the same result as the simple interaction for the stimulus probabilities at all electrodes and for the electrodes for the low-probability stimulus (ps < 0.05).
In the factor of high-probability stimulus, the simple-simple main effect of electrodes was significant (p < 0.05), and the P3 amplitude at Cz was larger than at Fz (p < 0.05). In the factor of low-probability stimulus, the simple-simple main effect of electrodes was significant, and the P3 amplitudes at Cz and Pz were larger than at Fz (ps < 0.05).
In the factors of Fz, Cz, and Pz, the simple-simple main effect of stimulus probabilities was significant (p < 0.05), and the P3 amplitude of the low-probability stimulus was larger than that of the high-probability stimulus (ps < 0.05). In summary, the P3 amplitude of the low-probability stimulus was larger than that of the high-probability stimulus in both conditions, and the amplitudes of P3 at the central and parietal electrodes were larger than at the frontal electrode for the low-probability stimulus.

CNV

Figure 7 illustrates the grand average CNV elicited in all trials at Cz, where the CNV was elicited at maximum amplitude. The gray area indicates the time range of CNV (500-1000 ms). A comparison between conditions by a paired t test of the mean CNV amplitude revealed no significant difference [t(14) = 0.11, p = 0.91, d = 0.02].
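The paired t test with Cohen's d used for the CNV comparison can be reproduced with a short SciPy sketch. The function name is hypothetical, and the paired-samples form of Cohen's d shown here (mean difference divided by the standard deviation of the differences) is one common convention; the text does not specify which variant the authors used.

```python
import numpy as np
from scipy import stats

def paired_comparison(cond_a, cond_b):
    """Paired t test between two conditions (e.g., mean CNV amplitude at Cz
    in the uncrossed vs. crossed condition) plus Cohen's d for paired data.
    (Illustrative sketch; the d variant assumed here is mean difference / SD
    of differences, which the original text does not specify.)"""
    a = np.asarray(cond_a, dtype=float)
    b = np.asarray(cond_b, dtype=float)
    t, p = stats.ttest_rel(a, b)        # paired (repeated measures) t test
    diff = a - b
    d = diff.mean() / diff.std(ddof=1)  # Cohen's d for paired samples
    return t, p, d
```

With fifteen participants, one value per condition each, this test has 14 degrees of freedom, matching the reported t(14).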

Discussion
The present study aimed to investigate the relationship between the prediction of a tactile stimulus facilitated by approaching visual information and the crossing-the-hands effect. For this purpose, ERPs were compared between an uncrossed condition and a crossed condition. Consistent with previous research (e.g., Kekoni et al., 1997), our results showed that N1 amplitudes in the fronto-central region (Fz and Cz electrodes) were larger than in the parietal region (Pz electrode). Moreover, N1 amplitudes elicited by high-probability tactile stimuli decreased only in the uncrossed condition. N1 is elicited by tactile stimuli; habituation occurs with repeated presentation of the stimulus, and its amplitude then decreases (e.g., Kekoni et al., 1997). In our study, visual stimuli approached the hand or side where the high-probability tactile stimulus was presented in both conditions. The only difference between the conditions was whether participants crossed their hands; therefore, the difference in N1 amplitude between the conditions is caused by the consistency of the spatial coordinates. This result indicates that consistency between the coordinates influences the early stage of stimulus processing in which habituation occurs and suggests that inconsistency between the coordinates interferes with this stage.
N2 and P3 were elicited by low-probability tactile stimuli in both conditions, and these amplitudes did not differ between conditions. Previous studies reported that N2 reflects pre-attentional processing of a deviation from the stimulus context (e.g., Kekoni et al., 1996, 1997) and that P3 reflects the intensity of a deviation from prediction (e.g., Donchin, 1981; Duncan-Johnson and Donchin, 1977; Katayama and Polich, 1996). Moreover, these amplitudes increase when a tactile stimulus deviates from the prediction facilitated by approaching visual stimuli (e.g., Kimura and Katayama, 2015, 2018). This result indicates that consistency between the coordinates did not influence the attentional stage of stimulus processing and suggests that inconsistencies between the coordinates are integrated in this stage.
The amplitude of CNV did not differ across the conditions, which is the same finding as in the previous visual approach study (Kimura and Katayama, 2015). This result indicates that the participants could predict the timing of the presentation of the tactile stimuli under both conditions, regardless of whether their hands were crossed.
In contrast to the N1 and CNV results, the RT to the low-probability stimulus was shorter in the crossed condition than in the uncrossed condition, and the difference in RT between the high-probability and low-probability tactile stimuli was smaller in the crossed condition. Previous studies reported that the facilitation of RT by the congruency of location between cue and target (i.e., the cuing effect) did not occur in the crossed-hands condition, and that the RT in the crossed-hands condition was shorter than that in the uncrossed-hands condition (e.g., Azañón and Soto-Faraco, 2008; Azañón et al., 2010). This phenomenon is called 'the inverse cuing effect' and is considered to reflect processing before the integration of the spatial coordinates (for a review, see Heed and Azañón, 2014). Our results suggest that this effect and the processing it reflects also occur in prediction facilitated by approaching visual stimuli and in deviation from this prediction.
In summary, the present study indicated that body-centered coordinates and body parts-centered coordinates influence the facilitation of the prediction of subsequent tactile events by visual stimuli approaching the hand, and that the crossing-the-hands effect generated by inconsistency between the two coordinates also influences the prediction of tactile events. Moreover, this crossing-the-hands effect occurred only in the early stage (N1) of tactile stimulus processing, not in the pre-attentive (N2) or evaluation (P3) stages. In other words, this effect influenced the bottom-up stage of sensory processing but did not influence the stages of detecting changes in stimuli or of evaluation. These results suggest a gradual process of integration of body-centered and body parts-centered coordinate information in prediction linking the visual and tactile modalities.

Conclusion
In conclusion, the crossing-the-hands effect also influences the prediction of subsequent tactile events facilitated by visual stimuli approaching the hand. By examining ERPs at each processing stage, this study extends our understanding of how spatial coordinates are integrated in multisensory and predictive processing.