Introduction to the Special Issue on Multisensory Space — Perception, Neural Representation and Navigation

Introduction
We perceive our surrounding environment using all of our senses in parallel, building a rich multisensory representation of space (Chebat et al., 2018a; Harrar et al., 2018). This multisensory representation can be used to move through our environment (Chebat et al., 2011, 2015) and to interact spatially with our surroundings, to locate objects (Tal and Amedi, 2009), and to judge their speed and relative location in space (Sors et al., 2017). In turn, this representation is also influenced by our bodily sensations (Filbrich et al., 2017). Vision is the sense best suited to spatial perception (Strelow, 1985) and certainly the most studied, but how essential is vision to the process by which we navigate?
Although certain spatial abilities in auditory and tactile spatial tasks are slightly compromised by early blindness (Gori et al., 2014, 2017), people who are congenitally blind (CB) are still capable of avoiding obstacles in cluttered and complex environments in everyday life (Kellogg, 1962), of completing and integrating paths and remembering locations (Loomis et al., 2012), and of generating cognitive representations of space from the remaining intact senses (Fortin et al., 2006; Thinus-Blanc and Gaunet, 1997). In addition, they preserve the ability to recognize a traveled route and to represent spatial information mentally (Marmor and Zaback, 1976; Passini et al., 1990). Moreover, CB can outperform the sighted in certain spatial tasks (Loomis et al., 1993), and these abilities can be further improved with training (Likova and Cacciamani, 2018).
Thus, it would seem that vision per se is not essential to the process by which we navigate. It is indeed quite helpful (for reviews, see Ekstrom, 2015; Jeamwatthanachai et al., 2019; McVea and Pearson, 2009), but even in its complete absence from birth the resulting deficit remains a purely perceptual one, not a cognitive one (Merabet et al., 2005; Vecchi et al., 2004). This is surprising given that most of the neuronal networks responsible for spatial tasks are volumetrically reduced in CB compared to the sighted (Cecchetti et al., 2016; Noppeney, 2007; Ptito et al., 2008a), including a volumetric reduction of the posterior portion of the hippocampus (Chebat et al., 2007a). This suggests that the taxing demands of learning to navigate without vision drive hippocampal plasticity and volumetric changes in CB (Chebat et al., 2007b, c; Leporé et al., 2010; Ptito et al., 2008b). In addition, a cascade of other non-visual brain structures undergoes anatomical (Yang et al., 2014), morphological (Park et al., 2009), and morphometric (Aguirre et al., 2016; Maller et al., 2016; Rombaux et al., 2010; Tomaiuolo et al., 2014) alterations, as well as modifications in functional connectivity (Heine et al., 2015).
If congenital blindness indeed leads only to a perceptual deficit, and not to a problem in the cognitive representation of space, then when perceptual information that is usually available only through the visual modality is delivered as tactile or auditory information via sensory substitution devices, CB should be able to perform as well as their sighted counterparts using vision in certain spatial tasks. This is indeed what we find (for review, see Chebat et al., 2018a). Sensory substitution devices (SSDs) provide visual information via the tactile or auditory channel (for review, see Chebat et al., 2018b), and CB are able to navigate efficiently via the auditory (Chebat et al., 2015; Maidenbaum et al., 2014a, b, 2018) or tactile sense (Chebat et al., 2007c, 2011; Kupers et al., 2010). They can locate objects (Auvray et al., 2007) and navigate around them (Chebat et al., 2011). They can even match (Chebat et al., 2015, 2017) or outperform (Chebat et al., 2007c) their sighted counterparts in visuo-spatial tasks using SSDs.
Work from my laboratory using SSDs in route recognition demonstrated the recruitment of primary visual areas in CB individuals, but not in sighted blindfolded or late-blind (LB) individuals (Kupers et al., 2010). In an obstacle avoidance task using a visuo-tactile SSD, we found that the learning rates for obstacle detection and avoidance correlated significantly with activity in regions of the dorsal-stream network in both sighted controls (SC) and CB, whereas for detection SC relied more on medial temporal lobe structures and CB on sensorimotor areas. This may indicate that CB rely more on motor memory than their sighted counterparts to perform the same type of spatial tasks. In line with these results, CB, late-blind and blindfolded sighted participants learned to use an SSD to navigate in life-size mazes. We found that retinotopic regions, including both dorsal-stream regions (e.g., V6) and primary regions (e.g., peripheral V1), were selectively recruited for non-visual navigation once participants had mastered the use of this SSD, demonstrating rapid plasticity for non-visual navigation via SSD (Maidenbaum et al., 2018).
In this special issue, some of the leading experts in the field review spatial navigation in the absence of vision from a variety of approaches, to advance our understanding of multisensory spatial knowledge acquisition. We explore how congenitally blind, late-blind and sighted people use multisensory information to perceive space, and discuss their abilities, strategies, and corresponding mental representations.

Outline of Special Issue
In the opening chapter, Zuanazzi and Noppeney provide an overview of the literature on the role of spatial attention and expectation on the perception of space via different sensory modalities. This chapter reviews the effects of attention and expectation on auditory and visual perception of space, as well as endogenous and exogenous sources of attention across different sensory modalities while attempting to disentangle the roles of expectation and attention on spatial perception.
The second chapter, by Martolini et al., explores the effects of audio-motor training on the improvement of spatial abilities in blindfolded sighted adults. They used an active audio-motor training task, based on the reproduction of different movements combined with sounds, to improve the recognition of complex shapes presented as auditory stimuli. They propose that audio-motor training can improve the perception of auditory shapes, and that this process, which is usually mediated by vision, can also be achieved through auditory feedback.
In the next contribution, Hanneton et al. explore sensorimotor coupling with the environment for sensorimotor rehabilitation and sensory substitution. They used a sensorimotor task to investigate accuracy when coupling auditory perceptual information with movements of the hand vs. movements of the head. SSDs often use cameras that are either head-mounted (see, for example, Chebat et al., 2007c, 2011, 2020) or hand-held (see Chebat et al., 2015). The results of this study help us better understand the mechanisms involved in sensorimotor coding for sensory substitution, and in multisensory integration in general.
Sensory deprivation from birth leads to a heightened sense of pain (Slimani et al., 2013). So, how exactly does visual information mediate the way we perceive nociceptive stimuli? In the fourth article, Manfron and colleagues explore the relationship between nociceptive crossmodal interactions and the visually perceived proximity between the visual stimuli and the limb on which nociceptive stimuli are applied.
Humans can transfer non-visual spatial knowledge between real and virtual environments, and CB participants can use sensory substitution-guided navigation to extract spatial information from the virtual world and apply it to significantly improve their behavioral performance in the real world, and vice versa (Chebat et al., 2017). It is unknown, however, how different types of virtual environments influence our ability to learn and transfer spatial information to the real world. In the fifth article, Hejtmanek et al. explore the transfer of spatial information between real and virtual environments using different forms of immersive technology; their immersive virtual reality setup includes a treadmill, enabling the use of proprioceptive cues.
The next chapter features research by Santoro et al. They investigated whether encoding a spatial layout through verbal cues (spatial description) or motor cues (physical exploration of the environment) differently affects spatial navigation within a real, room-sized environment in blindfolded sighted and late-blind participants. They show that encoding the environment through physical movement is more effective than verbal description in supporting active navigation.
Cacciamani et al.'s chapter seeks evidence of a leftward bias in a non-visual haptic object location memory task and assesses the influence of task-irrelevant sounds on this task. The results show that participants exhibit a leftward placement bias on no-sound trials. On sound trials, this leftward bias was corrected: placements were faster and more accurate, regardless of the direction of the sound.
In the closing chapter Ciricugno et al. investigate whether an abnormal binocular childhood experience affects spatial attention in the haptic modality, thus reflecting a supramodal effect of attention. They compared the performance of normally sighted, strabismic and early monocular blind participants in a visual and a haptic line bisection task. Their findings shed light on the mechanisms involved in pseudoneglect in the visual and haptic modalities.
Together these chapters represent a snapshot of the state of our understanding of the multisensory representation of space. I hope you enjoy reading them as much as I enjoyed the process of editing this special volume of Multisensory Research.