Studies of deaf or blind subjects often report enhanced perceptual abilities in the remaining senses. Compared to hearing subjects, psychophysical studies have revealed specific superior visual abilities in the early-deaf, as well as enhanced auditory functions in the early-blind. The neural substrate for these superior sensory abilities has been hypothesized to reside in the deprived cerebral cortices that have been reorganized by the remaining sensory modalities through crossmodal plasticity. In this context, it has been proposed that the auditory cortex of the deaf may be recruited to perform visual functions. However, a causal link between supranormal visual performance and visual activity in the reorganized auditory cortex has never been demonstrated. Furthermore, if auditory cortex does mediate the enhanced visual abilities of the deaf, it is unknown whether these functions are distributed uniformly across deaf auditory cortex, or whether specific functions can be differentially localized to distinct portions of the affected cortices. These fundamental questions are of significant clinical importance now that restoration of hearing in prelingually deaf children is possible through cochlear prosthetics. Psychophysical, neuroanatomical, electrophysiological, and functional imaging studies will be described that demonstrate that crossmodal plasticity in auditory cortex underlies the enhanced visual abilities of the early-deaf.
continue to limit) the uptake of all but the simplest tactile information transfer devices by humans (see also Alluisi, 1960; Loomis et al., 2012), no matter whether they happen to be suffering from some form of sensory loss or not (Williams et al., 2011 — see Note 2). It is worth acknowledging
Jutta Billino and Knut Drewing
aging and its neuronal correlates have been provided (Hasher and Zacks, 1988; Park and Reuter-Lorenz, 2009; Salthouse, 1996). In contrast, understanding of perceptual aging lags behind. Sensory decline represents a prevalent age-related change, and losses have been described for essentially all
Clare Jonas, Mary Jane Spiller, Paul B. Hibbard and Michael Proulx
and Schroeder, 2006). We are now in a position to consider how multisensory processing might differ between groups and between individuals — an important question for our understanding of neurodivergent conditions, ageing, and sensory loss. It was with this question in mind that we ran a series of
Alisdair Daws, Robert Huber, Daniel Bergman, Jeremy McIntyre, Paul Moore and Corinne Kozlowski
perception of odour signals during agonistic interactions by blocking the chemo- and mechanoreceptors on the antennae and antennules to prevent reception of relevant cues communicating social status. Individuals fighting an opponent with this loss of sensory information were significantly more likely to
Stefania S. Moro and Jennifer K. E. Steeves
, specifically people with only one eye, it seems reasonable to expect that other intact sensory systems should function to the best of their ability or perhaps even better in order to adapt and compensate for the partial loss of vision. Unilateral eye enucleation (the surgical removal of one eye) is a unique
Bruno Diot, Petra Halavackova, Jacques Demongeot and Nicolas Vuillerme
information for controlling bipedal posture. However, in certain circumstances, such as for persons with impaired balance, a sudden change in sensory input can cause loss of balance and potentially a fall. This was for instance the case for the person with a double partial foot amputation who unsuccessfully
Thomas D. Wright, Jamie Ward, Sarah Simonon and Aaron Margolis
Sensory substitution is the representation of information from one sensory modality (e.g., vision) within another modality (e.g., audition). We used a visual-to-auditory sensory substitution device (SSD) to explore the effect of incongruous (true-)visual and substituted-visual signals on visual attention. In our multisensory sensory substitution paradigm, both visual and sonified-visual information were presented. By making small alterations to the sonified image, but not the seen image, we introduced audio–visual mismatch. The alterations consisted of the addition of a small image (for instance, the Wally character from the ‘Where’s Wally?’ books) within the original image. Participants were asked to listen to the sonified image and identify which quadrant contained the alteration. Monitoring eye movements revealed the effect of the audio–visual mismatch on covert visual attention. We found that participants consistently fixated more, and dwelled for longer, in the quadrant corresponding to the location (in the sonified image) of the target. This effect was not contingent on the participant reporting the location of the target correctly, which indicates a low-level interaction between an auditory stream and visual attention. We propose that this suggests a shared visual workspace that is accessible by visual sources other than the eyes. If this is indeed the case, it would support the development of other, more esoteric, forms of sensory substitution. These could include an expanded field of view (e.g., rear-view cameras), overlaid visual information (e.g., thermal imaging) or restoration of partial visual field loss (e.g., hemianopsia).
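The abstract does not specify the device's sonification scheme, but visual-to-auditory SSDs commonly use a left-to-right scan in which column position maps to time, row position to pitch, and brightness to loudness. A minimal sketch of such a mapping, with purely illustrative parameter values (scan duration, frequency range), is:

```python
# Hedged sketch of a common image-sonification mapping (left-to-right
# scan, row -> pitch, brightness -> loudness). The parameters below are
# illustrative assumptions, not the device's actual settings.

def sonify(image, scan_time=1.0, f_min=500.0, f_max=5000.0):
    """Turn a grayscale image (rows of 0..1 brightness, row 0 = top)
    into (onset_time, frequency_hz, amplitude) triples."""
    n_rows, n_cols = len(image), len(image[0])
    events = []
    for col in range(n_cols):
        onset = scan_time * col / n_cols      # columns map to time
        for row in range(n_rows):
            frac = 1.0 - row / max(n_rows - 1, 1)  # top rows = high pitch
            freq = f_min + frac * (f_max - f_min)
            amp = image[row][col]             # brightness = loudness
            if amp > 0:
                events.append((onset, freq, amp))
    return events

# A single bright pixel in the top-left corner sounds early and high:
print(sonify([[1.0, 0.0],
              [0.0, 0.0]]))   # [(0.0, 5000.0, 1.0)]
```

Under such a scheme, adding a small target image to the sonified (but not the seen) picture perturbs only a localized stretch of the audio stream, which is what allows the audio–visual mismatch to be confined to one quadrant.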
comments on earlier drafts of this paper and to the Science and Medical Research Councils (U.K.) for their support. (Moruzzi and Magoun, 1949) of arousal as a state in which the cortical EEG becomes desynchronised as a result of sensory stimulation (a response which can be facilitated by stimulation
Stefania S. Moro, Laurence R. Harris and Jennifer K. E. Steeves
Previous research has shown that people with one eye have enhanced sound localization and lack the visual dominance commonly found in binocular and monocular viewing controls. These findings suggest cross-sensory adaptation, likely compensating for the loss of binocularity. We assessed whether the advantage given to audition in people with one eye, when the auditory and visual systems were in competition, might also be found when the systems were integrated to make unified judgements. Participants were asked to spatially localize perceptually fused audiovisual events in which the auditory and visual components were spatially disparate, in order to quantify the relative weightings assigned to each system when the systems were integrated. There was no difference in the reliability of localizing unimodal visual and auditory targets between people with one eye and controls. When localizing bimodal targets, the weightings assigned to each sensory modality in both people with one eye and controls were predictable from their unimodal performance, in accordance with the Maximum Likelihood Estimation (MLE) model. People with one eye appear to integrate the auditory and visual components of multisensory events optimally when determining spatial location, despite the fact that they do not show the typical dominance of vision over audition when the two systems are in competition. It is possible that attentional modifications to the processing of each component when they are processed in parallel may represent an adaptive cross-sensory compensatory mechanism for the loss of binocular visual input that does not alter how these signals are integrated.
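The MLE model referred to above weights each cue by its relative reliability (inverse variance), under the standard assumption of independent Gaussian noise on each unimodal estimate. A minimal sketch, with illustrative numbers rather than the study's data, is:

```python
# Sketch of the Maximum Likelihood Estimation (MLE) model of cue
# integration: each cue is weighted by its relative reliability
# (inverse variance), so the more reliable cue dominates the percept.

def mle_combine(s_v, sigma_v, s_a, sigma_a):
    """Optimally fuse visual and auditory location estimates."""
    r_v = 1.0 / sigma_v**2           # visual reliability
    r_a = 1.0 / sigma_a**2           # auditory reliability
    w_v = r_v / (r_v + r_a)          # visual weight
    w_a = 1.0 - w_v                  # auditory weight
    s_hat = w_v * s_v + w_a * s_a    # fused location estimate
    sigma_hat = (1.0 / (r_v + r_a)) ** 0.5  # fused SD: never worse than best cue
    return s_hat, w_v, sigma_hat

# Illustration: vision twice as precise as audition (SD 1 deg vs 2 deg),
# cues 4 deg apart, as in a spatially disparate audiovisual trial.
s_hat, w_v, sigma_hat = mle_combine(s_v=0.0, sigma_v=1.0, s_a=4.0, sigma_a=2.0)
print(round(s_hat, 2), round(w_v, 2), round(sigma_hat, 2))  # 0.8 0.8 0.89
```

Testing whether bimodal weights are predictable from unimodal performance, as in the study, amounts to comparing the empirically measured weights against the `w_v` computed from each observer's unimodal variances.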