Search Results

Sensory substitution devices (SSDs) aim to replace or assist one or several functions of a deficient sensory modality by means of another sensory modality. Despite the numerous studies and research programs devoted to their development and integration, SSDs have failed to live up to their goal of allowing one to ‘see with the skin’ (White et al.) or to ‘see with the brain’ (Bach-y-Rita et al.). These somewhat peremptory claims, as well as the research conducted so far, rest on an implicit perceptual paradigm. This perceptual assumption posits an equivalence between using an SSD and perceiving through a particular sensory modality. Our aim is to provide an alternative model, which defines the integration of SSDs as being closer to culturally-implemented cognitive extensions of existing perceptual skills, such as reading. In this talk, we will show why the analogy with reading provides a better explanation of the actual findings, that is, both of the positive results achieved and of the limitations observed across the field of research on SSDs. The parallel with the most recent two-route and interactive models of reading (e.g., Dehaene et al.) generates a radically new way of approaching these results, by stressing the dependence of integration on the existing perceptual-semantic route. In addition, it enables us to generate innovative research questions and specific predictions which set the stage for future work.

In: Seeing and Perceiving

Visual-to-tactile sensory substitution devices are designed to assist visually impaired people by converting visual stimuli into tactile stimuli. The important claim has been made that, after training with these devices, the tactile stimuli can be moved from one body surface to another without any decrease in performance. This claim, although recurrent, has never been empirically investigated. Moreover, studies in the field of tactile perceptual learning suggest that performance improvement transfers only to body surfaces that are closely represented in the somatosensory cortex, i.e., adjacent or homologous contralateral body surfaces. However, these studies have mainly used discrimination tasks with stimuli varying along only one feature (e.g., the orientation of gratings), whereas in sensory substitution the tactile information consists of more complex stimuli. The present study investigated the extent to which tactile letter learning transfers across body surfaces. Participants first underwent a baseline session in which the letters were presented on their belly, thigh, and shin. They were subsequently trained on only one of these body surfaces, and then re-tested on all of them in a post-training session. The results revealed that performance improvement was the same for both the trained and the untrained surfaces. Moreover, this transfer of perceptual learning was equivalent for adjacent and non-adjacent body surfaces, suggesting that tactile learning transfer occurs independently of the distance on the body. A control study consisting of the same baseline and post-training sessions, without training in between, revealed weaker improvement between the two sessions. These results support the claim that training with sensory substitution devices leads to relative independence from the stimulated body surface.

In: Multisensory Research

When we interact with objects in our environment, as a general rule we are not aware of the proximal stimulation they provide; rather, we directly experience the external object. This process of assigning an external cause is known as distal attribution. It is extremely difficult to measure how distal attribution emerges because it arises so early in life and appears to be automatic. Sensory substitution systems make it possible to measure this process as it occurs online. With these devices, objects in the environment produce novel proximal stimulation patterns, and individuals have to establish the link between the proximal stimulation and the distal object. This review disentangles the contributing factors that allow the nervous system to assign a distal cause, thereby creating the experience of an external world. In particular, it highlights the role of the assumption of a stable world, the role of movement, and finally that of calibration. From the existing sensory substitution literature it appears that distal attribution breaks down when one of these principles is violated, and as such the review provides an important piece of the puzzle of distal attribution.

In: Multisensory Research

Sensory substitution devices were developed in the context of perceptual rehabilitation: they aim to compensate for one or several functions of a deficient sensory modality by converting stimuli normally accessed through that modality into stimuli accessible by another sensory modality. For instance, they can convert visual information into sounds or tactile stimuli. In this article, we review the studies that investigated individual differences at the behavioural, neural, and phenomenological levels when using a sensory substitution device. We highlight how taking individual differences into account has consequences for the optimization and learning of sensory substitution devices. We also discuss the extent to which these studies allow a better understanding of the experience with sensory substitution devices, and in particular how the resulting experience is not akin to a single sensory modality. Rather, it should be conceived as a multisensory experience, involving both perceptual and cognitive processes, and emerging from each user’s pre-existing sensory and cognitive capacities.

In: Multisensory Research

Sensory substitution devices (SSDs) are typically used to restore the functionality of a sensory modality that has been lost, such as vision for the blind, by recruiting another sensory modality such as touch or audition. Sensory substitution has given rise to many debates in psychology, neuroscience and philosophy regarding the nature of experience when using SSDs. Questions first arose as to whether the experience of sensory substitution is best characterised by the substituted information, the substituting information, or a multisensory combination of the two. More recently, parallels have been drawn between sensory substitution and synaesthesia, a rare condition in which individuals involuntarily experience a percept in one sensory or cognitive pathway when another one is stimulated. Here, we explore the efficacy of understanding sensory substitution as a form of ‘artificial synaesthesia’. We identify several problems with previous suggestions for a link between these two phenomena. Furthermore, we find that sensory substitution does not fulfil the essential criteria that characterise synaesthesia. We conclude that sensory substitution and synaesthesia are independent of each other, and thus that the ‘artificial synaesthesia’ view of sensory substitution should be rejected.

In: Multisensory Research

We investigated spontaneous crossmodal correspondences between audition and touch in both blind and sighted people. In four experiments, we tested the interactions between the direction of tactile movement (proximal–distal vs. distal–proximal movement on the fingertip) and change in auditory frequency (increasing vs. decreasing pitch). We measured the compatibility effect between congruent stimuli (proximal–distal tactile movement and increasing pitch, or distal–proximal tactile movement and decreasing pitch) and incongruent stimuli (i.e., the reverse associations). The selective attention method, commonly used to test crossmodal correspondences, requires participants to focus on tactile or auditory signals while ignoring the other signal presented simultaneously. The results obtained with this method did not reveal any significant compatibility effect. However, a variant of the implicit association task (IAT; e.g., Parise and Spence) that relies on the assignment of stimuli to shared response buttons did reveal a significant compatibility effect. This effect was similar whether the arm was placed vertically or horizontally, that is, whether or not the distal–proximal tactile movement corresponded to the free movement of an object subjected to gravity. Finally, in the IAT protocol, similar effects were obtained in blind and in sighted people, and in both groups the crossmodal correspondence effect was obtained independently of the arm’s position. These results have methodological implications for the testing of crossmodal correspondences and for the design of sensory substitution devices. They demonstrate the relevance of using spontaneous crossmodal correspondences, and not just arbitrary associations, to code aspects of the original signals in conversion systems for blind people.

In: Multisensory Research

Coding an object’s or event’s position in the environment involves the recruitment of multiple reference frames or coordinate systems. Generally, retino-, head-, trunk-, arm- and object-centred representations have been shown to influence how we estimate relative object position. Visual information is primarily coded in retinotopic or spatiotopic frames, while tactile information does not provide such a one-to-one mapping between position in space and receptor activation. The question asked here is how tactile information presented to the hand, which is able to change its orientation in space and in body coordinates, influences the interpretation of tactile letter stimuli. As letters are generally perceived visually, one might assume that, when interpreting a letter traced on the skin, the perceptual system employs a visual frame of reference by default, or at least a simple egocentric perspective. However, previous findings suggest that for cutaneous tracing the perspective taken is influenced in a complex manner by factors such as location of stimulation (as opposed to receptor type), limb orientation, location of stimulation relative to other body parts, and object coordinates (Parsons and Shimojo). In three experiments we explored the initial interpretation of the letter stimuli d, b, p, and q when presented to the hand and fingertips. These stimuli are ambiguous along the left/right and top/bottom axes and are therefore ideal for inferring the reference frame adopted. The current findings suggest that the orientation and position of the hand relative to the body alter the reference frame selected when interpreting tactile letter stimuli. The results are consistent with a multiple-reference-frame model of information processing and allow us to disentangle the elements involved in the perception of cutaneous stimulation at our fingertips.

In: Multisensory Research

Understanding the processes underlying sensorimotor coupling with the environment is crucial for sensorimotor rehabilitation and sensory substitution. With such an understanding, devices that provide novel sensory feedback consequent to body movement can be optimized so as to enhance motor performance for particular tasks. The aim of the study reported here was to investigate audio-motor coupling when the auditory experience is linked to movements of the head or the hands. In response to sounds, the participants had to localize and reach a virtual source with their dominant hand. An electromagnetic system recorded the position and orientation of the participants’ head and hands. This system was connected to a 3D audio system that provided binaural auditory feedback on the position of a virtual listener located on the participants’ body. The listener’s position was computed either from the hands or from the head. In the hand condition, the virtual listener was placed on the dominant hand (the one used to reach the target) in Experiment 1, and on the non-dominant hand, which was constrained so as to have amplitude and degrees of freedom similar to those of the head, in Experiment 2. The results revealed that, in both experiments, the participants were able to localize a source within the 3D auditory environment. Performance varied as a function of the effector’s degrees of freedom and the spatial coincidence between sensor and effector. The results also allowed us to characterize the kinematics of the hand and head, and how they change with the audio-motor coupling condition and with practice.

In: Multisensory Research

Visual information is predominantly interpreted within an eye-centered reference frame. Tactile information, on the other hand, can be interpreted within different reference frames, i.e., local-surface-centered or whole-body-centered. An important question is whether, given the different possibilities, each observer has a natural reference frame that they consistently adopt across conditions, or whether they can freely adopt several reference frames and switch from one to another without cost. Recognition of ambiguous asymmetrical tactile letters (e.g., b, d, p, q) allows us to interrogate the different reference frames adopted by observers when interpreting tactile information. For such stimuli drawn on the skin, recognition requires assigning top–bottom, left–right and front–back axes to the letter. Across several experiments, participants had to recognize these letters when presented on different body surfaces, either with a freely adopted reference frame or with an imposed one. In the unconstrained condition, participants consistently adopted one reference frame, with clear between-participant differences. When required to adopt a new reference frame, participants interpreted the letters with a substantial cost in response accuracy and latency, indicating that the freely adopted reference frame corresponded to a natural reference frame rather than an arbitrary choice. This cost was greater for the freely adopted egocentric (head- or body-centered) reference frame than for the non-egocentric (off-centered) one. By training participants with a particular set of symbols, we then tested for generalization to novel stimuli and body surfaces. Our results have implications for the design of visuo-tactile sensory substitution devices and for understanding the emergence of the distal attribution phenomenon.

In: Multisensory Research