Search Results

You are looking at 1 – 10 of 10 items for

  • Author or Editor: Malika Auvray
  • Search level: All

Sensory substitution devices (SSDs) aim to replace or assist one or several functions of a deficient sensory modality by means of another sensory modality. Despite the numerous studies and research programs devoted to their development and integration, SSDs have failed to live up to their goal of allowing one to ‘see with the skin’ (White et al., ) or to ‘see with the brain’ (Bach-y-Rita et al., ). These somewhat peremptory claims, as well as the research conducted so far, rest on an implicit perceptual paradigm: the assumption that using an SSD is equivalent to perceiving through a particular sensory modality. Our aim is to provide an alternative model, which defines the integration of SSDs as being closer to culturally implemented cognitive extensions of existing perceptual skills, such as reading. In this talk, we will show why the analogy with reading provides a better explanation of the actual findings, that is, of both the positive results achieved and the limitations observed across the field of research on SSDs. The parallel with the most recent two-route and interactive models of reading (e.g., Dehaene et al., ) generates a radically new way of approaching these results by stressing the dependence of integration on the existing perceptual-semantic route. In addition, it enables us to generate innovative research questions and specific predictions which set the stage for future work.

In: Seeing and Perceiving

Sensory substitution devices were developed in the context of perceptual rehabilitation: they aim to compensate for one or several functions of a deficient sensory modality by converting stimuli that are normally accessed through this deficient modality into stimuli accessible to another sensory modality. For instance, they can convert visual information into sounds or tactile stimuli. In this article, we review the studies that investigated individual differences at the behavioural, neural, and phenomenological levels when using a sensory substitution device. We highlight how taking individual differences into account has consequences for the optimization and learning of sensory substitution devices. We also discuss the extent to which these studies allow a better understanding of the experience with sensory substitution devices, and in particular how the resulting experience is not akin to that of a single sensory modality. Rather, it should be conceived as a multisensory experience, involving both perceptual and cognitive processes, and emerging from each user’s pre-existing sensory and cognitive capacities.

In: Multisensory Research

Abstract

Sensory Substitution Devices (SSDs) are typically used to restore the functionality of a sensory modality that has been lost, such as vision for the blind, by recruiting another sensory modality such as touch or audition. Sensory substitution has given rise to many debates in psychology, neuroscience, and philosophy regarding the nature of experience when using SSDs. Questions first arose as to whether the experience of sensory substitution is represented by the substituted information, the substituting information, or a multisensory combination of the two. More recently, parallels have been drawn between sensory substitution and synaesthesia, a rare condition in which individuals involuntarily experience a percept in one sensory or cognitive pathway when another one is stimulated. Here, we explore the efficacy of understanding sensory substitution as a form of ‘artificial synaesthesia’. We identify several problems with previous suggestions for a link between these two phenomena. Furthermore, we find that sensory substitution does not fulfil the essential criteria that characterise synaesthesia. We conclude that sensory substitution and synaesthesia are independent of each other, and thus that the ‘artificial synaesthesia’ view of sensory substitution should be rejected.

In: Multisensory Research

Visual-to-tactile sensory substitution devices are designed to assist visually impaired people by converting visual stimuli into tactile stimuli. The important claim has been made that, after training with these devices, the tactile stimuli can be moved from one body surface to another without any decrease in performance. This claim, although recurrent, has never been empirically investigated. Moreover, studies in the field of tactile perceptual learning suggest that performance improvement transfers only to body surfaces that are closely represented in the somatosensory cortex, i.e., adjacent or homologous contralateral body surfaces. These studies have, however, mainly used discrimination tasks with stimuli varying along a single feature (e.g., the orientation of gratings), whereas, in sensory substitution, tactile information consists of more complex stimuli. The present study investigated the extent to which tactile letter learning transfers across body surfaces. Participants first underwent a baseline session in which the letters were presented on their belly, thigh, and shin. They were subsequently trained on only one of these body surfaces, and then re-tested on all of them in a post-training session. The results revealed that performance improvement was the same for both the trained and the untrained surfaces. Moreover, this transfer of perceptual learning was equivalent for adjacent and non-adjacent body surfaces, suggesting that tactile learning transfer occurs independently of the distance on the body. A control study consisting of the same baseline and post-training sessions, without training in between, revealed weaker improvement between the two sessions. The obtained results support the claim that training with sensory substitution devices results in relative independence from the stimulated body surface.

In: Multisensory Research

When we interact with objects in our environment, as a general rule we are not aware of the proximal stimulation they provide; rather, we directly experience the external object. This process of assigning an external cause is known as distal attribution. It is extremely difficult to measure how distal attribution emerges, because it arises so early in life and appears to be automatic. Sensory substitution systems make it possible to measure the process as it occurs online. With these devices, objects in the environment produce novel proximal stimulation patterns, and individuals have to establish the link between the proximal stimulation and the distal object. This review disentangles the contributing factors that allow the nervous system to assign a distal cause, thereby creating the experience of an external world. In particular, it highlights the role of the assumption of a stable world, the role of movement, and that of calibration. The existing sensory substitution literature indicates that distal attribution breaks down when one of these principles is violated; as such, the review provides an important piece of the puzzle of distal attribution.

In: Multisensory Research

Abstract

Understanding the processes underlying sensorimotor coupling with the environment is crucial for sensorimotor rehabilitation and sensory substitution. With such an understanding, devices that provide novel sensory feedback consequent to body movement can be optimized to enhance motor performance for particular tasks. The aim of the study reported here was to investigate audio-motor coupling when the auditory experience is linked to movements of the head or the hands. The participants had to localize a virtual sound source and reach it with the dominant hand. An electromagnetic system recorded the position and orientation of the participants’ head and hands. This system was connected to a 3D audio system that provided binaural auditory feedback on the position of a virtual listener located on the participants’ body. The listener’s position was computed either from the hands or from the head. In the hand condition, the virtual listener was placed on the dominant hand (the one used to reach the target) in Experiment 1, and on the non-dominant hand, which was constrained so as to have amplitude and degrees of freedom similar to those of the head, in Experiment 2. The results revealed that, in both experiments, the participants were able to localize a source within the 3D auditory environment. Performance varied as a function of the effector’s degrees of freedom and of the spatial coincidence between sensor and effector. The results also allowed us to characterize the kinematics of the hand and head, and how they change with the audio-motor coupling condition and with practice.

In: Multisensory Research