Abstract

Sensory Substitution Devices (SSDs) are typically used to restore functionality of a sensory modality that has been lost, such as vision for the blind, by recruiting another sensory modality such as touch or audition. Sensory substitution has given rise to many debates in psychology, neuroscience and philosophy regarding the nature of experience when using SSDs. Questions first arose as to whether the experience of sensory substitution is represented by the substituted information, the substituting information, or a multisensory combination of the two. More recently, parallels have been drawn between sensory substitution and synaesthesia, a rare condition in which individuals involuntarily experience a percept in one sensory or cognitive pathway when another one is stimulated. Here, we explore the validity of understanding sensory substitution as a form of ‘artificial synaesthesia’. We identify several problems with previous suggestions for a link between these two phenomena. Furthermore, we find that sensory substitution does not fulfil the essential criteria that characterise synaesthesia. We conclude that sensory substitution and synaesthesia are independent of each other, and thus the ‘artificial synaesthesia’ view of sensory substitution should be rejected.

In: Multisensory Research
Author: Bence Nanay

Abstract

It has been repeatedly suggested that synesthesia is intricately connected with unusual ways of exercising one’s mental imagery, although it is not always entirely clear what the exact connection is. My aim is to show that all forms of synesthesia are forms of (often very different kinds of) mental imagery and, further, if we consider synesthesia to be a form of mental imagery, we get significant explanatory benefits, especially concerning less central cases of synesthesia where the inducer is not sensory stimulation.

In: Multisensory Research

Abstract

Human timing and interoception are closely coupled. Temporal illusions such as emotion-induced time dilation are therefore profoundly affected by interoceptive processes. Emotion-induced time dilation refers to the effect whereby emotion, especially along the arousal dimension, leads to the systematic overestimation of intervals. The close relation to interoception became evident in previous studies, which showed increased time dilation when participants focused on interoceptive signals. In the present study we show that individuals with particularly high interoceptive accuracy can shield their timing functions, to some degree, from interference by arousal. Participants performed a temporal bisection task with low-arousal and high-arousal stimuli, and subsequently reported their interoceptive accuracy via a questionnaire. A substantial arousal-induced time dilation effect was observed, which was negatively correlated with participants’ interoceptive accuracy. Our findings support a pivotal role of interoception in temporal illusions and are discussed in relation to neuropsychological accounts of interoception.
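
As an editorial illustration of the analysis logic described above (a minimal sketch with invented data and variable names, not the authors' code), one could estimate each participant's bisection point under low and high arousal and correlate the resulting dilation effect with questionnaire scores:

```python
# Hedged sketch: estimating arousal-induced time dilation from temporal-
# bisection data and correlating it with interoceptive-accuracy scores.
# All data below are simulated; the interpolation approach is an assumption.
import numpy as np
from scipy.stats import pearsonr

def bisection_point(durations, p_long):
    """Duration at which p('long') crosses 0.5, by linear interpolation."""
    return float(np.interp(0.5, p_long, durations))

durations = np.array([400, 600, 800, 1000, 1200, 1400, 1600])  # probe durations (ms)
rng = np.random.default_rng(0)
n = 30  # participants
# Proportion of 'long' responses per duration, per participant.
p_long_low = np.sort(rng.uniform(0, 1, (n, 7)), axis=1)  # low-arousal stimuli
p_long_high = np.clip(p_long_low + 0.1, 0, 1)            # high arousal: more 'long'

# Time dilation = leftward shift of the bisection point under high arousal.
dilation = np.array([
    bisection_point(durations, lo) - bisection_point(durations, hi)
    for lo, hi in zip(p_long_low, p_long_high)
])

interoceptive_accuracy = rng.normal(0, 1, n)  # questionnaire scores (simulated)
r, p = pearsonr(interoceptive_accuracy, dilation)
print(f"r = {r:.2f}, p = {p:.3f}")  # the study reports a negative correlation
```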

In: Timing & Time Perception

Abstract

The majority of emotional expressions used in daily communication are multimodal and dynamic in nature. Consequently, one would expect human observers to use specific perceptual strategies to process emotions and to handle their multimodal and dynamic nature. However, our present knowledge of these strategies is scarce, primarily because most studies on emotion perception have not fully covered this variation and have instead used static and/or unimodal stimuli with few emotion categories. To resolve this knowledge gap, the present study examined how dynamic emotional auditory and visual information is integrated into a unified percept. Since there is a broad spectrum of possible forms of integration, both eye movements and accuracy of emotion identification were evaluated while observers performed an emotion identification task in one of three conditions: audio-only, visual-only video, or audiovisual video. In terms of perceptual strategies, the eye movement results showed a shift in fixations toward the eyes and away from the nose and mouth when audio was added. Notably, in terms of task performance, audio-only performance was in most cases significantly worse than video-only and audiovisual performance, whereas performance in the latter two conditions often did not differ. These results suggest that individuals flexibly and momentarily adapt their perceptual strategies to changes in the available information for emotion recognition, and that these changes can be comprehensively quantified with eye tracking.
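
A minimal sketch of how the reported fixation shift could be quantified, assuming a simple fixation table with areas of interest (AOIs); the data format and numbers below are invented, not the authors' pipeline:

```python
# Hedged sketch: proportion of total fixation time per AOI and condition.
import pandas as pd

# Hypothetical fixation log: one row per fixation (invented example data).
fixations = pd.DataFrame({
    "condition":   ["visual", "visual", "audiovisual", "audiovisual"],
    "aoi":         ["mouth",  "eyes",   "eyes",        "nose"],
    "duration_ms": [310,      250,      420,           120],
})

total = fixations.groupby("condition")["duration_ms"].sum()
by_aoi = fixations.groupby(["condition", "aoi"])["duration_ms"].sum()
proportions = by_aoi.div(total, level="condition").rename("prop_fixation_time")
print(proportions)  # expect a larger 'eyes' share when audio is added
```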

In: Multisensory Research

Abstract

Should the vestibular system be counted as a sense? This basic conceptual question remains surprisingly controversial. While it is possible to distinguish specific vestibular organs, it is not clear that this suffices to identify a genuine vestibular sense, because of the supposed absence of a distinctive vestibular personal-level manifestation. The vestibular organs instead contribute to more general multisensory representations, whose names nonetheless suggest a distinct ‘sensory’ contribution. The vestibular case is thus a good example of the challenge of individuating the senses when multisensory interactions are the norm, neurally, representationally and phenomenally. Here, we propose that an additional metacognitive criterion can be used to single out a distinct sense, besides the existence of specific organs and despite the fact that the information coming from these organs is integrated with other sensory information. We argue that it is possible for human perceivers to monitor information coming from distinct organs, despite their integration, as exhibited and measured through metacognitive performance. Based on the vestibular case, we suggest that metacognitive awareness of the information coming from sensory organs constitutes a new criterion for individuating a sense, one that combines physiological and personal criteria. This way of individuating the senses accommodates both the specialised nature of sensory receptors and the intricate multisensory aspect of neural processes and experience, while maintaining the idea that each sense contributes something special to how we monitor the world and ourselves at the subjective level.
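
One common way to quantify the metacognitive performance the abstract appeals to is the type-2 area under the ROC curve (how well confidence tracks accuracy). The paper argues for a metacognitive criterion but does not prescribe this particular measure, so the sketch below is an assumption:

```python
# Hedged sketch: type-2 AUROC as one measure of metacognitive sensitivity.
import numpy as np

def type2_auroc(confidence, correct):
    """P(random correct trial gets higher confidence than a random error)."""
    conf_correct = confidence[correct]
    conf_error = confidence[~correct]
    wins = (conf_correct[:, None] > conf_error[None, :]).mean()
    ties = (conf_correct[:, None] == conf_error[None, :]).mean()
    return wins + 0.5 * ties  # ties count half

rng = np.random.default_rng(1)
correct = rng.random(200) < 0.75                    # simulated trial outcomes
confidence = rng.normal(0, 1, 200) + 0.8 * correct  # confidence tracks accuracy
print(f"type-2 AUROC = {type2_auroc(confidence, correct):.2f}")  # > 0.5
```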

In: Multisensory Research

Abstract

The subjective experience of time has many different facets. The present study focused on time awareness and its antipode, timelessness, as expressions of the extent to which one focuses on the passage of time. In an exploratory mixed-methods study, we investigated different degrees of time awareness and their relation to the perceived valence of the environment, different states of consciousness, and strategies for coping with doing nothing. Thirty-three participants were tested for one hour or more, with sitting and exploring as the within-subjects factor. For each condition, they stayed in one of two libraries characterized by their contemplative architecture. Participants then answered quantitative questionnaires on their time experience and perceived valence and took part in a semi-structured interview. By means of grounded theory, we extracted four types of time awareness from the qualitative data, three of which corresponded to the results of a cluster analysis on the dimensions of time awareness and perceived valence of the environment. In line with previous literature, we found relations between unpleasant high time awareness and boredom, and between pleasant low time awareness and flow. Additionally, the data revealed a pattern of high time awareness and positive perceived valence that was mainly experienced while sitting. Possible connections to states of consciousness such as relaxation, idleness, and a mindful attitude are outlined. Real-life settings, long durations, and level of activation are discussed as factors that may foster this pattern.
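
A minimal sketch of the kind of cluster analysis named above, run on the two questionnaire dimensions (time awareness and perceived valence); the data and the choice of k-means are assumptions, not the authors' analysis:

```python
# Hedged sketch: clustering observations on time awareness and valence.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# One row per observation: [time awareness, perceived valence],
# standardized questionnaire scores (simulated).
scores = rng.normal(0, 1, (33, 2))

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scores)
print(kmeans.labels_)           # cluster membership per observation
print(kmeans.cluster_centers_)  # e.g., a 'high awareness, positive valence' cluster
```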

In: Timing & Time Perception

Abstract

Through meticulous observation of the way light reveals the world’s textures, painters mastered the replication of the visual regularities that we use to infer different materials and their properties. The depiction of bunches of grapes is a particularly interesting case: a convincing portrayal of grapes requires a balanced combination of material properties, such as glossiness, translucency and bloom, as we learn from the 17th-century pictorial recipe by Willem Beurs. These material properties, together with three-dimensionality and convincingness, were rated in experiment 1 on 17th-century paintings, and in experiment 2 on optical mixtures of layers derived from a reconstruction of one of the 17th-century paintings, made following Beurs’s recipe. In experiment 3, only convincingness was rated, again using the 17th-century paintings. With a multiple linear regression, we found that glossiness, translucency and bloom were not good predictors of the convincingness of the 17th-century paintings, although they were for the reconstruction. Overall, convincingness was judged consistently, showing that people agreed on its meaning. However, agreement was higher when the material properties indicated by Beurs were also rated (experiment 1) than when they were not (experiment 3), suggesting that these properties are associated with what makes grapes look convincing. The 17th-century workshop practices showed more variability than standardization in the depiction of grapes, as different combinations of the material properties could lead to a highly convincing representation. Beurs’s recipe provides a list of all the possible optical interactions of grapes, and economic yet effective image cues to render them.
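
A minimal sketch of the multiple linear regression described above, with invented ratings in place of the study data:

```python
# Hedged sketch: predicting rated convincingness from glossiness,
# translucency, and bloom (simulated mean ratings per painting).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n_paintings = 17
X = rng.uniform(1, 7, (n_paintings, 3))          # glossiness, translucency, bloom
convincingness = rng.uniform(1, 7, n_paintings)  # mean convincingness ratings

model = LinearRegression().fit(X, convincingness)
print("R^2 =", round(model.score(X, convincingness), 2))
print("coefficients:", model.coef_.round(2))
# A low R^2 would match the finding that these properties predict
# convincingness poorly for the 17th-century paintings.
```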

In: Art & Perception

Abstract

Multisensory integration is a fundamental form of sensory processing that is involved in many everyday tasks. Those with Attention-Deficit/Hyperactivity Disorder (ADHD) have characteristic alterations to various brain regions that may influence multisensory processing. The overall aim of this work was to assess how adults with ADHD process audiovisual multisensory stimuli during a complex response time task. The paradigm used was a two-alternative forced-choice discrimination task paired with continuous 64-electrode electroencephalography, allowing for the measurement of response time and accuracy in auditory, visual, and audiovisual multisensory conditions. Analysis revealed that those with ADHD (n=10) responded faster than neurotypical controls (n=12) in the auditory, visual, and audiovisual multisensory conditions, while also showing race-model violations in early response-latency quantiles. Adults with ADHD also had more prominent multisensory processing over parietal-occipital brain regions at early post-stimulus latencies, indicating that altered brain structure may have important consequences for audiovisual multisensory processing. The present study is the first to assess how those with ADHD respond to multisensory conditions during a complex response time task, and it demonstrates that adults with ADHD show distinctive multisensory processing on both behavioral response-time measures and neurological measures.
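
The race-model test implied above is standardly Miller's inequality: the redundant-signals (audiovisual) RT distribution is compared against the sum of the unisensory distributions, and values above that bound at early quantiles indicate violation. A minimal sketch with simulated response times, not the study's data:

```python
# Hedged sketch: testing Miller's race-model inequality on simulated RTs.
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative distribution of response times at times t."""
    rts = np.sort(rts)
    return np.searchsorted(rts, t, side="right") / len(rts)

rng = np.random.default_rng(4)
rt_a = rng.normal(420, 60, 200)   # auditory-only RTs (ms)
rt_v = rng.normal(400, 60, 200)   # visual-only RTs
rt_av = rng.normal(340, 50, 200)  # audiovisual RTs (fast redundant responses)

# Evaluate the CDFs at the 5th...95th percentiles of the pooled RTs.
t = np.percentile(np.concatenate([rt_a, rt_v, rt_av]), np.arange(5, 100, 10))
bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)  # Miller's bound
violation = ecdf(rt_av, t) - bound
print(np.round(violation, 2))  # positive values at early quantiles = violation
```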

In: Multisensory Research

Abstract

Cross-modal correspondence is the tendency to systematically map stimulus features across sensory modalities. The current study explored cross-modal correspondence between speech sound and shape (Experiment 1), and whether such an association can influence shape representation (Experiment 2). To closely examine the roles of the two factors combined in speech acoustics, articulation and pitch, we generated two sets of 25 vowel stimuli: a pitch-varying set and a pitch-constant set. Both sets were generated by manipulating articulation (the frontness and height of the tongue body’s position), but differed in whether pitch varied among the sounds within a set. In Experiment 1, participants made a forced choice between a round and a spiky shape to indicate the shape better associated with each sound. Results showed that shape choice was modulated by both articulation and pitch, and we therefore concluded that both factors play significant roles in sound–shape correspondence. In Experiment 2, participants reported their subjective experience of shape accompanying vowel sounds by adjusting an ambiguous shape in the response display. We found that sound–shape correspondence exerts an effect on shape representation by modulating audiovisual interaction, but only for pitch-varying sounds. Pitch information within vowel acoustics therefore plays the leading role when sound–shape correspondence influences shape representation. Taken together, our results suggest the importance of teasing apart the roles of articulation and pitch in understanding sound–shape correspondence.
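
A minimal sketch of one way to test whether forced choices vary with articulation and pitch, using a logistic regression over the 5 × 5 vowel grid; the simulated responses and the model choice are assumptions, not the authors' analysis:

```python
# Hedged sketch: logistic regression of round-vs-spiky choices on
# articulatory frontness, tongue height, and pitch (simulated data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
frontness, height = np.meshgrid(np.arange(5), np.arange(5))  # 5 x 5 vowel grid
pitch = rng.uniform(100, 300, 25)                            # Hz, pitch-varying set
X = np.column_stack([frontness.ravel(), height.ravel(), pitch])
X = (X - X.mean(axis=0)) / X.std(axis=0)                     # standardize predictors

# Simulated forced choices (1 = spiky, 0 = round) depending on all three
# predictors; a median split keeps both classes represented.
tendency = 0.9 * X[:, 0] - 0.5 * X[:, 1] + 0.7 * X[:, 2]
choice = (tendency > np.median(tendency)).astype(int)

model = LogisticRegression().fit(X, choice)
print("coefficients (frontness, height, pitch):", model.coef_.round(2))
```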

In: Multisensory Research