Abstract

Preliminary evidence showed reduced temporal sensitivity (i.e., a larger temporal binding window) to audiovisual asynchrony in obesity. Our aim was to extend this investigation to visuotactile stimuli, comparing healthy-weight individuals and individuals with obesity in a simultaneity judgment task. We found that individuals with obesity had a larger temporal binding window than healthy-weight individuals, meaning that they tend to integrate visuotactile stimuli over an extended range of stimulus onset asynchronies. Our finding provides evidence for a more pervasive impairment of the temporal discrimination of co-occurring stimuli, which might affect multisensory integration in obesity. We discuss our results with reference to the possible role of atypical oscillatory neural activity and structural anomalies in affecting the perception of simultaneity between multisensory stimuli in obesity. Finally, we highlight the urgency of a deeper understanding of multisensory integration in obesity for at least two reasons. First, multisensory bodily illusions might be used to manipulate body dissatisfaction in obesity. Second, multisensory integration anomalies in obesity might lead to an altered perception of food, encouraging overeating behaviours.

In: Multisensory Research
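
A note on quantification (illustrative only, as the abstract does not state the authors' fitting procedure): in simultaneity judgment tasks the temporal binding window (TBW) is commonly estimated by modelling the proportion of 'simultaneous' responses as a function of stimulus onset asynchrony (SOA), for example with a Gaussian,

\[
p(\text{``simultaneous''} \mid \mathrm{SOA}) \;\approx\; A \exp\!\left(-\frac{(\mathrm{SOA}-\mu)^{2}}{2\sigma^{2}}\right),
\]

where \(\mu\) marks the point of subjective simultaneity and \(\sigma\) the temporal tolerance; the TBW is then taken as the range of SOAs over which the fitted curve exceeds a fixed criterion (e.g., a proportion of the amplitude \(A\)), so a larger fitted \(\sigma\) corresponds to the wider window reported for individuals with obesity.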

Abstract

Are alternation and co-occurrence of stimuli of different sensory modalities conspicuous? In a novel audio-visual oddball paradigm, the P300 was used as an index of the allocation of attention to investigate stimulus- and task-related interactions between modalities. Specifically, we assessed effects of modality alternation and the salience of conjunct oddball stimuli that were defined by the co-occurrence of both modalities. We presented (a) crossmodal audio-visual oddball sequences, where both oddballs and standards were unimodal, but of a different modality (i.e., visual oddball with auditory standard, or vice versa), and (b) oddball sequences where standards were randomly of either modality while the oddballs were a combination of both modalities (conjunct stimuli). Subjects were instructed to attend to one of the modalities (whether part of a conjunct stimulus or not). In addition, we also tested specific attention to the conjunct stimuli. P300-like responses occurred even when the oddball was of the unattended modality. The pattern of event-related potential (ERP) responses obtained with the two crossmodal oddball sequences switched symmetrically between stimulus modalities when the task modality was switched. Conjunct oddballs elicited no oddball response if only one modality was attended. However, when conjunctness was specifically attended, an oddball response was obtained. Crossmodal oddballs capture sufficient attention even when not attended. Conjunct oddballs, however, are not sufficiently salient to attract attention when the task is unimodal. Even when specifically attended, the processing of conjunctness appears to involve additional steps that delay the oddball response.

In: Multisensory Research

Abstract

Past studies suggest that learning a spatial environment by navigating on a desktop computer can lead to significant acquisition of spatial knowledge, although typically less than navigating in the real world. Exactly how this might differ when learning in immersive virtual interfaces that offer a rich set of multisensory cues remains to be fully explored. In this study, participants learned a campus building environment by navigating (1) the real-world version, (2) an immersive version involving an omnidirectional treadmill and head-mounted display, or (3) a desktop version using a mouse and a keyboard. Participants first navigated the building in one of the three different interfaces and, afterward, navigated the real-world building to assess information transfer. To determine how well they learned the spatial layout, we measured path length, visitation errors, and pointing errors. Both virtual conditions resulted in significant learning and transfer to the real world, suggesting their efficacy in mimicking some aspects of real-world navigation. Overall, real-world navigation outperformed both immersive and desktop navigation, with these effects particularly pronounced early in learning. This was also suggested in a second experiment involving transfer from the real world to immersive virtual reality (VR). Analysis of effect sizes when going from virtual conditions to the real world suggested a slight advantage for immersive VR over desktop in terms of transfer, although at the cost of an increased likelihood of dropout. Our findings suggest that virtual navigation results in significant learning, regardless of the interface, with immersive VR providing some advantage when transferring to the real world.

In: Multisensory Research

Abstract

During exposure to Virtual Reality (VR), a sensory conflict may be present, whereby the visual system signals that the user is moving in a certain direction with a certain acceleration, while the vestibular system signals that the user is stationary. In order to reduce this conflict, the brain may down-weight vestibular signals, which may in turn affect vestibular contributions to self-motion perception. Here we investigated whether vestibular perceptual sensitivity is affected by VR exposure. Participants’ ability to detect artificial vestibular inputs was measured during optic flow or random motion stimuli on a VR head-mounted display. Sensitivity to vestibular signals was significantly reduced when optic flow stimuli were presented, but, importantly, this was only the case when both visual and vestibular cues conveyed information on the same plane of self-motion. Our results suggest that the brain dynamically adjusts the weight given to incoming sensory cues for self-motion in VR; however, this depends on the congruency of the visual and vestibular cues.

In: Multisensory Research

Abstract

Perspective plays an important role in the creation and appreciation of depth on paper and canvas. Paintings of extant scenes are interesting objects for studying perspective, because such paintings provide insight into how painters apply different aspects of perspective in creating highly admired works. In this regard, the paintings of the Piazza San Marco in Venice produced by Canaletto in the eighteenth century are of particular interest because of the Piazza’s extraordinary geometry, and the fact that Canaletto produced a number of paintings from similar but not identical viewing positions throughout his career. Canaletto is generally regarded as a great master of linear perspective. Analysis of nine paintings shows that Canaletto almost perfectly constructed perspective lines and vanishing points in his paintings. Accurate reconstruction is virtually impossible from observation alone because of the irregular quadrilateral shape of the Piazza. Use of constructive tools is discussed. The geometry of Piazza San Marco is misjudged in three paintings, calling their authenticity into question. Sizes of buildings and human figures deviate from the rules of linear perspective in many of the analysed paintings. Shadows are stereotypical in all of the analysed paintings, and even impossible in two of them. The precise perspective lines and vanishing points, in combination with the variety of sizes for buildings and human figures, may provide insight into the production method employed and the perceptual experience of a given scene.

In: Art & Perception

Abstract

Attention (i.e., task relevance) and expectation (i.e., signal probability) are two critical top-down mechanisms guiding perceptual inference. Attention prioritizes processing of information that is relevant for observers’ current goals. Prior expectations encode the statistical structure of the environment. Research to date has mostly conflated spatial attention and expectation. Most notably, the Posner cueing paradigm manipulates spatial attention using probabilistic cues that indicate where the subsequent stimulus is likely to be presented. Only recently have studies attempted to dissociate the mechanisms of attention and expectation and characterized their interactive (i.e., synergistic) or additive influences on perception. In this review, we will first discuss methodological challenges that are involved in dissociating the mechanisms of attention and expectation. Second, we will review research that was designed to dissociate attention and expectation in the unisensory domain. Third, we will review the broad field of crossmodal endogenous and exogenous spatial attention that investigates the impact of attention across the senses. This raises the critical question of whether attention relies on amodal or modality-specific mechanisms. Fourth, we will discuss recent studies investigating the role of both spatial attention and expectation in multisensory perception, where the brain constructs a representation of the environment based on multiple sensory inputs. We conclude that spatial attention and expectation are closely intertwined in almost all circumstances of everyday life. Yet, despite their intimate relationship, attention and expectation rely on partly distinct neural mechanisms: while attentional resources are mainly shared across the senses, expectations can be formed in a modality-specific fashion.

In: Multisensory Research

Abstract

The last few years have seen an explosive growth of research interest in the crossmodal correspondences, the sometimes surprising associations that people experience between stimuli, attributes, or perceptual dimensions, such as between auditory pitch and visual size, or elevation. To date, the majority of this research has tended to focus on audiovisual correspondences. However, a variety of crossmodal correspondences have also been demonstrated with tactile stimuli, involving everything from felt shape to texture, and from weight through to temperature. In this review, I take a closer look at temperature-based correspondences. The empirical research not only supports the existence of robust crossmodal correspondences between temperature and colour (as captured by everyday phrases such as ‘red hot’) but also between temperature and auditory pitch. Importantly, such correspondences have (on occasion) been shown to influence everything from our thermal comfort in coloured environments through to our response to the thermal and chemical warmth associated with stimulation of the chemical senses, as when eating, drinking, and sniffing olfactory stimuli. Temperature-based correspondences are considered in terms of the four main classes of correspondence that have been identified to date, namely statistical, structural, semantic, and affective. The hope is that gaining a better understanding of temperature-based crossmodal correspondences may one day also potentially help in the design of more intuitive sensory-substitution devices, and support the delivery of immersive virtual and augmented reality experiences.

In: Multisensory Research

Abstract

Human body sense is surprisingly flexible: in the Rubber Hand Illusion (RHI), precisely administered visuo-tactile stimulation elicits a sense of ownership over a fake hand. The general consensus is that there are certain semantic top-down constraints on which objects may be incorporated in this way: in particular, to-be-embodied objects should be structurally similar to a visual representation stored in an internal body model. However, empirical evidence shows that the sense of ownership may extend to objects strikingly distinct in morphology and structure (e.g., robotic arms), and the hypothesis about the relevance of appearance lacks direct empirical support. Probabilistic multisensory integration approaches constitute a promising alternative. However, recent Bayesian models of the RHI place overly strict limits on the factors that may influence the likelihood and prior probability distributions. In this paper, I analyse how Bayesian models of the RHI could be extended. The introduction of skin-based spatial information can account for the cross-compensation of sensory signals giving rise to the RHI. Furthermore, the addition of Bayesian Coupling Priors, which depend on (1) learned internal models of the relatedness (coupling strength) of sensory cues, (2) the scope of temporal binding windows, and (3) the extension of peripersonal space, would allow quantification of individual tendencies to integrate divergent visual and somatosensory signals. Such an extension of Bayesian models would yield an empirically testable proposition that accounts comprehensively for a wide spectrum of RHI-related phenomena and renders appearance-oriented internal body models explanatorily redundant.

In: Multisensory Research
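
To make the proposed coupling prior concrete, here is a minimal sketch of the standard Gaussian coupling-prior formulation from the cue-integration literature (the notation is illustrative and not taken from the paper). Given noisy visual and tactile position estimates \(x_V\) and \(x_T\) with variances \(\sigma_V^2\) and \(\sigma_T^2\), and a Gaussian prior of variance \(\sigma_C^2\) on the discrepancy between the underlying signals, the maximum-a-posteriori estimates pull each cue towards the other:

\[
\hat{s}_V = x_V + \frac{\sigma_V^{2}}{\sigma_V^{2}+\sigma_T^{2}+\sigma_C^{2}}\,(x_T - x_V),
\qquad
\hat{s}_T = x_T + \frac{\sigma_T^{2}}{\sigma_V^{2}+\sigma_T^{2}+\sigma_C^{2}}\,(x_V - x_T).
\]

As \(\sigma_C^{2} \to 0\) the cues are fully fused (mandatory integration), whereas as \(\sigma_C^{2} \to \infty\) they are treated as independent; a subject-specific coupling variance, possibly modulated by the width of the temporal binding window and the extent of peripersonal space as the abstract proposes, would quantify individual tendencies to integrate divergent visual and somatosensory signals.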

Abstract

The need to design products that engage several senses has been increasingly recognised by design and marketing professionals. Many works analyse the impact of sensory stimuli on the hedonic, cognitive, and emotional responses of consumers, as well as on their satisfaction and intention to purchase. However, there is much less information about the utilitarian dimension related to a non-reflective sensory analysis of the tangible elements of the experience, the sequential role played by different senses, and their relative importance. This work analyses the sensory dimension of consumer interactions in shops. Consumers were filmed in two ceramic tile shops and their behaviour was analysed according to a previously validated checklist. The sequence of actions, their frequency of occurrence, and the duration of inspections were recorded, and consumers were classified according to their sensory exploration strategies. Results show that inspection patterns are intentional but shift throughout the interaction. Considering the whole sequence, vision is the dominant sense, followed by touch. However, sensory dominance varies throughout the sequence. Differences in dominance appear both between the senses and within the senses of vision, touch, and audition. Cluster analysis classified consumers into two groups: those who were more interactive and those who were visual, passive evaluators. These results are important for understanding consumer interaction patterns, which senses are involved (including their importance and hierarchy), and which sensory properties of tiles are evaluated during the shopping experience. Moreover, this information is crucial for setting design guidelines to improve sensory interactions and bridge sensory demands with product features.

In: Multisensory Research