Browse results

You are looking at 1–10 of 3,624 items for:

  • Upcoming Publications
  • Just Published
  • Search level: Chapters/Articles

Abstract

Social interactions often require the simultaneous processing of emotions from facial expressions and speech. However, the development of the gaze behavior used for emotion recognition, and the effects of speech perception on the visual encoding of facial expressions, are less well understood. We therefore conducted a word-primed face categorization experiment in which participants from multiple age groups (six-year-olds, 12-year-olds, and adults) categorized target facial expressions as positive or negative after priming with valence-congruent or -incongruent auditory emotion words, or no words at all. We recorded our participants’ gaze behavior during this task using an eye-tracker, and analyzed the data with respect to the fixation time toward the eyes and mouth regions of faces, as well as the time until participants made the first fixation within those regions (time to first fixation, TTFF). We found that the six-year-olds showed significantly higher accuracy in categorizing congruently primed faces compared to the other conditions. The six-year-olds also showed faster response times, shorter total fixation durations, and shorter TTFFs in all primed trials, regardless of congruency, as compared to unprimed trials. We also found that while adults looked first, and longer, at the eyes as compared to the mouth regions of target faces, children did not exhibit this gaze behavior. Our results thus indicate that young children are more sensitive than adults or older children to auditory emotion word primes during the perception of emotional faces, and that the distribution of gaze across the regions of the face changes significantly from childhood to adulthood.
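
As an illustration of the two gaze measures defined above, the sketch below (in Python; the fixation record layout and AOI labels are assumptions for illustration, not the authors' analysis pipeline) computes total fixation duration and TTFF for one area of interest (AOI), such as the eyes or mouth region:

    # Minimal sketch: each fixation is (onset_ms, duration_ms, aoi_label),
    # with onsets measured from trial start and fixations ordered in time.
    def aoi_measures(fixations, aoi):
        """Return (total fixation duration, time to first fixation) in ms for one AOI."""
        in_aoi = [f for f in fixations if f[2] == aoi]
        total_ms = sum(duration for _, duration, _ in in_aoi)   # summed dwell time in the AOI
        ttff_ms = in_aoi[0][0] if in_aoi else None               # onset of the first AOI fixation
        return total_ms, ttff_ms

    # Example trial with three fixations:
    trial = [(120, 200, "mouth"), (380, 250, "eyes"), (700, 180, "eyes")]
    print(aoi_measures(trial, "eyes"))   # -> (430, 380)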

In: Multisensory Research
Author: John H. Wearden

Abstract

This article presents a translation into English of most of a publication by the French philosopher Paul Janet, which appeared in 1877 (Janet, P., Une illusion d’optique interne, Revue Philosophique de la France et de l’Étranger, 3, 497–502). Here, it is proposed that the rate of passage of subjective time is proportional to the age of the person making the judgement. Janet further proposes that this proportionality will be most marked when judging time intervals remote from the present, such as past years or decades. He also suggests that the ‘acceleration’ of the apparent passage of time with age can appear to reverse when old people consider the length of time that they believe to be left in their lives. A short commentary discusses how results from modern research on the apparent passage of time and age can be linked to Janet’s proposal.
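
Stated as a formula (an editorial sketch in assumed notation, not Janet's own), the proposal is that the felt rate of passage v at age a grows in proportion to a, so a fixed objective interval d feels correspondingly shorter:

    % Editorial sketch of Janet's proportionality (symbols v, a, d are assumed notation):
    \[
      v(a) \propto a
      \qquad\Longrightarrow\qquad
      d_{\mathrm{felt}}(a) \propto \frac{d_{\mathrm{objective}}}{a}
    \]
    % e.g., on this account a year at age 50 would feel one fifth as long as a year at age 10.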

In: Timing & Time Perception

Abstract

What sound quality has led to the exclusion of infrasound from sound in the conventional hearing range? We examined whether the temporal segregation of pressure pulses is a distinctive property and evaluated this perceptual limit via an adaptive psychophysical procedure for pure tones and carriers with different envelopes. Further, to examine across-domain similarity and individual covariation of this limit, here called the critical segregation rate (CSR), it was also measured for various periodic visual and vibrotactile stimuli. Results showed that sequential auditory or vibrotactile stimuli separated by at least ~80‒90 ms (~11‒12-Hz repetition rates) are perceived as segregated from one another. While this limit did not statistically differ between these two modalities, it was significantly lower than the ~150 ms necessary to perceptually segregate successive visual stimuli. For all three sensory modalities, stimulus periodicity was the main factor determining the CSR, which apparently reflects the neural recovery times of the different sensory systems. Across all experimental conditions, significant within- and across-modality individual CSR correlations were observed, despite the visual CSR (mean: 6.8 Hz) being significantly lower than that of both other modalities. The auditory CSR was found to be significantly lower than the frequency above which sinusoids start to elicit a tonal quality (19 Hz; recently published for the same subjects). Returning to our initial question, the latter suggests that the cessation of tonal quality, rather than the segregation of pressure fluctuations, is the perceptual quality that has led to the exclusion of infrasound (sound with frequencies < 20 Hz) from the conventional hearing range.
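
The rates quoted above are simply the reciprocals of the corresponding repetition periods (values taken from the abstract; ~85 ms is used here as the midpoint of the 80‒90 ms range):

    \[
      f_{\mathrm{CSR}} = \frac{1}{T}, \qquad
      \frac{1}{0.085\,\mathrm{s}} \approx 11.8\ \mathrm{Hz}, \qquad
      \frac{1}{0.150\,\mathrm{s}} \approx 6.7\ \mathrm{Hz}
    \]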

Open Access
In: Timing & Time Perception

Abstract

Although it has been demonstrated that multisensory information can facilitate object recognition and object memory, it remains unclear whether such a facilitation effect exists in category learning. To address this issue, comparable car images and sounds were first selected via a discrimination task in Experiment 1. Those selected images and sounds were then used in a prototype category learning task in Experiments 2 and 3, in which participants were trained with auditory, visual, and audiovisual stimuli, and were tested with trained or untrained stimuli from the same categories presented alone or accompanied by a congruent or incongruent stimulus in the other modality. In Experiment 2, when low-distortion stimuli (more similar to the prototypes) were trained, accuracy was higher for audiovisual trials than visual trials, but there was no significant difference between audiovisual and auditory trials. During testing, accuracy was significantly higher for congruent trials than unisensory or incongruent trials, and the congruency effect was larger for untrained high-distortion stimuli than trained low-distortion stimuli. In Experiment 3, when high-distortion stimuli (less similar to the prototypes) were trained, accuracy was higher for audiovisual trials than visual or auditory trials, and the congruency effect was larger for trained high-distortion stimuli than untrained low-distortion stimuli during testing. These findings demonstrated that a higher degree of stimulus distortion resulted in a more robust multisensory effect, and that the categorization of not only trained but also untrained stimuli in one modality could be influenced by an accompanying stimulus in the other modality.
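
One common way to quantify a congruency effect of this kind (an illustrative sketch of a standard measure, not necessarily the exact definition used in the study) is the difference in categorization accuracy between congruent and incongruent audiovisual test trials:

    # Sketch: congruency effect as an accuracy difference (illustrative definition).
    def accuracy(trials):
        """trials: list of (condition, correct) tuples; returns proportion correct."""
        return sum(correct for _, correct in trials) / len(trials)

    def congruency_effect(trials):
        congruent = [t for t in trials if t[0] == "congruent"]
        incongruent = [t for t in trials if t[0] == "incongruent"]
        return accuracy(congruent) - accuracy(incongruent)

    # Example: 0.9 vs 0.7 proportion correct gives a congruency effect of 0.2
    trials = [("congruent", 1)] * 9 + [("congruent", 0)] + \
             [("incongruent", 1)] * 7 + [("incongruent", 0)] * 3
    print(congruency_effect(trials))   # -> 0.2 (up to floating-point rounding)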

In: Multisensory Research

Abstract

A critical component of many immersive experiences in virtual reality (VR) is vection, defined as the illusion of self-motion. Traditionally, vection has been described as a visual phenomenon, but more recent research suggests that it can be influenced by a variety of senses. The goal of the present study was to investigate the role of multisensory cues in vection by manipulating the availability of visual, auditory, and tactile stimuli in a VR setting. To achieve this, 24 adults (mean age = 25.04 years) were presented with a rotating stimulus designed to induce circular vection. All participants completed trials that included a single sensory cue, a combination of two cues, or all three cues presented together. The size of the field of view (FOV) was manipulated across four levels (no visuals, small, medium, full). Participants rated vection intensity and duration verbally after each trial. Results showed that all three sensory cues induced vection when presented in isolation, with visual cues eliciting the highest intensity and longest duration. The presence of auditory and tactile cues further increased vection intensity and duration compared to conditions where these cues were not presented. These findings support the idea that vection can be induced via multiple types of sensory inputs and can be intensified when multiple sensory inputs are combined.

In: Multisensory Research

Abstract

Sound symbolism refers to the association between the sounds of words and their meanings, often studied using the crossmodal correspondence between auditory pseudowords, e.g., ‘takete’ or ‘maluma’, and pointed or rounded visual shapes, respectively. In a functional magnetic resonance imaging study, participants were presented with pseudoword–shape pairs that were sound-symbolically congruent or incongruent. We found no significant congruency effects in the blood oxygenation level-dependent (BOLD) signal when participants were attending to visual shapes. During attention to auditory pseudowords, however, we observed greater BOLD activity for incongruent compared to congruent audiovisual pairs bilaterally in the intraparietal sulcus and supramarginal gyrus, and in the left middle frontal gyrus. We compared this activity to independent functional contrasts designed to test competing explanations of sound symbolism, but found no evidence for mediation via language, and only limited evidence for accounts based on multisensory integration and a general magnitude system. Instead, we suggest that the observed incongruency effects are likely to reflect phonological processing and/or multisensory attention. These findings advance our understanding of sound-to-meaning mapping in the brain.

In: Multisensory Research

Abstract

The understanding of linguistic messages can be made extremely complex by the simultaneous presence of interfering sounds, especially when they are also linguistic in nature. In two experiments, we tested whether visual cues directing attention to spatial or temporal components of speech in noise can improve its identification. The hearing-in-noise task required identification of a five-digit sequence (target) embedded in a stream of time-reversed speech. Using a custom-built device located in front of the participant, we delivered visual cues to orient attention to the location of target sounds and/or their temporal window. In Exp. 1 (n = 14), we validated this visual-to-auditory cueing method in normal-hearing listeners tested under typical binaural listening conditions. In Exp. 2 (n = 13), we assessed the efficacy of the same visual cues in normal-hearing listeners wearing a monaural ear plug, to study the effects of simulated monaural and conductive hearing loss on visual-to-auditory attention orienting. While Exp. 1 revealed a benefit of both spatial and temporal visual cues for hearing in noise, Exp. 2 showed that only the temporal visual cues remained effective during monaural listening. These findings indicate that when acoustic experience is altered, visual-to-auditory attention orienting is more robust for temporal than for spatial attributes of the auditory stimuli. These findings have implications for the relation between the spatial and temporal attributes of sound objects, and for the design of devices that orient audiovisual attention in listeners with hearing loss.

In: Multisensory Research