Search Results

Kosuke Motoki, Toshiki Saito, Rui Nouchi, Ryuta Kawashima and Motoaki Sugiura

they are sweet (Velasco et al., 2019) or light-colored (Sunaga et al., 2016). These findings all indicate that congruent sensory information positively influences consumer preference. The human voice plays an important role in marketing communication, and the role of voice in communication

Annett Schirmer, Tabitha Ng, Nicolas Escoffier and Trevor B. Penney

such as vocal or musical expression. One example is work by Voyer and colleagues who presented the word ‘bower’ spoken with an angry, happy, and neutral voice (Fallow & Voyer, 2013; Voyer & Reuangrith, 2015). They found that compared to the neutral condition, durations in the two emotional conditions

John MacDonald

objects) and simultaneously playing them sounds (voices or non-speech sounds) and measuring how much visual attention they paid to these combinations. The question was whether the pattern of their visual attention was disrupted when either or both the picture and the sound were changed. Harry’s view was a

Annika Notbohm, Marcus J. Naumer, Jasper J. F. van den Bosch, Jochen Kaiser and Jason S. Chan

Abstract from the 14th International Multisensory Research Forum, The Hebrew University of Jerusalem, Israel, 2013. Reference: McGurk, H. and MacDonald, J. (1976). Hearing lips and seeing voices, Nature 264(5588), 746–748.

Hanni Kiiski, Ludovic Hoyet, Katja Zibrek, Carol O’Sullivan and Fiona N. Newell

neurons responsive to the sight of actions, J. Cogn. Neurosci. 17(3), 377–391. Belin, P., Fillion-Bilodeau, S. and Gosselin, F. (2008). The Montreal Affective Voices: a validated set of nonverbal affect bursts for research on auditory affective processing, Behavior Research

Agnès Alsius, Martin Paré and Kevin G. Munhall

1. Introduction Forty years ago, Harry McGurk and John MacDonald published Hearing lips and seeing voices (McGurk and MacDonald, 1976), a manuscript in which they described a remarkable audiovisual speech phenomenon that would come to be known as the McGurk illusion or the McGurk effect

Niti Jaha, Stanley Shen, Jess R. Kerlin and Antoine J. Shahin

the sentences spoken in a voice matching the gender of the speaker in the picture. A key manipulation was that each acoustic sentence had a 200-ms segment replaced by white noise. The purpose of the noise-replaced segment was to demonstrate that visually mediated speech comprehension enhancement is also

Frank Pollick, Scott Love and Marianne Latinus

Seeing and Perceiving 24 (2011) 351–367, brill.nl/sp. Cerebral Correlates and Statistical Criteria of Cross-Modal Face and Voice Integration. Scott A. Love, Frank E. Pollick and Marianne Latinus, School of Psychology, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB

Sanne ten Oever, Alexander Sack, Katherine L. Wheat, Nina Bien and Nienke van Atteveldt

Content and temporal cues have been shown to interact during audiovisual (AV) speech identification. Typically, the most reliable unimodal cue is used to identify specific speech features; however, visual cues are only used if the audiovisual stimuli are presented within a certain temporal integration window (TIW). This suggests that temporal cues denote whether unimodal stimuli belong together and should be integrated. It is unknown whether temporal cues also provide information about speech content. Since spoken syllables have naturally varying audiovisual onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TIW, these natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchrony (SOA) of the audiovisual pair, while participants identified the syllables. We revealed that the most reliable cues of the audiovisual input were used to identify specific speech features (e.g., voicing). Additionally, we showed that the TIW was wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained by the use of natural onset differences between audiovisual speech signals. This indicates that temporal cues not only determine whether or not different inputs belong together, but additionally convey identity information about audiovisual pairs. These results provide a detailed behavioral basis for further neuroimaging and stimulation studies to unravel the neurofunctional mechanisms of the audiovisual–temporal interplay within speech perception.
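To make the windowing idea above concrete, here is a minimal toy sketch in Python (the window half-widths, SOA values and function name are hypothetical illustrations, not values or code from the study) of the rule that an AV pair is treated as integrated only when its SOA falls inside a temporal integration window, with a wider window for congruent pairs:

```python
import numpy as np

# Hypothetical window half-widths in ms (illustrative values only, not
# estimates from the study): congruent pairs tolerate more asynchrony.
TIW_HALF_WIDTH_MS = {"congruent": 200.0, "incongruent": 120.0}

def integrates(soa_ms: float, congruent: bool) -> bool:
    """Toy rule: an AV pair is integrated only if the stimulus onset
    asynchrony (SOA) falls inside the temporal integration window."""
    key = "congruent" if congruent else "incongruent"
    return abs(soa_ms) <= TIW_HALF_WIDTH_MS[key]

# Sweep an SOA range: at +/-200 ms only the congruent pair integrates.
for soa in np.arange(-300, 301, 100):
    print(f"SOA {soa:+4.0f} ms  congruent={integrates(soa, True)}  "
          f"incongruent={integrates(soa, False)}")
```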

Paula Regener

The ability to integrate auditory and visual information is important for everyday life. The Temporal Integration Window (TIW) measures how much asynchrony can be tolerated between auditory and visual streams before the perception of a unitary audiovisual event is lost. Previous investigations of the TIW in individuals with Autism Spectrum Disorders (ASD) show mixed results regarding how their performance compares with that of typically developed (TD) individuals. The current study examined the TIW across a range of audiovisual stimuli to further investigate this issue. The stimuli included the following audiovisual pairings: (1) a beep with a flashing circle (BF), (2) a point-light drummer with a drumbeat (PLD), and (3) a face moving to say a single word and the voice saying the word (FV).

Eleven adult males with ASD, and their age-, sex- and IQ-matched controls, were shown the three audiovisual stimuli with varying degrees of audiovisual asynchrony. In separate blocks, participants were asked to make either Temporal Order Judgements (TOJ) or Synchrony Judgements (SJ) when presented with these stimuli. For both TOJ and SJ, psychophysical fits to the data provided estimates of the Point of Subjective Synchrony (PSS) and the width of the TIW.
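As an illustration of this fitting step, the sketch below (Python; all data values and parameter choices are made up for illustration and are not the study's data or code) fits a Gaussian to synchrony-judgement proportions and a cumulative Gaussian to temporal-order-judgement proportions, reading off the PSS and a width parameter that indexes the TIW:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Illustrative data for one participant (made-up values).
# SOA in ms: negative = audio leads, positive = visual leads.
soa = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], float)

# SJ task: proportion of "synchronous" responses at each SOA.
p_sync = np.array([0.05, 0.15, 0.55, 0.85, 0.95, 0.90, 0.60, 0.20, 0.05])

def sj_gaussian(x, pss, width, peak):
    """Gaussian synchrony curve: the peak lies at the PSS and the
    standard deviation serves as an index of TIW width."""
    return peak * np.exp(-0.5 * ((x - pss) / width) ** 2)

(pss_sj, width_sj, _peak), _ = curve_fit(
    sj_gaussian, soa, p_sync, p0=(0.0, 150.0, 1.0))

# TOJ task: proportion of "visual first" responses at each SOA.
p_vfirst = np.array([0.02, 0.08, 0.20, 0.40, 0.55, 0.75, 0.88, 0.95, 0.98])

def toj_cum_gaussian(x, pss, sd):
    """Cumulative Gaussian: the 50% point is the PSS; the standard
    deviation indexes temporal resolution (wider sd, wider TIW)."""
    return norm.cdf(x, loc=pss, scale=sd)

(pss_toj, width_toj), _ = curve_fit(
    toj_cum_gaussian, soa, p_vfirst, p0=(0.0, 150.0))

print(f"SJ:  PSS = {pss_sj:6.1f} ms, width = {width_sj:6.1f} ms")
print(f"TOJ: PSS = {pss_toj:6.1f} ms, width = {width_toj:6.1f} ms")
```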

Separate ANOVAs on TIW and PSS were run for each stimulus type, with a within-subjects factor of judgement (SJ, TOJ) and a between-subjects factor of group (ASD, TD). These revealed no group differences in TIW or PSS. However, the FV TIW was significantly wider for TOJ than for SJ. Additionally, the PSS for all stimuli was significantly influenced by the type of judgement.
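A mixed design of this kind (within-subjects judgement, between-subjects group) could be analysed as in the following sketch, which assumes a long-format table with hypothetical column names and uses pingouin's mixed_anova as one convenient option; the data are randomly generated and this is not the software or code used in the study:

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)

# Hypothetical long-format table: one TIW estimate per participant,
# judgement type and group (random values, for illustration only).
rows = []
for pid in range(22):                       # 11 ASD + 11 matched TD
    group = "ASD" if pid < 11 else "TD"
    for judgement in ("SJ", "TOJ"):
        rows.append({
            "participant": pid,
            "group": group,
            "judgement": judgement,
            "tiw": rng.normal(250 if judgement == "TOJ" else 200, 40),
        })
df = pd.DataFrame(rows)

# Mixed ANOVA: within-subjects factor judgement, between-subjects group.
aov = pg.mixed_anova(data=df, dv="tiw", within="judgement",
                     subject="participant", between="group")
print(aov.round(3))
```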