Research has revealed different temporal integration windows between and within different speech-tokens. However, the limited set of speech-tokens tested to date has not allowed for a proper evaluation of whether such differences are task- or stimulus-driven. We conducted a series of experiments to investigate how the physical differences associated with speech articulation affect the temporal aspects of audiovisual speech perception. Videos of consonants and vowels uttered by three speakers were presented, and participants made temporal order judgments (TOJs) regarding which speech-stream (auditory or visual) had been presented first. The sensitivity of participants’ TOJs and the point of subjective simultaneity (PSS) were analyzed as a function of place of articulation, manner of articulation, and voicing for consonants, and tongue height/backness and lip-roundedness for vowels. The results demonstrated that, for place of articulation/roundedness, participants were more sensitive to the temporal order of highly salient speech-signals, with smaller visual-leads at the PSS. This was not the case for manner of articulation/height. These findings suggest that the visual-speech signal provides substantial cues to the auditory-signal, modulating the relative processing times required for perception of the speech-stream. A subsequent experiment explored how presenting different sources of visual-information modulated these findings. Videos of three consonants were presented under natural and point-light (PL) viewing conditions revealing parts of, or the whole, face. Preliminary analysis revealed no differences in TOJ accuracy across viewing conditions. However, the PSS data revealed significant differences across viewing conditions depending on the speech token uttered (e.g., larger visual-leads for PL lip/teeth/tongue-only views).
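For readers unfamiliar with the analysis, the TOJ sensitivity and PSS reported in studies like this are typically estimated by fitting a psychometric function to the proportion of "vision-first" responses across stimulus onset asynchronies (SOAs). A minimal sketch, assuming a cumulative Gaussian fit with a simple grid search; the SOAs and response proportions below are illustrative, not data from the experiments described:

```python
# Hypothetical sketch: estimating PSS and slope (sigma) from TOJ data by
# fitting a cumulative Gaussian. All numbers are made up for illustration.
import math

def cum_gauss(soa, pss, sigma):
    """Probability of a 'vision-first' response at a given SOA (ms)."""
    return 0.5 * (1.0 + math.erf((soa - pss) / (sigma * math.sqrt(2.0))))

def fit_toj(soas, p_vision_first):
    """Least-squares grid search returning the best (pss, sigma) in ms."""
    best_pss, best_sigma, best_err = 0, 10, float("inf")
    for pss in range(-150, 151):          # candidate PSS values (ms)
        for sigma in range(10, 301):      # candidate slopes (ms)
            err = sum((cum_gauss(s, pss, sigma) - p) ** 2
                      for s, p in zip(soas, p_vision_first))
            if err < best_err:
                best_pss, best_sigma, best_err = pss, sigma, err
    return best_pss, best_sigma

# Sign convention: negative SOA = audio leads, positive = vision leads.
soas = [-200, -100, -50, 0, 50, 100, 200]
props = [0.05, 0.15, 0.30, 0.55, 0.80, 0.90, 0.98]

pss, sigma = fit_toj(soas, props)
jnd = sigma * 0.6745  # half the SOA range spanning 25-75% responses
```

With this sign convention, a positive PSS would mean vision must lead audition for perceived simultaneity, and a smaller JND indicates higher temporal sensitivity; actual studies typically use maximum-likelihood fitting rather than this grid search.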
Temporal Processing in Clinical Populations
Edited by Argiro Vatakis and Melissa Allman
This collection of current knowledge on temporal processing in clinical populations is an excellent reference for students and scientists interested in the topic. It also serves as a stepping-stone for sharing ideas and advancing our understanding of how distorted timing can lead to a disturbed brain and mind, or vice versa.
Contributors to this volume: Ryan D. Ward, Billur Avlar, Peter D Balsam, Deana B. Davalos, Jamie Opper, Yvonne Delevoye-Turrell, Hélène Wilquin, Mariama Dione, Anne Giersch, Laurence Lalanne, Mitsouko van Assche, Patrick E. Poncelet, Mark A. Elliott, Deborah L. Harrington, Stephen M. Rao, Catherine R.G. Jones, Marjan Jahanshahi, Bon-Mi Gu, Anita J. Jurkowski, Jessica I. Lake, Chara Malapani, Warren H. Meck, Rebecca M. C. Spencer, Dawn Wimpory, Brad Nicholas, Elzbieta Szelag, Aneta Szymaszek, Anna Oron, Melissa J. Allman, Christine M. Falter, Argiro Vatakis, Alexandra Elissavet Bakou
Maria Kostaki and Argiro Vatakis
Argiro Vatakis and Helena Sgouramani
Argiro Vatakis and Alexandra Elissavet Bakou
Argiro Vatakis, Katerina Pastra and Panagiotis Dimitrakis
We often use tactile-input to recognize familiar objects and to acquire information about unfamiliar ones. We also use our hands to manipulate objects and utilize them as tools. However, research on object affordances has mainly focused on visual-input, thus limiting the level of detail one can obtain about object features and uses. In addition to the limited multisensory-input, data on object affordances have also been hindered by limited participant input (e.g., naming tasks). To address the above-mentioned limitations, we aimed at identifying a new methodology for obtaining undirected, rich information regarding people’s perception of a given object and the uses it can afford, without necessarily viewing the particular object. Specifically, 40 participants were video-recorded in a three-block experiment. During the experiment, participants were exposed to pictures of objects, pictures of someone holding the objects, and the actual objects, and they were allowed to provide unconstrained verbal responses describing the stimuli presented and their possible uses. The stimuli were lithic tools, chosen for their novelty, man-made design, design for a specific use/action, and absence of functional knowledge and movement associations. The experiment resulted in a large linguistic database, which was linguistically analyzed following a response-based specification. Analysis of the data revealed a significant contribution of visual- and tactile-input to the naming and definition of object-attributes (color/condition/shape/size/texture/weight), while no significant tactile-information was obtained for the object-features of material, visual-pattern, and volume. Overall, this new approach highlights the importance of multisensory-input in the study of object affordances.
Edited by Argiro Vatakis, Warren Meck and Hedderik van Rijn
Timing & Time Perception aims to be the forum for all psychophysical, neuroimaging, pharmacological, computational, and theoretical advances on the topic of timing and time perception in humans and other animals. We envision a multidisciplinary approach to the topics covered, including the synergy of Neuroscience and Philosophy for understanding the concept of time; Cognitive Science and Artificial Intelligence for adapting basic research to artificial agents; and Psychiatry, Neurology, and the Behavioral and Computational Sciences for neuro-rehabilitation and modeling of the disordered brain, to name just a few.
Given the ubiquity of interval timing, this journal will host all basic studies, including interdisciplinary and multidisciplinary works on timing and time perception and serve as a forum for discussion and extension of current knowledge on the topic.
Online submission: Articles for publication in Timing & Time Perception can be submitted online through Editorial Manager.
Miketa Arvaniti, Noam Sagiv, Lucille Lecoutre and Argiro Vatakis
Our research project aimed at investigating multisensory temporal integration in synesthesia and at exploring whether there are commonalities in the sensory experiences of synesthetes and non-synesthetes. Specifically, we investigated whether synesthetes are better integrators than non-synesthetes by examining the strength of multisensory binding (i.e., the unity effect) using an unspeeded temporal order judgment task. We used audiovisual stimuli based on grapheme-colour synesthetic associations (Experiment 1) and on crossmodal correspondences (e.g., high pitch paired with light colours; Experiment 2), presented at various stimulus onset asynchronies (SOAs) with the method of constant stimuli. Presenting these stimuli in congruent and incongruent formats allowed us to examine whether congruent stimuli lead to a stronger unity effect than incongruent ones in synesthetes and non-synesthetes and, thus, whether synesthetes experience enhanced multisensory integration relative to non-synesthetes. Preliminary data support the hypothesis that congruent crossmodal correspondences lead to a stronger unity effect than incongruent ones in both groups, with this effect being stronger in synesthetes than in non-synesthetes. We also found that synesthetes experience a stronger unity effect for idiosyncratically congruent grapheme-colour associations than for incongruent ones, as compared to non-synesthetes trained on certain grapheme-colour associations. Currently, we are investigating (Experiment 3) whether trained non-synesthetes exhibit enhanced integration when presented with synesthetic associations that occur frequently among synesthetes. Using this design, we aim to provide psychophysical evidence of multisensory integration in synesthesia and of possible common processing mechanisms in synesthetes and non-synesthetes.
Edited by Argiro Vatakis, Fuat Balcı, Massimiliano Di Luca and Ángel Correa
Contributors are: Patricia V. Agostino, Rocío Alcalá-Quintana, Fuat Balcı, Karin Bausenhart, Richard Block, Ivana L. Bussi, Carlos S. Caldart, Mariagrazia Capizzi, Xiaoqin Chen, Ángel Correa, Massimiliano Di Luca, Céline Z. Duval, Mark T. Elliott, Dagmar Fraser, David Freestone, Miguel A. García-Pérez, Anne Giersch, Simon Grondin, Nori Jacoby, Florian Klapproth, Franziska Kopp, Maria Kostaki, Laurence Lalanne, Giovanna Mioni, Trevor B. Penney, Patrick E. Poncelet, Patrick Simen, Ryan Stables, Rolf Ulrich, Argiro Vatakis, Dominic Ward, Alan M. Wing, Kieran Yarrow, and Dan Zakay.