Search Results

You are looking at 1 - 10 of 17 items for

  • Author or Editor: Argiro Vatakis
  • Search level: All

Research has revealed different temporal integration windows between and within different speech tokens. The limited set of speech tokens tested to date, however, has not allowed for a proper evaluation of whether such differences are task- or stimulus-driven. We conducted a series of experiments to investigate how the physical differences associated with speech articulation affect the temporal aspects of audiovisual speech perception. Videos of consonants and vowels uttered by three speakers were presented. Participants made temporal order judgments (TOJs) regarding which speech stream had been presented first. The sensitivity of participants’ TOJs and the point of subjective simultaneity (PSS) were analyzed as a function of the place and manner of articulation and voicing for consonants, and the height/backness of the tongue and lip-roundedness for vowels. The results demonstrated that, for place of articulation/roundedness, participants were more sensitive to the temporal order of highly salient speech signals, with smaller visual leads at the PSS. This was not the case when manner of articulation/height was evaluated. These findings suggest that the visual speech signal provides substantial cues to the auditory signal that modulate the relative processing times required for the perception of the speech stream. A subsequent experiment explored how the presentation of different sources of visual information modulated these findings. Videos of three consonants were presented under natural and point-light (PL) viewing conditions revealing parts, or the whole, of the face. Preliminary analysis revealed no differences in TOJ accuracy across viewing conditions. However, the PSS data revealed significant differences between viewing conditions depending on the speech token uttered (e.g., larger visual leads for PL lip/teeth/tongue-only views).
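For readers unfamiliar with the analysis described above: the PSS and TOJ sensitivity (often summarized as a just-noticeable difference, JND) are typically estimated by fitting a psychometric function, such as a cumulative Gaussian, to the proportion of "audio-first" responses at each stimulus onset asynchrony (SOA). The following is a minimal illustrative sketch, not the authors' actual analysis code; the SOA values and response proportions are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical stimulus onset asynchronies (ms); negative = vision leads.
soa = np.array([-240.0, -120.0, -60.0, 0.0, 60.0, 120.0, 240.0])
# Hypothetical proportion of "audio first" responses at each SOA.
p_audio_first = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])

def psychometric(x, pss, sigma):
    """Cumulative Gaussian giving P('audio first') as a function of SOA."""
    return norm.cdf(x, loc=pss, scale=sigma)

# Fit the two free parameters: PSS (location) and sigma (slope/spread).
(pss, sigma), _ = curve_fit(psychometric, soa, p_audio_first, p0=[0.0, 100.0])

# PSS: the SOA at which both temporal orders are reported equally often.
# JND: half the interval between the 25% and 75% points of the curve,
# which for a cumulative Gaussian equals sigma * z(0.75).
jnd = sigma * norm.ppf(0.75)
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```

In this framing, a negative fitted PSS indicates that the visual stream must lead the auditory stream for the two to appear simultaneous, which is how "visual leads at the PSS" are quantified.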

In: Seeing and Perceiving
In: Timing and Time Perception: Procedures, Measures, & Applications
Temporal Processing in Clinical Populations
Volume Editors:
Time Distortions in Mind brings together current research on aspects of temporal processing in clinical populations, in the ultimate hope of elucidating the interdependence between perturbations in timing and disturbances in the mind and brain. Such research may inform not only our understanding of typical psychological functioning, but also the psychological consequences of any pathophysiological differences in temporal processing.
This collection of current knowledge on temporal processing in clinical populations is an excellent reference for students and scientists interested in the topic, and it also serves as a stepping-stone for sharing ideas and advancing our understanding of how distorted timing can lead to a disturbed brain and mind, or vice versa.

Contributors to this volume: Ryan D. Ward, Billur Avlar, Peter D Balsam, Deana B. Davalos, Jamie Opper, Yvonne Delevoye-Turrell, Hélène Wilquin, Mariama Dione, Anne Giersch, Laurence Lalanne, Mitsouko van Assche, Patrick E. Poncelet, Mark A. Elliott, Deborah L. Harrington, Stephen M. Rao, Catherine R.G. Jones, Marjan Jahanshahi, Bon-Mi Gu, Anita J. Jurkowski, Jessica I. Lake, Chara Malapani, Warren H. Meck, Rebecca M. C. Spencer, Dawn Wimpory, Brad Nicholas, Elzbieta Szelag, Aneta Szymaszek, Anna Oron, Melissa J. Allman, Christine M. Falter, Argiro Vatakis, Alexandra Elissavet Bakou
In: Timing & Time Perception
In: Time Distortions in Mind
In: Time Distortions in Mind

We often use tactile input to recognize familiar objects and to acquire information about unfamiliar ones. We also use our hands to manipulate objects and utilize them as tools. However, research on object affordances has mainly focused on visual input, thus limiting the level of detail one can obtain about object features and uses. In addition to this limited multisensory input, data on object affordances have also been hindered by limited forms of participant input (e.g., naming tasks). To address the above-mentioned limitations, we aimed at identifying a new methodology for obtaining undirected, rich information regarding people’s perception of a given object and the uses it can afford, without necessarily viewing the particular object. Specifically, 40 participants were video-recorded in a three-block experiment. During the experiment, participants were exposed to pictures of objects, pictures of someone holding the objects, and the actual objects, and they were allowed to provide unconstrained verbal responses describing the stimuli presented and their possible uses. The stimuli were lithic tools, chosen for their novelty, man-made design, design for a specific use/action, and absence of associated functional knowledge and movement associations. The experiment resulted in a large linguistic database, which was linguistically analyzed following a response-based specification. Analysis of the data revealed a significant contribution of visual and tactile input to the naming and definition of object attributes (color/condition/shape/size/texture/weight), while no significant tactile information was obtained for the object features of material, visual pattern, and volume. Overall, this new approach highlights the importance of multisensory input in the study of object affordances.

In: Seeing and Perceiving
In: Timing & Time Perception