Search Results

Showing 1 - 10 of 16 items for Author or Editor: Argiro Vatakis
In: Timing & Time Perception
Temporal Processing in Clinical Populations
Volume Editors: Argiro Vatakis and Melissa Allman
Time Distortions in Mind brings together current research on aspects of temporal processing in clinical populations, in the ultimate hope of elucidating the interdependence between perturbations in timing and disturbances in the mind and brain. Such research may inform not only typical psychological functioning, but may also elucidate the psychological consequences of any pathophysiological differences in temporal processing.
This collection of current knowledge on temporal processing in clinical populations is an excellent reference for students and scientists interested in the topic, and it also serves as a stepping-stone for sharing ideas and advancing our understanding of how distorted timing can lead to a disturbed brain and mind, or vice versa.

Contributors to this volume: Ryan D. Ward, Billur Avlar, Peter D Balsam, Deana B. Davalos, Jamie Opper, Yvonne Delevoye-Turrell, Hélène Wilquin, Mariama Dione, Anne Giersch, Laurence Lalanne, Mitsouko van Assche, Patrick E. Poncelet, Mark A. Elliott, Deborah L. Harrington, Stephen M. Rao, Catherine R.G. Jones, Marjan Jahanshahi, Bon-Mi Gu, Anita J. Jurkowski, Jessica I. Lake, Chara Malapani, Warren H. Meck, Rebecca M. C. Spencer, Dawn Wimpory, Brad Nicholas, Elzbieta Szelag, Aneta Szymaszek, Anna Oron, Melissa J. Allman, Christine M. Falter, Argiro Vatakis, Alexandra Elissavet Bakou

Research has revealed different temporal integration windows between and within different speech-tokens. The limited set of speech-tokens tested to date, however, has not allowed for a proper evaluation of whether such differences are task- or stimulus-driven. We conducted a series of experiments to investigate how the physical differences associated with speech articulation affect the temporal aspects of audiovisual speech perception. Videos of consonants and vowels uttered by three speakers were presented. Participants made temporal order judgments (TOJs) regarding which speech-stream had been presented first. The sensitivity of participants’ TOJs and the point of subjective simultaneity (PSS) were analyzed as a function of the place, manner of articulation, and voicing for consonants, and the height/backness of the tongue and lip-roundedness for vowels. The results demonstrated that, for place of articulation/roundedness, participants were more sensitive to the temporal order of highly-salient speech-signals, with smaller visual-leads at the PSS. This was not the case when manner of articulation/height was evaluated. These findings suggest that the visual-speech signal provides substantial cues to the auditory-signal that modulate the relative processing times required for the perception of the speech-stream. A subsequent experiment explored how the presentation of different sources of visual-information modulated these findings. Videos of three consonants were presented under natural and point-light (PL) viewing conditions revealing parts of the face or the whole face. Preliminary analysis revealed no differences in TOJ accuracy across viewing conditions. However, the PSS data revealed significant differences between viewing conditions depending on the speech-token uttered (e.g., larger visual-leads for PL-lip/teeth/tongue-only views).

In: Seeing and Perceiving
In: Timing and Time Perception: Procedures, Measures, & Applications
In: Time Distortions in Mind
In: Time Distortions in Mind

Our timing estimates are often prone to distortions from non-temporal attributes such as the direction of motion. Motion direction has been reported to lead to interval dilation when the movement is toward the viewer (i.e., looming) as compared to away from the viewer (i.e., receding). This perceptual asymmetry has been interpreted in terms of the contextual salience and prioritization of looming stimuli, which allow for timely reactions to approaching objects. The asymmetry has mainly been studied with abstract stimulation of minimal social relevance. To address this gap, we utilized naturalistic displays of biological motion and examined the aforementioned perceptual asymmetry in the temporal domain. In Experiment 1, we tested visual looming and receding human movement at various intervals in a reproduction task and found no differences in the participants’ timing estimates as a function of motion direction. Given the superiority of audition in timing, in Experiment 2 we combined the looming and receding visual stimulation with sound stimulation carrying congruent, incongruent, or no direction information. The analysis showed an overestimation of the looming as compared to the receding visual stimulation when the sound presented was of congruent or no direction, while no such difference was noted for the incongruent condition. Both looming and receding conditions (congruent and control) led to underestimations as compared to the physical durations tested. Thus, the asymmetry obtained could be attributed to the potential perceptual negligibility of the receding stimuli rather than to the often-reported salience of looming motion. The results are also discussed in terms of the optimality of sound in the temporal domain.

In: Timing & Time Perception

We often use tactile-input to recognize familiar objects and to acquire information about unfamiliar ones. We also use our hands to manipulate objects and utilize them as tools. However, research on object affordances has mainly focused on visual-input, thus limiting the level of detail one can obtain about object features and uses. In addition to the limited multisensory-input, data on object affordances have also been hindered by limited participant input (e.g., naming tasks). To address the above-mentioned limitations, we aimed to identify a new methodology for obtaining undirected, rich information regarding people’s perception of a given object and the uses it can afford, without necessarily viewing the particular object. Specifically, 40 participants were video-recorded in a three-block experiment. During the experiment, participants were exposed to pictures of objects, pictures of someone holding the objects, and the actual objects, and they were allowed to provide unconstrained verbal responses on the description and possible uses of the stimuli presented. The stimuli were lithic tools, chosen for their novelty, man-made design, design for a specific use/action, and absence of functional knowledge and movement associations. The experiment resulted in a large linguistic database, which was linguistically analyzed following a response-based specification. Analysis of the data revealed a significant contribution of visual- and tactile-input to the naming and definition of object-attributes (color/condition/shape/size/texture/weight), while no significant tactile-information was obtained for the object-features of material, visual-pattern, and volume. Overall, this new approach highlights the importance of multisensory-input in the study of object affordances.

In: Seeing and Perceiving