Search Results

Authors: Alshuth, Tiippana, and Paramei

Spatial Vision, Vol. 15, No. 1, pp. 25–43 (2001) © VSP 2001. Contrast discrimination and choice reaction times at near-threshold pedestals. K. Tiippana¹, G. V. Paramei² and E. Alshuth². ¹Helsinki University of Technology, Laboratory of Computational Engineering, P.O. Box 9400, 02015

In: Spatial Vision
Author: Farid I. Kandil

The ‘time window of integration’ (TWIN) model (e.g., Colonius and Diederich; Diederich and Colonius) allows prediction of response speed effects in multisensory settings. For example, in the focused attention paradigm (FAP), subjects are instructed to respond to stimuli of the target modality only, yet reaction times are shorter if the unattended stimulus is presented in a certain temporal relation to the target stimulus. The TWIN model accounts for these cross-modal effects by proposing that all the initially unimodal pieces of information must arrive at the point of integration within a certain time window in order to be integrated and thus to allow response enhancements such as the observed reaction time reductions. It has been used successfully to account for empirical data. Here, we conduct a parameter recovery study of the basic TWIN model with five parameters: the durations of the visual and acoustic unimodal stages and of the integrated second stage, the length of the time window, and the size of the effect. We conducted 576 ‘experiments’ with different parameter value sets, each comprising 500 ‘subjects’, for each of which data for 80 trials in a FAP setting were generated, averaged and fed into the recovery process. Parameter estimates were evaluated in terms of absolute and relative accuracy and precision. Results show that deviations from the true values are negligible for all parameters. In particular, the duration parameters for the unimodal stage of the focused stimulus and for the integrated second stage are both highly accurate and precise, to such an extent that they match statistics of single-cell recordings.

In: Multisensory Research
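The parameter recovery study above can be illustrated with a minimal Monte Carlo sketch of the basic TWIN model in the focused attention paradigm. This is a hedged illustration, not the authors' implementation: the function and parameter names are assumptions, and exponential first-stage durations are one common simplifying choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_twin(lam_v, lam_a, mu2, omega, delta, soa, n_trials=80):
    """Sketch of the basic TWIN model for the focused attention paradigm:
    a visual target with an auditory accessory presented at the given SOA.
    First-stage durations are drawn as exponentials; integration occurs
    when the accessory's first stage terminates before the target's and
    within the time window omega, shortening the second stage by delta."""
    V = rng.exponential(1.0 / lam_v, n_trials)        # target first stage (ms)
    A = rng.exponential(1.0 / lam_a, n_trials) + soa  # accessory first stage (ms)
    integrated = (A < V) & (V - A < omega)            # within the time window?
    return V + mu2 - delta * integrated               # observed reaction times
```

A recovery study in this spirit would generate such trial sets for many 'subjects', average them, and fit the five parameters back from the averaged data.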

Reaction times (RTs) to purely inertial self-motion stimuli have only infrequently been studied, and comparisons of RTs for translations and rotations, to our knowledge, are nonexistent. We recently proposed a model (Soyka et al.) which describes direction discrimination thresholds for rotational and translational motions based on the dynamics of the vestibular sensory organs (otoliths and semi-circular canals). This model also predicts differences in RTs for different motion profiles (e.g., trapezoidal versus triangular acceleration profiles, or varying profile durations). In order to assess these predictions we measured RTs in 20 participants for 8 supra-threshold motion profiles (4 translations, 4 rotations). A two-alternative forced-choice task, discriminating leftward from rightward motions, was used and 30 correct responses per condition were evaluated. The results agree with predictions for RT differences between motion profiles as derived from previously identified model parameters from threshold measurements. To describe absolute RT, a constant is added to the predictions representing both the discrimination process and the time needed to press the response button. This constant is approximately 160 ms shorter for rotations, indicating that additional processing time is required for translational motion. As this additional latency cannot be explained by our model based on the dynamics of the sensory organs, we speculate that it originates at a later stage, e.g., during tilt-translation disambiguation. Varying processing latencies for different self-motion stimuli (either translations or rotations), which our model can account for, must be considered when assessing the perceived timing of vestibular stimulation in comparison with other senses (Barnett-Cowan and Harris; Sanders et al.).

In: Seeing and Perceiving

In Experiment 2, we dissociated the auditory and visual information and tested whether inconsistent motion directions across the auditory and visual modality yield longer reaction times in comparison to consistent motion directions. Here we find an effect specific to biological motion: motion incongruency leads

In: Seeing and Perceiving
In: Neuro-Visionen 4
Authors: Steven Chin and David Pisoni
"Alcohol and Speech" serves as a single, unifying reference source for those interested in speech motor effects evident in the acoustic record, reaction times, speech communication strategies, and perceptual judgments. Written by a linguist and a psychologist, the book provides an analytic orientation toward speech and alcohol with an emphasis on laboratory-based research in acoustic-phonetics and speech science. It is a comprehensive review of the effects of alcohol on speech and compares the various theoretical concerns that inform this research. Studies of both alcohol and speech have been rare because each field has its own experimental protocols, methodologies, and research agendas. This book fills a long-standing gap and is unique in providing both breadth of coverage and depth of analysis. A case study involving the 1989 Exxon Valdez oil spill in Prince William Sound develops some of the legal implications of this research and illustrates a unified perspective for the study of alcohol and speech. The book distills years of research on alcohol and speech, and provides a wealth of material to investigators in a wide variety of disciplines: medicine, psychology, speech, forensics, law, and human factors. It demonstrates how alcohol and speech research applies in a practical situation, the Exxon Valdez grounding, and includes a glossary as well as numerous tables and graphs for a quick overview of data and results.

multisensory interactions in musicians were also reported using reaction times (RTs) (Landry and Champoux, 2017). Musicians were found to have faster simple reaction times for simultaneously presented auditory and tactile stimuli than non-musicians. Improved interactions between senses for musicians were

In: Multisensory Research

with equal probability from all locations, the McGurk effect tended to be stronger for sounds emanating from the centre, but this tendency was not reliable. Additionally, reaction times were the shortest for a congruent audiovisual stimulus, and this was the case independent of location. Our main

In: Seeing and Perceiving

computerized method used to indirectly estimate the strength of the association between a target concept and a valence attribute via reaction times to a double-categorization task (responses are faster when the association is stronger). Since the association is between a target and a valence attribute, the IAT paradigm is

In: Art & Perception

e.g., a multisensory stimulus), the signal from the information source that is processed the fastest is the signal that produces the response (i.e., the ‘winner’ of the race). However, co-activation models are supported when reaction times (RTs) to multisensory stimuli are faster than would be predicted

In: Multisensory Research
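The race-versus-co-activation test sketched above is usually formalized with Miller's race model inequality, F_AV(t) ≤ F_A(t) + F_V(t): co-activation is suggested wherever the multisensory RT distribution exceeds the sum of the unimodal ones. A minimal sketch of that check on empirical RT samples (function and variable names are assumptions):

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Evaluate Miller's race model inequality at each time point in t_grid.
    Returns F_AV(t) - min(F_A(t) + F_V(t), 1); positive values indicate a
    violation of the race model bound, consistent with co-activation."""
    def ecdf(rts, t):
        # empirical CDF: fraction of RTs at or below each time point
        return np.mean(np.asarray(rts)[:, None] <= t, axis=0)
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return ecdf(rt_av, t_grid) - bound
```

A race model cannot be rejected when the returned values stay at or below zero across the grid; reliable positive values are the usual evidence for co-activation.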