Expert musicians are able to accurately and consistently time their actions during a musical performance. We investigated how musical expertise influences the ability to reproduce auditory intervals, and how this generalises to vision, in a ‘ready-set-go’ paradigm. Subjects reproduced time intervals drawn from distributions varying in total length (176, 352 or 704 ms) or in the number of discrete intervals within the total length (3, 5, 11 or 21 discrete intervals). Overall, musicians performed more veridically than non-musicians, and all subjects reproduced auditory-defined intervals more accurately than visually-defined intervals. However, non-musicians, particularly with visual intervals, consistently exhibited a substantial and systematic regression towards the mean of the interval distribution. When subjects judged intervals from distributions of longer total length they tended to exhibit more regression towards the mean, while the ability to discriminate between discrete intervals within the distribution had little influence on subject error. These results are consistent with a Bayesian model that minimises reproduction errors by incorporating a central-tendency prior, weighted by the subject’s own temporal precision relative to the current interval distribution (Cicchini et al.; Jazayeri and Shadlen). Finally, a strong correlation was observed between the duration of formal musical training and total reproduction errors in both modalities (accounting for 30% of the variance). Taken together, these results demonstrate that formal musical training improves temporal reproduction, and that this improvement transfers from audition to vision. They further demonstrate the flexibility of sensorimotor mechanisms in adapting to different task conditions to minimise temporal estimation errors.
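The central-tendency prior described above can be sketched as a reliability-weighted average: the noisier a subject's sensory measurement, the more the estimate is pulled towards the mean of the interval distribution. The function below is a minimal illustration of that weighting scheme, not the authors' fitted model; all names and parameter values are illustrative.

```python
def reproduce_interval(measured_ms, prior_mean_ms, sigma_measure, sigma_prior):
    """Combine a noisy interval measurement with the distribution mean.

    The weight on the measurement grows as the measurement becomes more
    reliable (smaller sigma_measure) relative to the spread of the
    interval distribution (sigma_prior). Illustrative sketch only.
    """
    w = sigma_prior ** 2 / (sigma_prior ** 2 + sigma_measure ** 2)
    return w * measured_ms + (1 - w) * prior_mean_ms
```

With equal measurement and prior uncertainty, the estimate falls halfway between the measured interval and the distribution mean; increasing the measurement noise shifts it further towards the mean, mirroring the stronger regression seen in non-musicians and with visual intervals.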
When visual and auditory stimuli are displayed with a spatial offset, the sound is heard at or near the visual stimulus (the ventriloquist effect). After an adaptation period of repeated exposure to spatially offset audio–visual stimuli, sounds presented alone are perceived as spatially displaced in the direction of the adapting offset (the ventriloquist aftereffect: Recanzone), pointing to recalibration of audio–visual alignment. Here we show that this recalibration is spatially selective. Adapting one visual hemifield to (say) a leftward offset, and the other to a rightward (or zero) offset, produces two separate, spatially localized aftereffects in opposite directions. If a large (30°) eye movement is interposed between adaptation and test, the spatial specificity remains in head-centered coordinates. The results provide further evidence for the existence of spatiotopic (or at least craniotopic) spatial maps, which are subject to continual recalibration.
Animals, including fish, birds, rodents, non-human primates, and pre-verbal infants, are able to discriminate the duration and number of events without the use of language. In this paper, we present the results of six experiments exploring the capability of adult rats to count 2–6 sequentially presented white-noise stimuli. The investigation focuses on the animals’ ability to exhibit spontaneous subtraction following the presentation of novel stimulus inversions in the auditory signals being counted. Results suggest that a subtraction operation between two opposite sensory representations may be a general processing strategy used for the comparison of stimulus magnitudes. These findings are discussed within the context of a mode-control model of timing and counting that relies on an analog temporal-integration process for the addition and subtraction of sequential events.
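The analog temporal-integration process invoked by the mode-control model can be caricatured as an accumulator that adds a fixed quantum per counted event, with magnitude comparison performed as a signed subtraction between two such accumulations. The snippet below is a deliberately minimal sketch of that idea, not the model's actual implementation; the function names and the unit quantum are assumptions for illustration.

```python
def accumulate(events, quantum=1.0):
    """Analog accumulator: each counted event adds a fixed quantum, so the
    stored magnitude is proportional to the number of events ('event'
    mode of the mode-control model). Illustrative sketch only.
    """
    total = 0.0
    for _ in events:
        total += quantum
    return total

def compare_by_subtraction(magnitude_a, magnitude_b):
    """Compare two analog magnitudes by subtraction: the sign indicates
    which representation is larger, the absolute value how different
    the two counts are."""
    return magnitude_a - magnitude_b
```

Under this sketch, comparing a five-event and a three-event accumulation yields a positive difference of two quanta, which is the sense in which subtraction between two representations supports magnitude comparison.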