The identification of monosynaptic connections between primary cortices in non-human primates has recently been complemented by observations of early-latency, low-level non-linear interactions in human brain responses, as well as by facilitative effects of multisensory stimuli on behavior/performance in both humans and monkeys. While there is some evidence in favor of causal links between early-latency interactions within low-level cortices and behavioral facilitation, it remains unknown whether such effects are subserved by direct anatomical connections between primary cortices. In non-human primates, the above monosynaptic projections from primary auditory cortex terminate within peripheral visual field representations of primary visual cortex, suggesting a potential bias for the integration of eccentric visual stimuli and pure tone (vs. broad-band) sounds. To date, behavioral effects in humans (and monkeys) have been observed after presenting (para)foveal stimuli with any of a range of auditory stimuli, from pure tones to noise bursts. The present study aimed to identify any heterogeneity in the integration of auditory–visual stimuli. To this end, we employed a 3 × 3 within-subject design that varied the visual eccentricity of an annulus (2.5°, 5.7°, 8.9°) and the auditory pitch (250, 1000, 4000 Hz) of multisensory stimuli while subjects completed a simple detection task. We also varied the auditory bandwidth (pure tone vs. pink noise) across blocks of trials. To ensure attention to both modalities, multisensory stimuli were equiprobable with unisensory visual and unisensory auditory trials that themselves varied along the abovementioned dimensions. Median reaction times for each stimulus condition, as well as the percentage gain/loss of each multisensory condition vs. the best constituent unisensory condition, were measured.
The preliminary results reveal that multisensory interactions (as measured from simple reaction times) are indeed heterogeneous across the tested dimensions and may provide a means for delimiting the anatomo-functional substrates of behaviorally relevant early-latency neural response interactions. Interestingly, the preliminary results suggest selective interactions for visual stimuli presented with broadband sounds but not with pure tones. More precisely, centrally presented visual stimuli show the greatest index of multisensory facilitation when coupled to a high-pitch tone embedded in pink noise, whereas visual stimuli presented at approximately 5.7° of visual angle show the greatest slowing of reaction times.
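The behavioral index described above, the percentage gain/loss of each multisensory condition relative to the best constituent unisensory condition, can be computed from per-condition median reaction times. A minimal sketch in Python; the function name and the example reaction times are hypothetical illustrations, not data from the study:

```python
from statistics import median

def multisensory_gain(rt_multi, rt_vis, rt_aud):
    """Percentage gain of a multisensory condition relative to the best
    (fastest) constituent unisensory condition, based on median RTs.
    Positive values indicate facilitation (faster multisensory responses),
    negative values indicate a slowing of reaction times."""
    best_unisensory = min(median(rt_vis), median(rt_aud))
    return 100.0 * (best_unisensory - median(rt_multi)) / best_unisensory

# Hypothetical reaction times (ms) for one stimulus condition
rt_av = [310, 295, 320, 305, 300]   # auditory-visual trials
rt_v  = [350, 340, 360, 345, 355]   # visual-only trials
rt_a  = [370, 365, 380, 375, 372]   # auditory-only trials

print(round(multisensory_gain(rt_av, rt_v, rt_a), 1))  # prints 12.9
```

In this toy example the multisensory median (305 ms) beats the fastest unisensory median (350 ms, visual), yielding a facilitation of about 12.9%.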
Contrast sensitivity to patch stimuli: Effects of spatial bandwidth and temporal presentation ELI PELI*, LAWRENCE E. AREND, GEORGE M. YOUNG, and ROBERT B. GOLDSTEIN The Schepens Eye Research Institute, Harvard Medical School, 20 Staniford Street, Boston, MA 02114, USA Received 10 February 1992
(little energy above 2000 Hz) or wideband signals (250–10 000 Hz, mean train duration of 0.5 s) (Barklow, 1997; Maust-Mohl, Soltis & Reiss, 2015). The production of inharmonic clicks, single clicks with little harmonic content and a bandwidth similar to that of wideband clicks, was also reported by Barklow (1997)
formats (see Figure 1). In some regions in India, North Korea, and Bangladesh, the bandwidth to stream music or videos is not available, and people thus go to ‘download vendors’, who download songs onto their memory cards (colloquially called ‘chips’). These cards are then inserted into mobile phones
parameters, including the duration, dominant frequency and frequency bandwidth of each component in successive calls. Males almost invariably began the first call in a bout with a high amplitude broad-band pulse, which was followed by a much longer and highly variable sustained element. They then
This note discusses some issues related to bandwidth selection based on moment expansions of the mean squared error (MSE) of the regression quantile estimator. We use higher-order expansions to provide a way to distinguish among asymptotically equivalent nonparametric estimators. We derive approximations to the (standardized) MSE of the covariance matrix estimation. This facilitates a comparison of different estimators at the second-order level, where differences do occur and depend on the bandwidth choice. A method of bandwidth selection is defined by minimizing the second-order effect in the mean squared error.
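The note's proposal follows a general recipe: evaluate an estimated risk criterion over a grid of candidate bandwidths and take the minimizer. The note's specific second-order moment expansion of the MSE is not reproduced here; in the sketch below, a leave-one-out cross-validation risk for a simple Nadaraya–Watson smoother stands in for the analytic MSE approximation, purely to illustrate the selection step. All function names and the simulated data are hypothetical:

```python
import math
import random

def nw_estimate(x0, xs, ys, h, skip=None):
    """Nadaraya-Watson estimate at x0 with a Gaussian kernel and
    bandwidth h. `skip` excludes one index (for leave-one-out CV)."""
    num = den = 0.0
    for i, (x, y) in enumerate(zip(xs, ys)):
        if i == skip:
            continue
        w = math.exp(-0.5 * ((x - x0) / h) ** 2)
        num += w * y
        den += w
    return num / den if den > 0 else 0.0

def loo_cv_score(xs, ys, h):
    """Leave-one-out CV estimate of prediction risk for bandwidth h
    (a stand-in for an analytic MSE approximation)."""
    n = len(xs)
    return sum((ys[i] - nw_estimate(xs[i], xs, ys, h, skip=i)) ** 2
               for i in range(n)) / n

def select_bandwidth(xs, ys, grid):
    """Pick the bandwidth on `grid` minimizing the estimated risk."""
    return min(grid, key=lambda h: loo_cv_score(xs, ys, h))

# Simulated data: noisy sine curve
random.seed(0)
xs = [i / 50 for i in range(100)]
ys = [math.sin(2 * math.pi * x) + random.gauss(0, 0.3) for x in xs]

grid = [0.02, 0.05, 0.1, 0.2, 0.5]
h_star = select_bandwidth(xs, ys, grid)
print(h_star)
```

The grid search trades off the bias of large bandwidths against the variance of small ones; a plug-in rule derived from the note's expansion would replace `loo_cv_score` with the analytic second-order MSE approximation.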
, maximum frequency, duration, energy, bandwidth and minimum frequency in the contact calls of spectacled parrotlets. Discriminant function analysis has shown individual and social subunit specific calls but also that individuals of different social classes share some calls. From our results we
multiples of this frequency (its ‘harmonics’). Many factors can be considered when classifying these sounds, such as the range of frequencies used (the sound’s ‘bandwidth’) and the power distribution for frequencies within the sound (the sound’s ‘centre of gravity’). While these factors can be applied to
There has recently been resurgent interest in the notion of cross-modal correspondences, i.e. stimulus features that may be preferentially integrated. The extent to which any such correspondences emanate from intrinsic anatomical connectivity or instead from learned (presumably statistical) regularities in the environment remains unresolved. Anatomical data from non-human primates suggest that high-frequency auditory representations together with peripheral visual field representations in primary cortices are a preferred locus of low-level integration, whereas functional studies in humans have repeatedly demonstrated effects with centrally presented stimuli and a range of auditory pitches/bandwidths. The present psychophysics and EEG study examined whether auditory–visual integration systematically varies with acoustic pitch and visual eccentricity. Subjects viewed two annuli (either foveally presented or at 12.5° eccentricity, with surface area controlled for cortical magnification) and indicated which, if either, changed its brightness. The paradigm followed a 3 × 3 within-subject design: (no, foveal, or peripheral brightness change) × (no tone, 500 Hz pure tone, or 4000 Hz pure tone; the two tones were matched for perceived loudness). Accuracy data were analyzed according to signal detection theory, using sensitivity (d′). Reaction time data were analyzed after dividing by detection rates, using the multisensory response enhancement metric (MRE; see Rach et al., 2011, Psychological Research). Preliminary data suggest a main effect of pitch (i.e. larger d′ and MRE when the multisensory conditions included a high-pitch vs. low-pitch sound). There was no evidence of a main effect of eccentricity or of an interaction between factors. Ongoing EEG analyses will likewise be discussed.
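The two dependent measures named in the abstract can be sketched briefly. Sensitivity d′ is the standard signal detection quantity z(hit rate) − z(false alarm rate); for MRE, one common formulation expresses the audiovisual speed-up as a percentage of the faster unisensory reaction time (the abstract's rate-corrected RTs would be substituted as inputs). A hedged sketch, with hypothetical function names and illustrative numbers, not the study's definitions or data:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index from signal detection theory: z(H) - z(F),
    where z is the inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def mre(rt_av, rt_a, rt_v):
    """Multisensory response enhancement (one common formulation):
    percentage speed-up of the audiovisual RT relative to the faster
    unisensory RT. Positive values indicate facilitation. Inputs may
    be detection-rate-corrected RTs, as in the abstract."""
    fastest_uni = min(rt_a, rt_v)
    return 100.0 * (fastest_uni - rt_av) / fastest_uni

print(round(d_prime(0.9, 0.1), 2))   # prints 2.56
print(round(mre(420, 480, 460), 1))  # prints 8.7
```

With a 90% hit rate and 10% false alarms, d′ is about 2.56; an audiovisual RT of 420 ms against a fastest unisensory RT of 460 ms yields roughly 8.7% enhancement.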