Mehri from the gravel desert village of Rabkut. All speakers have been educated to secondary level; they have formal knowledge of Arabic, but use the local language in their daily lives. Note that the Mehri and Śḥerɛ̄t data analysed here are a subset of a >500-minute body of audio-visual data recorded
Showing 1–10 of 826 results for: audio-visual data
Variation in multimodal constructions during task-based interaction
Janet C.E. Watson and Jack Wilson
Mauro Ursino, Cristiano Cuppini, Elisa Magosso, Ulrik Beierholm and Ladan Shams
model behavior in the presence of multisensory inputs, and b) comparing model behavior with behavioral data, in conditions where the reliability of the stimuli or the prior are manipulated. In particular, we analyze the differences in spatial audio-visual integration when the subject experiences two
Jonathan M. P. Wilbiks and Benjamin J. Dyson
1. Introduction: Auditory and visual sensory systems show remarkable flexibility in updating multi-sensory processes on the basis of previous discrepancies between times of arrival for sound and vision. First, there are natural constraints associated with audio-visual processing due to
Ryota Miyauchi, Dea-Gee Kang, Yukio Iwaya and Yôiti Suzuki
localization of audition and vision, we focused on auditory and visual localizations in horizontal directions beyond a few degrees of the fovea. Although there is still little qualitative and quantitative psychological data on the perceived locations of audio-visual events simultaneously presented in the
Issues of Dynamicity and Multimodality
The lectures encompass such main paradigms as blending and mental space theory, conceptual metaphor and metonymy, construction and cognitive grammars, image schemas, and mental simulation in relation to semantics. Overall, Alan Cienki shows that taking the usage-based commitment seriously with audio-visual data raises new issues and questions for theoretical models in cognitive linguistics.
The lectures for this book were given at The China International Forum on Cognitive Linguistics in May 2013.
Research Data Journal for the Humanities and Social Sciences is soliciting new submissions, in particular related to the following domains:
• Archaeology and geo-archaeological research
• Social and economic history
• Oral history
• Language and literature
• Audio-visual media
The Research Data Journal for the Humanities and Social Sciences (RDJ) is a peer-reviewed journal, which is designed to comprehensively document and publish deposited datasets and to facilitate their online exploration. RDJ is e-only and open access, and focuses on research across the Social Sciences and the Humanities.
The publication language is English. RDJ contains data papers: scholarly publications of medium length (with a maximum of 2500 words) containing a description of a dataset and putting the data in a research context.
How it works: Before publication, a data paper is assessed by peer reviewers and data specialists, who give feedback to the author and indicate the improvements necessary for acceptance. DANS is the founder of RDJ and is responsible for the Editorial Board of the Humanities Section. For the Social Sciences Section, the Editorial Board is coordinated by the UK Data Service. Specific Editorial Boards will be set up for sub-fields.
Data papers receive a persistent identifier (DOI). The author, usually also the data depositor, will receive publication credits. Datasets that underpin the submitted data papers should be formally published in a trusted digital archive or repository.
RDJ is published by Brill in collaboration with DANS.
Online submission: Articles for publication in Research Data Journal for the Humanities and Social Sciences can be submitted online through Editorial Manager.
Interdisciplinary Dialogues in the Field
Edited by Philip Sapirstein and David Scahill
Jonathan M. P. Wilbiks
stimuli were also manipulated with regard to their stimulus congruency relationship. The main finding was that the primary factor for audio-visual perceived synchrony was temporal coincidence, with stimulus congruency only playing a role when temporal information was ambiguous. In terms of temporal
analysis, split by stimulation type, was performed to reveal two interesting observations regarding audio-visual integration and irrelevant stimulus–response integration. 3.2.1. Audio-Visual Integration A significant interaction between pitch and color was obtained only in the sham condition, F
Carolina Sánchez-García, Sonia Kandel, Christophe Savariaux, Nara Ikumi and Salvador Soto-Faraco
When both are present, visual and auditory information are combined in order to decode the speech signal. Past research has addressed the extent to which visual information contributes to distinguishing confusable speech sounds, but has usually ignored the continuous nature of speech perception. Here we tap into the temporal course of the contribution of visual and auditory information during the process of speech perception. To this end, we designed an audio–visual gating task with videos recorded with a high-speed camera. Participants were asked to identify gradually longer fragments of pseudowords varying in the central consonant. Different Spanish consonant phonemes with different degrees of visual and acoustic saliency were included, and tested in visual-only, auditory-only and audio–visual trials. The data showed different patterns of contribution of unimodal and bimodal information during identification, depending on the visual saliency of the presented phonemes. In particular, for phonemes which are clearly more salient in one modality than the other, audio–visual performance equals that of the best unimodal condition. For phonemes with more balanced saliency, audio–visual performance was better than in both unimodal conditions. These results shed new light on the temporal course of audio–visual speech integration.