Search Results

Gesture in Modern South Arabian languages

Variation in multimodal constructions during task-based interaction

Janet C.E. Watson and Jack Wilson

… Mehri from the gravel desert village of Rabkut. All speakers have been educated to secondary level; they have formal knowledge of Arabic, but use the local language in their daily lives. Note that the Mehri and Śḥerɛ̄t data analysed here is a subset of a >500-minute body of audio-visual data recorded …

Mauro Ursino, Cristiano Cuppini, Elisa Magosso, Ulrik Beierholm and Ladan Shams

… model behavior in the presence of multisensory inputs, and b) comparing model behavior with behavioral data in conditions where the reliability of the stimuli or the prior is manipulated. In particular, we analyze the differences in spatial audio-visual integration when the subject experiences two …
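The reference point for such reliability manipulations is the textbook rule of reliability-weighted (maximum-likelihood) cue fusion, in which each cue is weighted by its inverse variance. The Python sketch below illustrates only that standard rule, under an assumed Gaussian-noise model and with invented example values; it is not the authors' neural network model.

def fuse_av(x_a, var_a, x_v, var_v):
    # Inverse-variance weights: the more reliable (less noisy) cue dominates.
    w_a, w_v = 1.0 / var_a, 1.0 / var_v
    x_fused = (w_a * x_a + w_v * x_v) / (w_a + w_v)
    var_fused = 1.0 / (w_a + w_v)  # fused estimate beats either cue alone
    return x_fused, var_fused

# Invented values: vision (variance 1 deg^2) is more reliable than audition
# (variance 4 deg^2), so the fused location is pulled toward the visual cue.
loc, var = fuse_av(x_a=10.0, var_a=4.0, x_v=12.0, var_v=1.0)
print(loc, var)  # -> 11.6 0.8

Manipulating stimulus reliability in an experiment amounts to changing var_a or var_v and checking whether observers re-weight the cues as this rule predicts.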

Jonathan M. P. Wilbiks and Benjamin J. Dyson

1. Introduction
Auditory and visual sensory systems show remarkable flexibility in updating multi-sensory processes on the basis of previous discrepancies between times of arrival for sound and vision. First, there are natural constraints associated with audio-visual processing due to …

Ryota Miyauchi, Dae-Gee Kang, Yukio Iwaya and Yôiti Suzuki

… localization of audition and vision, we focused on auditory and visual localizations in horizontal directions beyond a few degrees of the fovea. Although there is still little qualitative and quantitative psychological data on the perceived locations of audio-visual events simultaneously presented in the …


Alan Cienki

Cognitive linguistics is purported to be a usage-based approach, yet only recently has research in some of its subfields turned to spontaneous spoken (rather than written) language data. Alan Cienki’s Ten Lectures on Spoken Language and Gesture from the Perspective of Cognitive Linguistics considers what it means to apply different approaches from within this field to the dynamic, multimodal combination of speech and gesture.

The lectures encompass such main paradigms as blending and mental space theory, conceptual metaphor and metonymy, construction and cognitive grammars, image schemas, and mental simulation in relation to semantics. Overall, Alan Cienki shows that taking the usage-based commitment seriously with audio-visual data raises new issues and questions for theoretical models in cognitive linguistics.

The lectures for this book were given at The China International Forum on Cognitive Linguistics in May 2013.

Contributing to the transparency of research, accelerating dissemination, and fostering the reuse of scholarly data.

Research Data Journal for the Humanities and Social Sciences is soliciting new submissions, in particular related to the following domains:
• Archaeology and geo-archaeological research
• Social and economic history
• Oral history
• Language and literature
• Audio-visual media

The Research Data Journal for the Humanities and Social Sciences (RDJ) is a peer-reviewed journal designed to document and publish deposited datasets comprehensively and to facilitate their online exploration. RDJ is e-only and open access, and focuses on research across the Social Sciences and the Humanities.

The publication language is English. RDJ contains data papers: medium-length scholarly publications (maximum 2,500 words) that describe a dataset and place the data in a research context.

How it works
Before publication, a data paper is assessed by peer reviewers and data specialists, who give feedback to the author and indicate the improvements necessary for acceptance. DANS is the founder of RDJ and is responsible for the Editorial Board of the Humanities Section; the Editorial Board of the Social Sciences Section is coordinated by the UK Data Service. Specific Editorial Boards will be set up for individual sub-fields.

Data papers receive a persistent identifier (DOI). The author, usually also the data depositor, will receive publication credits. Datasets that underpin the submitted data papers should be formally published in a trusted digital archive or repository.

RDJ is published by Brill in collaboration with DANS.

Online submission: Articles for publication in Research Data Journal for the Humanities and Social Sciences can be submitted online through Editorial Manager.

New Directions and Paradigms for the Study of Greek Architecture

Interdisciplinary Dialogues in the Field


Edited by Philip Sapirstein and David Scahill

New Directions and Paradigms for the Study of Greek Architecture comprises 20 chapters by nearly three dozen scholars who describe recent discoveries, new theoretical frameworks, and applications of cutting-edge techniques in their architectural research. The contributions are united by several broad themes that represent the current directions of study in the field: the organization and techniques used by ancient Greek builders and designers; the use and life history of Greek monuments over time; the communication of ancient monuments with their intended audiences, together with their reception by later viewers; the mining of large sets of architectural data for socio-economic inference; and the recreation and simulation of audio-visual experiences of ancient monuments and sites by means of digital technologies.

Jonathan M. P. Wilbiks

… stimuli were also manipulated with regard to their stimulus congruency relationship. The main finding was that the primary factor in audio-visual perceived synchrony was temporal coincidence, with stimulus congruency playing a role only when temporal information was ambiguous. In terms of temporal …

Sharon Zmigrod

… analysis, split by stimulation type, was performed to reveal two interesting observations regarding audio-visual integration and irrelevant stimulus–response integration.
3.2.1. Audio-Visual Integration
A significant interaction between pitch and color was obtained only in the sham condition, F …

Carolina Sánchez-García, Sonia Kandel, Christophe Savariaux, Nara Ikumi and Salvador Soto-Faraco

When both are present, visual and auditory information are combined to decode the speech signal. Past research has addressed the extent to which visual information contributes to distinguishing confusable speech sounds, but has usually ignored the continuous nature of speech perception. Here we examine the time course of the contributions of visual and auditory information during speech perception. To this end, we designed an audio–visual gating task with videos recorded with a high-speed camera. Participants were asked to identify gradually longer fragments of pseudowords varying in the central consonant. Spanish consonant phonemes with different degrees of visual and acoustic saliency were included and tested in visual-only, auditory-only, and audio–visual trials. The data showed different patterns of contribution of unimodal and bimodal information during identification, depending on the visual saliency of the presented phonemes. In particular, for phonemes that are clearly more salient in one modality than the other, audio–visual performance equalled that of the better unimodal condition; for phonemes with more balanced saliency, audio–visual performance exceeded both unimodal conditions. These results shed new light on the time course of audio–visual speech integration.