Search Results

Joëlle Provasi, Christelle Lemoine-Lardennois, Eric Orriols and Françoise Morange-Majoux

information about anticipatory mechanisms, such as visual exploration of temporal asynchrony, is to use an eye tracker. Eye tracking is particularly well suited to developmental studies because the dependent measure (i.e., the proportion of time spent looking at a particular area of interest) does not

Veronique Drai-Zerbib and Thierry Baccino

Musical sight-reading requires the simultaneous processing of multimodal information: visual, auditory, and motor. Does expertise in music rely on efficient cross-modal integration? This talk investigates this issue with two experiments. In the first, 30 expert and 31 non-expert musicians were required to report whether two successively presented fragments of classical music were the same or different. In half of the conditions the 61 participants received the fragments in the same modality (visual/visual); in the other half they received the fragments in cross-modal presentation (auditory/visual). Analyses of response times and errors showed that more experienced musicians seemed better able to transfer information from one modality to another. In a second experiment using eye tracking, 64 participants (26 expert and 38 non-expert musicians) were again required to report whether two successively presented fragments of classical music were the same or different, but in the cross-modal condition only. Visual and auditory cues were used to investigate whether a form of expert memory based on retrieval cues was at work in the more expert musicians. An accent mark (an emphasis placed on a note that contributes to the prosody of the musical phrase) was placed either congruently or incongruently during the auditory and reading phases. As expected, the analysis of fixations and errors supported the hypothesis of modal independence for expert musicians observed in the first experiment. Moreover, the analyses confirmed the cross-modal ability of expert memory, with accent marks serving as retrieval cues. Results are discussed in terms of an amodal memory in expert musicians, consistent with the theoretical work of Ericsson and Kintsch (1995): more experienced performers better integrate knowledge across modalities using retrieval cues.

Alan Bovik, Lawrence Cormack, Ian Van Der Linde and Umesh Rajashekar

January 2008. Abstract: DOVES, a database of visual eye movements, is a set of eye movements collected from 29 human observers as they viewed 101 natural calibrated images. Recorded using a high-precision dual-Purkinje eye tracker, the database consists of around 30,000 fixation points, and is believed to
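For readers who want to explore a fixation database of this kind programmatically, the sketch below shows one way to load and summarise per-image fixation data. It assumes a MATLAB-style .mat file whose file name and variable layout are purely hypothetical; the actual DOVES distribution format may differ, so consult its documentation.

```python
# Minimal sketch: load fixations for one image from a hypothetical
# DOVES-style .mat file and print summary statistics. The file name
# ("doves_image_001.mat") and variable name ("fixations") are assumptions
# for illustration only, not the database's documented layout.
import numpy as np
from scipy.io import loadmat

data = loadmat("doves_image_001.mat")       # hypothetical file name
fixations = np.asarray(data["fixations"])   # assumed shape: (N, 2) -> (x, y)

print(f"{fixations.shape[0]} fixations recorded for this image")
print("mean fixation location (x, y):", fixations.mean(axis=0))
print("fixation spread (std, pixels):", fixations.std(axis=0))
```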


Edited by Luuk van Waes, Mariëlle Leijten and Christophe Neuwirth

Digital media has become an increasingly powerful force in modern society. This volume brings together outstanding European, American and Australian research in "writing and digital media" and explores its cognitive, social and cultural implications. The book is divided into five sections, covering major areas of research: writing modes and writing environments (e.g. speech technology), writing and communication (e.g. hypervideos), digital tools for writing research (e.g. web analysis tools, keystroke logging and eye-tracking), writing in online educational environments (e.g. collaborative writing in L2), and social and philosophical aspects of writing and digital media (e.g. CMC, electronic literacy and the global digital divide). In addition to presenting programs of original research by internationally known scholars from a variety of disciplines, each chapter provides a comprehensive review of the current state of the art in the field and suggests directions for future research.

Trevor B. Penney, Edward N. K. Yim and Kwun Kei Ng

allocation of attention to the timing signal by simultaneously presenting, on some trials, brief peripheral distractor stimuli. However, participants were required to fixate on the timing signal and to avoid making saccades to the distractor stimuli. An eye-tracker was used to confirm that participants did

Jidong Chen, Melissa Bowerman, Falk Huettig and Asifa Majid

Falk Huettig (a, *), Jidong Chen (b), Melissa Bowerman (a) and Asifa Majid (a). (a) Max Planck Institute for Psycholinguistics, P.O. Box 310, 6500 AH Nijmegen, The Netherlands; (b) California State University, Fresno, CA, USA. *Corresponding author, e-mail: falk.huettig@mpi.nl. Abstract: In two eye-tracking studies we

Nicholas D. Smith, David P. Crabb, Fiona C. Glen, Robyn Burton and David F. Garway-Heath

in a variety of naturalistic situations in order to gain a wider insight into the functioning of those with visual field loss. The principal aim of this study was to use eye tracking to examine the hypothesis that patients with bilateral glaucomatous field defects exhibit significant differences in

Diogo Marques

Concerning the mutable, three-dimensional, kinetic and tactile aspects of 3D words in works of digital poetry, it is my intention to outline a comprehensive analysis of this genre in virtual environments such as CAVEs and Second Life. I will try to demonstrate that on these virtual platforms words can behave as living organisms that establish a paradigmatic relationship of cyclic creation between author and reader, referring to what I call Poetic Words for the High Tech Generation. Finally, this chapter will also present some evidence regarding the potential of digital poetry as an alternative method of teaching poetry through experiential learning, by analysing some of the studies of reading and visualisation habits conducted with eye-tracking technology.


Pablo Romero-Fresco

Although interest in live subtitles is shifting from quantity to quality, given that broadcasters such as the BBC already subtitle 100% of their programmes, hardly any research has been carried out on how viewers receive this type of subtitle. The aim of this article is to cast some light on this issue by means of two experiments focussing on comprehension and viewing patterns of subtitled news. The results obtained in the first experiment suggest that some of the current subtitles provided for the news in the UK prevent viewers from focusing on both the images and the subtitles, which results in overall poor comprehension of the programme. In order to ascertain whether this is due to the speed of the subtitles or to other factors, a second experiment is also included. In this case, an eye tracker was used to record the participants' viewing patterns. The results show that the word-for-word display mode of live subtitles leads viewers to spend 90% of their time looking at the subtitles and only 10% looking at the images, which affects overall comprehension.
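The 90%/10% split reported above is, in essence, a dwell-time proportion over two areas of interest (AOIs): the subtitle band and the rest of the frame. As a rough illustration of that measure (not the authors' actual analysis pipeline), the following sketch computes the proportion from a list of fixations; the fixation records, screen dimensions and AOI boundary are all hypothetical, and real eye-tracker exports differ by vendor and study.

```python
# Minimal sketch: proportion of fixation time in a "subtitle" AOI vs. the
# rest of the frame. Fixation records and the AOI boundary are hypothetical;
# real eye-tracker exports and AOI definitions will differ by vendor/study.
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float           # horizontal position in pixels
    y: float           # vertical position in pixels (0 = top of screen)
    duration_ms: float

def dwell_proportions(fixations, subtitle_top_px=620, screen_height_px=720):
    """Return (subtitle_share, image_share) of total fixation time."""
    subtitle_time = sum(f.duration_ms for f in fixations
                        if subtitle_top_px <= f.y <= screen_height_px)
    total_time = sum(f.duration_ms for f in fixations)
    if total_time == 0:
        return 0.0, 0.0
    sub = subtitle_time / total_time
    return sub, 1.0 - sub

# Toy usage with made-up fixations:
fixations = [Fixation(300, 650, 400), Fixation(512, 200, 250),
             Fixation(400, 680, 900), Fixation(600, 700, 500)]
sub_share, img_share = dwell_proportions(fixations)
print(f"subtitles: {sub_share:.0%}, images: {img_share:.0%}")
```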

Yi-Chuan Chen and Gert Westermann

Infants are able to learn novel associations between visual objects and auditory linguistic labels (such as a dog and the sound /dɔg/) by the end of their first year of life. Surprisingly, at this age they seem to fail to learn the associations between visual objects and natural sounds (such as a dog and its barking sound). Researchers have therefore suggested that linguistic learning is special (Fulkerson and Waxman, 2007) or that unfamiliar sounds overshadow visual object processing (Robinson and Sloutsky, 2010). However, in previous studies visual stimuli were paired with arbitrary sounds in contexts lacking ecological validity. In the present study, we created animations of two novel animals and two realistic animal calls to construct two audiovisual stimuli. In the training phase, each animal was presented moving in ways that mimicked real-life animal behaviour: in a short movie, the animal ran (or jumped) from the periphery to the centre of the monitor, and it made calls while raising its head. In the test phase, static images of both animals were presented side-by-side and the sound for one of the animals was played. Infant looking times to each stimulus were recorded with an eye tracker. We found that, upon hearing the sound, 12-month-old infants preferentially looked at the corresponding animal. These results show that 12-month-old infants are able to learn novel associations between visual objects and natural sounds in an ecologically valid situation, thereby challenging our current understanding of the development of crossmodal association learning.
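A standard way to quantify a test-phase result like the one described above is a preferential-looking analysis: for each infant, compute the proportion of looking time directed at the sound-matched animal and compare the group mean against the 0.5 chance level. The sketch below illustrates that computation on made-up numbers; it is not the authors' analysis code, and the per-infant looking times are hypothetical.

```python
# Minimal sketch of a preferential-looking analysis for a two-image test
# trial: the preference score is looking time to the target (the animal
# matching the sound) divided by total looking time to both animals, and
# the group mean is compared against the 0.5 chance level. All numbers
# below are made up for illustration.
import math
import statistics

def preference_score(target_ms, distractor_ms):
    """Proportion of looking time on the sound-matched animal."""
    total = target_ms + distractor_ms
    return target_ms / total if total > 0 else float("nan")

# Hypothetical per-infant looking times (ms) to target vs. distractor:
trials = [(4200, 2900), (3800, 3100), (5100, 2400), (3600, 3500),
          (4700, 2600), (4000, 3300), (4400, 2800), (3900, 3000)]
scores = [preference_score(t, d) for t, d in trials]

mean = statistics.mean(scores)
sem = statistics.stdev(scores) / math.sqrt(len(scores))
t_stat = (mean - 0.5) / sem  # one-sample t against chance (0.5)
print(f"mean preference = {mean:.2f}, t({len(scores) - 1}) = {t_stat:.2f}")
```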