Search Results

Showing 1–10 of 15 items for Author or Editor: Fiona N. Newell (search level: All)
Author: Fiona N. Newell

Synaesthesia has been known for centuries, but the last 20 years in particular have seen a renewed interest in the condition within the scientific community. This Special Issue of the journal Multisensory Research, entitled ‘Synaesthesia and Cross-Modal Perception’, stemmed from a meeting on this topic held in Dublin last year, hosted by Kevin Mitchell (Smurfit Institute of Genetics) and Fiona Newell (School of Psychology) at Trinity College Dublin. We were particularly keen to bring together researchers investigating multisensory processes in the general population with researchers specifically interested in synaesthesia. We, and many others, felt that it was timely …

Full Access
In: Multisensory Research
Author: Fiona N. Newell

We recently reported that efficient multisensory integration is affected by the ageing process. Specifically, we found that older persons were more susceptible to the auditory-flash illusion (Shams et al., 2000) than younger adults, even at relatively large stimulus onset asynchronies of more than 170 ms. Furthermore, susceptibility to this illusion increased with age (i.e., across individuals) and over time (i.e., over two years in the same individual). Our findings also suggest that inefficient multisensory integration is associated with balance maintenance and control: older persons with a history of falling were more susceptible to the auditory-flash illusion than their age-matched counterparts (Setti et al., 2011a), and more illusions were reported in older adults in a standing than in a seated position. Importantly, we found no differences in sensory acuity between older adults with and without a history of falls, suggesting that the effect arises from interactions in the brain rather than at the sensory periphery. We also found that during spatial navigation, older persons with a history of falling, relative to an age-matched cohort, failed to compensate for changes in their visual environment (full or blurred visual input) by adjusting their gait accordingly (Barrett et al.). Our findings suggest that it is temporal interactions between the sensory systems in the brain (see, e.g., Setti et al., 2011b), rather than the nature of the information encoded at the periphery, that underpin efficient perception-to-action in older adults.
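As a concrete illustration of how susceptibility to this illusion can be scored per stimulus onset asynchrony, consider the following minimal Python sketch. It is not the authors' analysis code: only the one-flash/two-beep illusion condition and the 170 ms SOA figure come from the abstract; the trial layout and function name are illustrative assumptions.

from collections import defaultdict

def illusion_susceptibility(trials):
    # trials: dicts with keys 'soa_ms', 'n_flashes', 'n_beeps',
    # 'reported_flashes'. Returns, per SOA, the proportion of
    # illusion-inducing trials (1 flash + 2 beeps) on which two
    # flashes were (illusorily) reported.
    hits, counts = defaultdict(int), defaultdict(int)
    for t in trials:
        if t['n_flashes'] == 1 and t['n_beeps'] == 2:
            counts[t['soa_ms']] += 1
            hits[t['soa_ms']] += t['reported_flashes'] == 2
    return {soa: hits[soa] / counts[soa] for soa in counts}

# Susceptibility at SOAs above the 170 ms cut-off can then be compared
# between younger and older groups, e.g.:
rates = illusion_susceptibility([
    {'soa_ms': 230, 'n_flashes': 1, 'n_beeps': 2, 'reported_flashes': 2},
    {'soa_ms': 230, 'n_flashes': 1, 'n_beeps': 2, 'reported_flashes': 1},
])
print(rates)  # {230: 0.5}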

Full Access
In: Seeing and Perceiving

Major findings in attractiveness research, such as the role of averageness and symmetry, have emerged primarily from neutral, static visual stimuli. However, it has increasingly been shown that ratings of attractiveness can be modulated within unisensory and multisensory modes by factors including emotional expression or additional information about the person. For example, previous research has indicated that humorous individuals are rated as more desirable than their non-humorous equivalents (Bressler and Balshine, 2006). In two experiments we measured within- and cross-sensory modulation of the attractiveness of unfamiliar faces. In Experiment 1 we examined whether manipulating the number and type of expressions shown across a series of images of a person influences the attractiveness rating for that person. Results indicate that for happy expressions, ratings of attractiveness gradually increase as the proportion of happy facial expressions increases relative to the number of neutral expressions. In contrast, an increase in the proportion of angry expressions was not associated with an increase in attractiveness ratings. In Experiment 2 we investigated whether perceived attractiveness can be influenced by multisensory information provided during exposure to the face image. Ratings were compared across face images presented with or without voice information. In addition, we provided either an emotional auditory cue (e.g., laughter) or a neutral cue (e.g., coughing) to assess whether social information affects perceived attractiveness. Results show that multisensory information about a person can increase attractiveness ratings, but that the emotional content of the cross-modal information can affect preference for some faces over others.

Full Access
In: Seeing and Perceiving

We investigated age-related effects in cross-modal interactions using tasks assessing spatial perception and object perception. Specifically, an audio-visual object identification task and an audio-visual object localisation task were used to assess putatively distinct perceptual functions in four age groups: children (8–11 years), adolescents (12–14 years), young adults and older adults. Participants were required to either identify or locate target objects. Targets were specified as unisensory (visual/auditory) or multisensory (audio-visual congruent/audio-visual incongruent) stimuli. We found age-related effects in performance across both tasks. Both children and older adults were less accurate at locating objects than adolescents or young adults. Children were also less accurate at identifying objects relative to young adults, but performance did not differ between young adults, adolescents and older adults. A greater cost in accuracy for audio-visual incongruent relative to audio-visual congruent targets was found for older adults, children and adolescents relative to young adults. However, we failed to find a benefit in performance for any age group, in either the identification or the localisation task, for audio-visual congruent targets relative to visual-only targets. Our findings suggest that visual information dominated when identifying or localising audio-visual stimuli. Furthermore, our results suggest that object identification and object localisation abilities mature late in development, and that spatial abilities may be more prone to age-related decline than object identification abilities. In addition, the results suggest that more sensitive measures may be required to reveal multisensory facilitation and differences in cross-modal interactions across higher-level perceptual tasks.
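The congruency 'cost' reported here can be made concrete with a short sketch. The following Python is purely illustrative (the data layout, labels and function name are assumptions, not the authors' code): it expresses the cost as the accuracy difference between audio-visual congruent and incongruent targets for each group and task.

def congruency_cost(accuracy):
    # accuracy: {(group, task, condition): proportion correct}, where each
    # (group, task) pair has both an 'AV_congruent' and an 'AV_incongruent'
    # entry; any other conditions (e.g. 'visual') are simply ignored.
    groups_tasks = {(g, t) for (g, t, c) in accuracy}
    return {(g, t): accuracy[(g, t, 'AV_congruent')]
                    - accuracy[(g, t, 'AV_incongruent')]
            for (g, t) in groups_tasks}

# e.g. a larger cost for children than for young adults in localisation:
demo = {
    ('children',     'localisation', 'AV_congruent'):   0.80,
    ('children',     'localisation', 'AV_incongruent'): 0.62,
    ('young_adults', 'localisation', 'AV_congruent'):   0.91,
    ('young_adults', 'localisation', 'AV_incongruent'): 0.86,
}
print(congruency_cost(demo))  # costs of approx. 0.18 vs 0.05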

Full Access
In: Multisensory Research

The interaction of audio–visual signals conveying information about the emotional state of others may play a significant role in social engagement. There is ample evidence that recognition of visual emotional information does not necessarily depend on conscious processing. However, little is known about how multisensory integration of affective signals relates to visual awareness. Previous research using masking experiments has shown that audio–visual integration is relatively independent of visual awareness. However, masking does not capture the dynamic nature of consciousness, in which stimulus selection depends on a multitude of signals. Therefore, we presented neutral and happy faces to one eye and houses to the other, resulting in perceptual rivalry between the two stimuli, while at the same time we presented laughing, coughing or no sound. Participants were asked to report when they saw the faces, the houses or mixtures of the two, and were instructed to ignore the playback of sounds. When happy facial expressions were shown, participants reported seeing fewer houses than when neutral expressions were shown. In addition, human sounds increased the viewing time of faces in comparison to when there was no sound. Taken together, the emotional expression of a face affects whether it is selected for visual awareness, and this selection is further facilitated by human sounds.

Full Access
In: Seeing and Perceiving

This study investigated whether performance in recognising and locating target objects benefited from the simultaneous presentation of a crossmodal cue. Furthermore, we examined whether these ‘what’ and ‘where’ tasks were affected by developmental processes by testing across different age groups. Using the same set of stimuli, participants conducted either an object recognition task or an object location task. For the recognition task, participants were required to respond to two of four target objects (animals) and withhold responses to the remaining two objects. For the location task, participants responded when an object occupied either of two target locations and withheld responses if the object occupied a different location. Target stimuli were presented through vision alone, audition alone, or bimodally. In both tasks, cross-modal cues were either congruent or incongruent. The results revealed that response times in both the object recognition task and the object location task benefited from the presence of a congruent cross-modal cue, relative to incongruent or unisensory conditions. In the younger adult group the effect was strongest for response times, although the same pattern was found for accuracy in the object location task but not in the recognition task. Following recent studies on multisensory integration in children (e.g., Brandwein, 2010; Gori), we then tested performance in children (i.e., 8–14 year olds) using the same tasks. Although overall performance was affected by age, our findings suggest interesting parallels between children and adults in the benefit of congruent cross-modal cues, for both the object recognition and location tasks.

Full Access
In: Seeing and Perceiving

Multisensory Research 26 Supplement (2013) 197–198

Poster Presentation: The effect of ageing on acoustic facilitation of object movement detection within optic-flow

Eugenie Roudaia¹, Finnegan J. Calabro², Lucia M. Vaina² and Fiona N. Newell¹
¹Trinity College Dublin, Ireland; ²Boston University, Boston, MA, USA

Abstract: Multisensory integration appears to be enhanced in older age (e.g., Laurienti et al., 2006; Maguinness et al., 2011). Given that motion perception declines with ageing, we examined whether multisensory integration may enhance motion perception in older adults. Calabro et al. (2011) recently showed that …

Full Access
In: Multisensory Research

The presence of a moving sound has been shown to facilitate the detection of an independently moving visual target embedded among an array of identical moving objects simulating forward self-motion (Calabro et al., 2011, Proc. R. Soc. B). Given that the perception of object motion within self-motion declines with aging, we investigated whether older adults can also benefit from the presence of a congruent dynamic sound when detecting object motion within self-motion. Visual stimuli consisted of nine identical spheres randomly distributed inside a virtual rectangular prism. For 1 s, all the spheres expanded outward, simulating forward observer translation at a constant speed. One of the spheres (the target) had independent motion, either approaching or moving away from the observer at one of three different speeds. In the visual condition, stimuli contained no sound. In the audiovisual condition, the visual stimulus was accompanied by a broadband noise co-localized with the target, whose loudness increased or decreased congruently with the target’s direction. Participants reported which of the spheres had independent motion. Younger participants showed higher target detection accuracy in the audiovisual than in the visual condition at the slowest speed. Older participants showed overall poorer target detection accuracy than the younger participants, and the presence of the sound had no effect on older participants’ target detection accuracy at any speed. These results indicate that aging may impair cross-modal integration in some contexts. Potential reasons for the absence of auditory facilitation in older adults are discussed.
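To make the stimulus description concrete, here is a minimal Python sketch of the trial geometry under stated assumptions: the nine spheres, the 1 s duration and the direction-congruent loudness ramp come from the abstract, while the prism extent, speeds and the linear form of the ramp are illustrative guesses rather than the authors' implementation.

import random

N_SPHERES, DURATION_S = 9, 1.0   # from the abstract
OBSERVER_SPEED = 1.0             # assumed forward translation speed (m/s)

def make_trial(target_dir, target_speed):
    # target_dir: +1 approaching the observer, -1 moving away.
    spheres = [(random.uniform(-1.0, 1.0),   # x, assumed prism extent
                random.uniform(-1.0, 1.0),   # y
                random.uniform(2.0, 6.0))    # z, depth ahead of the observer
               for _ in range(N_SPHERES)]
    target = random.randrange(N_SPHERES)

    def position(i, t):
        # Forward self-motion brings every sphere closer (z decreases,
        # so spheres expand in the image); the target additionally
        # moves in depth on its own.
        x, y, z = spheres[i]
        z -= OBSERVER_SPEED * t
        if i == target:
            z -= target_dir * target_speed * t
        return x, y, z

    def loudness(t):
        # Audiovisual condition: loudness ramps up for an approaching
        # target and down for a receding one (assumed linear, 0-1 range).
        return min(1.0, max(0.0, 0.5 + 0.5 * target_dir * t / DURATION_S))

    return position, loudness, target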

Full Access
In: Multisensory Research

Multisensory Research 26 Supplement (2013) 154–155

Poster Presentation: Audio-visual interactions in the perception of intention from actions

Hanni Kiiski¹, Ludovic Hoyet², Katja Zibrek², Carol O’Sullivan² and Fiona N. Newell¹
¹School of Psychology and Institute of Neuroscience, Trinity College Dublin, Ireland; ²Graphics, Vision and Visualisation Group, School of Computer Science and Statistics, Trinity College Dublin, Ireland

Abstract: Although humans can infer other people’s intentions from their visual actions (Blakemore and Decety, 2001), it is not well understood how auditory information can influence this process. We investigated whether auditory …

Full Access
In: Multisensory Research

Multisensory Research 26 Supplement (2013) 175

Poster Presentation: Perceptual training alters the time window of multisensory integration

David P. McGovern¹, Neil W. Roach², Eugenie Roudaia¹ and Fiona N. Newell¹
¹Trinity College Institute of Neuroscience, Trinity College Dublin, Ireland; ²Visual Neuroscience Group, The University of Nottingham, UK

Abstract: Given the inherently noisy nature of sensory signals, the brain must maintain a degree of tolerance for temporal discrepancies when combining multisensory information. While the limits of this temporal window of integration are well established, little is known about how they are …

Full Access
In: Multisensory Research