When deciding on a product’s quality, we often pick it up to gauge its weight. If it is heavy enough, we tend to think that it is good quality. We have recently shown that the weight of a dish can affect the taste and quality perception of the food it contains. Here, we varied the weight of spoons in order to determine whether the weight or size of the cutlery might influence taste perception. Teaspoons and tablespoons were tested, with one of each spoon size artificially weighted with lead hidden in the handle (teaspoons: 2.35 and 5.67 g; tablespoons: 3.73 and 10.84 g). Participants tasted yoghurt from each spoon and rated the yoghurt’s perceived density, price, sweetness, and pleasantness. Four within-participant ANOVAs were used to test the effects of spoon size and spoon weight on each attribute. The perceived density of the yoghurt was affected by the spoon’s weight, with yoghurt from the light spoons being perceived as thicker than yoghurt sampled from the heavy spoons. The perceived price of the yoghurt also varied with spoon weight, such that lighter spoons made the yoghurt taste more expensive. The most reliable effect was an interaction between spoon weight and spoon size on sweetness perception: heavy teaspoons and light tablespoons made the yoghurt appear sweeter. These data support the growing body of research demonstrating that tableware (and silverware) can affect consumers’ judgements without their being aware of it.
Vanessa Harrar and Charles Spence
Vanessa Harrar, Charles Spence and Laurence R. Harris
Generally speaking, multisensory integration is more likely to occur when the stimuli are synchronous (Stein and Meredith, 1993). Repeated exposure to temporally offset multisensory stimuli can change the perceived delay between the stimuli so that synchrony is perceived closer to the adapted delay rather than physical synchrony (Fujisaki et al., 2004). If the perception of synchrony is adaptable, might the point (or delay) of maximal integration also be altered after adaptation? Temporal adaptation might be achieved by changing the processing times of the component stimuli (Harrar and Harris, 2008; Navarra et al., 2009), or by changing the integration mechanism. In the present study, each participant underwent daily adaptation to either synchronous or asynchronous (auditory lagging by 200 ms, or visual lagging by 60 ms) stimulus pairs. To assess unimodal processing time changes, we measured reaction times (RTs) to audio and visual stimuli after adaptation. In order to assess the effects of adaptation on multisensory integration, we measured RTs to synchronously presented AV stimuli and compared these with the RTs predicted from Miller’s race model (Miller, 1982) for each participant (Molholm et al., 2004). The results comparing RTs following synchronous and asynchronous adaptation conditions are discussed in the context of perception versus action and current models of multisensory integration. The RTs changed considerably over a period of a week; these patterns are discussed in the context of learning to perceive synchrony.
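Miller’s race model bound can be illustrated with a short sketch. Below, hypothetical reaction-time samples (not the study’s data) are tested against the inequality P(RT_AV ≤ t) ≤ P(RT_A ≤ t) + P(RT_V ≤ t); times at which the audiovisual CDF exceeds this bound are the violations taken as evidence of multisensory integration. The function names and toy data are our own illustration:

```python
# Miller's (1982) race model inequality: for every time t,
#   P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t)
# Audiovisual RTs faster than this bound predicts (violations) are
# taken as evidence of multisensory integration.

def ecdf(rts, t):
    """Empirical cumulative probability of responding by time t."""
    return sum(rt <= t for rt in rts) / len(rts)

def race_model_violations(rt_a, rt_v, rt_av, times):
    """Return the times at which the AV CDF exceeds the race model bound."""
    violations = []
    for t in times:
        bound = min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
        if ecdf(rt_av, t) > bound:
            violations.append(t)
    return violations

# Hypothetical RTs in milliseconds (illustration only):
rt_a = [320, 340, 360, 380, 400]   # auditory-alone trials
rt_v = [300, 330, 350, 370, 390]   # visual-alone trials
rt_av = [240, 250, 260, 300, 310]  # faster than either channel alone
times = range(200, 401, 10)
print(race_model_violations(rt_a, rt_v, rt_av, times))  # violations between 240 and 340 ms
```

In practice the comparison is made per participant on quantised RT distributions; this sketch only shows the shape of the computation.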
Vanessa Harrar, Georg Meyer and Charles Spence
Vanessa Harrar, Charles Spence and Tamar R. Makin
The body is represented in a somatotopic framework such that adjacent body parts are represented next to each other in the brain. We utilised the organisation of the somatosensory cortex to study the generalisation pattern of tactile perceptual learning. Perceptual learning refers to the process of long-lasting improvement in the performance of a perceptual task following persistent sensory exposure. In order to test whether perceptual learning generalises to neighbouring brain/body areas, 12 participants were trained on a tactile discrimination task on one fingertip (using tactile oriented gratings) over the course of four days. Thresholds for tactile acuity were estimated prior to, and following, the training for the ‘trained’ finger and three additional fingers: ‘adjacent’, ‘homologous’ (the same finger as trained but on the opposite hand) and ‘other’ (which was neither adjacent nor homologous to the trained finger). Identical threshold estimation, without any training, was also carried out for a control group. Following training, tactile thresholds were improved (as compared to those of the control group). Importantly, the improved performance was not exclusive to the trained finger; it generalised to the adjacent and homologous fingers, but not to the ‘other’ finger. We found that perceptual learning indeed generalises in a way that can be predicted by the topography of the somatosensory cortex, suggesting that direct sensory experience is not necessary for perceptual learning. These findings may be translated into rehabilitation procedures that train the partially-deprived cortex, for example following amputation or blindness in adults, using similar principles of perceptual learning generalisation.
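The abstract does not specify how the tactile acuity thresholds were estimated; a common psychophysical approach is an adaptive staircase. The sketch below is a hypothetical two-down/one-up staircase run against a simulated observer (all names, parameters, and the observer model are illustrative, not taken from the study):

```python
import math
import random

def staircase_2down_1up(respond, start=3.0, step=0.25, n_trials=60):
    """Two-down/one-up adaptive staircase: the stimulus level (e.g. grating
    groove width) decreases after two consecutive correct responses and
    increases after any error, converging on ~70.7% correct performance."""
    level = start
    streak = 0
    reversals = []
    last_direction = None
    for _ in range(n_trials):
        if respond(level):
            streak += 1
            if streak == 2:          # two correct in a row -> harder
                streak = 0
                if last_direction == 'up':
                    reversals.append(level)
                level = max(step, level - step)
                last_direction = 'down'
        else:                        # any error -> easier
            streak = 0
            if last_direction == 'down':
                reversals.append(level)
            level += step
            last_direction = 'up'
    if not reversals:
        return level
    last = reversals[-6:]            # threshold = mean of last reversals
    return sum(last) / len(last)

random.seed(1)

def observer(level, true_threshold=1.5):
    """Hypothetical observer: P(correct) rises from chance (0.5, for a
    two-alternative task) toward 1 as the level exceeds threshold."""
    p = 0.5 + 0.5 / (1 + math.exp(-(level - true_threshold) * 3))
    return random.random() < p

threshold = staircase_2down_1up(observer)
print(round(threshold, 2))  # estimate near the simulated threshold of 1.5
```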
Laurence R. Harris, Charles Spence and Vanessa Harrar
Liesbet Goubert, Sophie Vandenbroucke and Vanessa Harrar
Vanessa Harrar, Jonathan Tammam, Alexis Pérez-Bellido, Rachel Hulatt, Anna Pitt, John Stein and Charles Spence
There is growing experimental support for the presence of specific deficits in the magnocellular visual pathway in dyslexia. The magnocellular system is thought to be involved in multisensory integration. As a result of impaired magnocellular function, specific deficits in multisensory integration may be observed in dyslexia. Multisensory integration differences were compared between dyslexics and matched controls, using simple reaction times. Four visual stimuli, preferentially activating the magnocellular or parvocellular visual systems, and white noise bursts were presented either alone or together, and participants were instructed to respond as quickly as possible. Reaction times (RTs) to multisensory stimuli were predicted from the unisensory stimuli using Miller’s race model (which assumes independence of the two channels). RTs to multisensory stimuli that exceeded the model (faster than predicted) provided evidence of multisensory integration. Dyslexics integrated less than matched controls. Differences between groups were more pronounced for magnocellular-preferred stimuli, and literacy scores were significantly correlated with the range of RTs demonstrating integration. These results provide further support for the magnocellular theory of dyslexia, and suggest a simple, non-literacy-based diagnostic or experimental tool.
Cesare V. Parise, Vanessa Harrar, Marc O. Ernst and Charles Spence
Humans are equipped with multiple sensory channels that provide both redundant and complementary information about the objects and events in the world around them. A primary challenge for the brain is therefore to solve the ‘correspondence problem’, that is, to bind those signals that likely originate from the same environmental source, while keeping separate those unisensory inputs that likely belong to different objects/events. Whether multiple signals have a common origin or not must, however, be inferred from the signals themselves through a causal inference process.
Recent studies have demonstrated that cross-correlation, that is, the similarity in temporal structure between unimodal signals, represents a powerful cue for solving the correspondence problem in humans. Here we provide further evidence for the role of the temporal correlation between auditory and visual signals in multisensory integration. Capitalizing on the well-known fact that sensitivity to crossmodal conflict is inversely related to the strength of coupling between the signals, we measured sensitivity to crossmodal spatial conflicts as a function of the cross-correlation between the temporal structures of the audiovisual signals. Observers’ performance was systematically modulated by the cross-correlation, with lower sensitivity to crossmodal conflict being measured for correlated as compared to uncorrelated audiovisual signals. These results therefore provide support for the claim that cross-correlation promotes multisensory integration. A Bayesian framework is proposed to interpret the present results, whereby stimulus correlation is represented on the prior distribution of expected crossmodal co-occurrence.
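As an illustration of the correlation measure involved, the sketch below computes the normalised cross-correlation (Pearson r at zero lag) between two toy temporal envelopes. The actual stimuli in the study were continuous audiovisual streams; the data and names here are purely hypothetical:

```python
def cross_correlation(x, y):
    """Normalised cross-correlation at zero lag (Pearson r) between two
    equally long temporal envelopes."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical audio-envelope and visual-intensity sequences:
audio      = [0, 1, 0, 1, 1, 0, 1, 0]
vis_corr   = [0, 1, 0, 1, 1, 0, 1, 0]   # same temporal structure
vis_uncorr = [1, 0, 1, 0, 0, 1, 0, 1]   # inverted temporal structure
print(cross_correlation(audio, vis_corr))    # 1.0
print(cross_correlation(audio, vis_uncorr))  # -1.0
```

On the account described above, signal pairs like the first (highly correlated) should be bound more strongly, producing the lower sensitivity to spatial conflict that was observed.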
Sophie Vandenbroucke, Liesbet Goubert and Vanessa Harrar
Introduction: There is evidence that viewing touch or pain can modulate the experience of tactile stimulation. The aim of this study was to investigate whether the observation of needle pricks to another person’s hand facilitates the detection of tactile stimuli applied to the hand. We hypothesized that this effect would differ between chronic pain patients and controls.
Method: Participants with fibromyalgia () and controls () were presented with a series of videos showing hands being pricked (e.g., by needles) or control videos (e.g., a sponge being pricked), whilst receiving subtle tactile stimuli themselves either in the same spatial location as the visual stimuli (congruent trials) or on the opposite hand (incongruent trials). Participants were asked to detect the presence and location of the tactile stimulus. Signal detection theory was used to test whether sensitivity differed between congruent and incongruent trials, and between the groups of participants.
Results: Perceptual sensitivity (d′) was significantly higher when videos contained a painful situation, compared to the control videos, and was higher when the videos were presented on the same side of the body as the tactile stimulus (congruency effect). The congruency effect was larger when a painful situation was shown compared to control videos. No difference in sensitivity was found between fibromyalgia patients and controls.
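The sensitivity index d′ reported above is computed from hit and false-alarm rates as d′ = z(H) − z(FA), where z is the inverse of the standard normal CDF. A minimal sketch with hypothetical rates (not the study’s data):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Perceptual sensitivity d' = z(hit rate) - z(false alarm rate),
    where z is the inverse standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates (illustration only; not the study's data):
print(round(d_prime(0.85, 0.20), 2))  # 1.88
print(d_prime(0.50, 0.50))            # 0.0 (no sensitivity)
```

In practice, hit and false-alarm rates of exactly 0 or 1 are first adjusted (e.g. with a log-linear correction), since z is undefined at those extremes.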
Discussion: This study suggests that the detection of somatic sensations can be facilitated by observing painful visual stimuli. This visual-somatosensory modulation was independent of the presence of chronic pain.