Vision and Haptics Share Spatial Attentional Resources and Visuotactile Integration Is Not Affected by High Attentional Load

in Multisensory Research

Human information processing is limited by attentional resources. Two questions discussed in multisensory research are (1) whether each sensory modality has its own spatial attentional resources and (2) whether multisensory integration is influenced by attentional load. We investigated these questions using a dual-task paradigm: participants performed two spatial tasks (a multiple object tracking [‘MOT’] task and a localization [‘LOC’] task) either separately (single-task condition) or simultaneously (dual-task condition). In the MOT task, participants visually tracked a small subset of several randomly moving objects. In the LOC task, participants received visual, tactile, or redundant visual and tactile location cues. In the dual-task condition, participants’ performance decreased substantially and their mental effort increased (indicated by an increase in pupil size) relative to the single-task condition. Importantly, participants performed equally well in the dual-task condition regardless of whether the LOC task provided visual, tactile, or redundant multisensory (visual and tactile) location cues. Spatial information arriving from two modalities thus did not facilitate performance, indicating that the tactile and visual modalities draw on shared spatial attentional resources. We also found that participants integrated redundant multisensory information optimally even under the additional attentional load of the dual-task condition. Overall, the findings suggest that (1) spatial attentional resources for the tactile and visual modalities overlap and that (2) the integration of spatial cues from these two modalities occurs at an early, pre-attentive processing stage.
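Here ‘optimal’ refers to the standard maximum-likelihood benchmark for cue combination: each cue is weighted by its reliability (inverse variance), and the combined estimate has lower variance than either unimodal estimate alone. The Python sketch below illustrates this benchmark only; the variable names and the example estimates and variances are hypothetical, not values from the study.

    # Maximum-likelihood (reliability-weighted) combination of a visual
    # and a tactile location estimate. Weights are proportional to the
    # inverse variance of each cue.
    def mle_combine(est_v, var_v, est_t, var_t):
        w_v = (1 / var_v) / (1 / var_v + 1 / var_t)  # visual weight
        w_t = 1 - w_v                                # tactile weight
        combined_est = w_v * est_v + w_t * est_t
        combined_var = (var_v * var_t) / (var_v + var_t)
        return combined_est, combined_var

    # Hypothetical unimodal estimates (in grid units): vision is the more
    # reliable cue here, so the combined estimate leans toward it.
    est, var = mle_combine(est_v=4.0, var_v=0.5, est_t=5.0, var_t=2.0)
    print(est, var)  # 4.2 0.4 -- lower variance than either cue alone

On this benchmark, integration counts as optimal when the observed bimodal variance matches the predicted combined variance; a failure to integrate under load would instead leave the bimodal variance no better than that of the best single cue.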



Figures

  • Vibrotactile belt. The 21 vibromotors (Precision Microdrives, 14 mm diameter, vibration frequency 170–185 Hz) were arranged in a grid of three rows and seven columns with an adjustable distance (minimum of 4 cm) between actuators. Nine of the vibromotors were used throughout the experiment.

  • (a) Localization (LOC) task overview. The top row depicts the VI condition (in which visual location cues were received), the middle row the TA condition (in which tactile location cues were received via the vibrotactile belt) and the bottom row the VITA condition (in which redundant visual and tactile location cues were received). Arrows indicate the current movement direction of the objects. (b) Mapping of numpad numbers on the keyboard (top left) to visual stimuli on the screen (top right) and tactile stimuli on the vibrotactile belt (bottom).

  • Multiple object tracking (MOT) task overview. Trial logic for the MOT task alone (top row) and for the MOT task performed while receiving visual location cues (MOT + VI, second row), tactile location cues (MOT + TA, third row) or redundant visual and tactile location cues (MOT + VITA, fourth row) in the localization (LOC) task. Arrows indicate the current movement direction of the objects. This figure is published in colour in the online version.

  • Results of the multiple object tracking (MOT) and localization (LOC) tasks: (a) Percentage correct in the MOT task for the single-task condition (MOT) and the dual-task conditions (MOT + VI, MOT + VITA and MOT + TA). (b) Percentage correct in the LOC task for each type of location cue (visual [VI], tactile [TA] and redundant visual and tactile [VITA]), separately for single- and dual-task conditions. (c) Interference between the MOT and LOC tasks, expressed as a percentage, as a function of the type of location cue in the LOC task. (d) Mean pupil size difference (in percent) between dual- and single-task conditions for each type of location cue. Error bars in all panels are SEM.

  • Results of the localization (LOC) task: (a) Error (in city-block distance) in the LOC task for each type of location cue (visual [VI], tactile [TA] and redundant visual and tactile [VITA]), separately for single- and dual-task conditions. (b) Reaction time (‘RT’) in the LOC task for each type of location cue (VI, TA, VITA), separately for single- and dual-task conditions. Error bars in all panels are SEM.
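For reference, the city-block (Manhattan) error in panel (a) of the last figure is the sum of the absolute row and column offsets between the cued and the reported grid position. A minimal sketch, assuming a (row, column) indexing of the 3 × 3 cue grid (the indexing is an illustrative assumption, not taken from the paper):

    # City-block (Manhattan) distance between two grid cells, each given
    # as a (row, column) pair.
    def city_block_error(cued, reported):
        return abs(cued[0] - reported[0]) + abs(cued[1] - reported[1])

    # Hypothetical example: cue at the top-left cell, response one row down
    # and one column to the right -> an error of 2.
    print(city_block_error((0, 0), (1, 1)))  # 2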
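Relatedly, the numpad-to-grid mapping shown in the second figure can be pictured as a simple key-to-cell lookup. The dictionary below is purely hypothetical (the study's exact key assignment is not specified here) and assumes the keys mirror the spatial layout of the grid, with 7-8-9 forming the top row:

    # Hypothetical numpad-to-cell mapping for the 3 x 3 cue grid, assuming
    # the physical numpad layout (7-8-9 on top); cells are (row, column).
    NUMPAD_TO_CELL = {
        7: (0, 0), 8: (0, 1), 9: (0, 2),
        4: (1, 0), 5: (1, 1), 6: (1, 2),
        1: (2, 0), 2: (2, 1), 3: (2, 2),
    }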
