Previous studies have found that semantics, the higher-level meaning of stimuli, can impact multisensory integration; however, less is known about the effect of valence, an affective response to stimuli. This study investigated the effects of both semantic congruency and valence of non-speech audiovisual stimuli on multisensory integration via response time (RT) and temporal-order judgement (TOJ) tasks [assessing processing speed (RT), the Point of Subjective Simultaneity (PSS), and the time window within which multisensory stimuli are likely to be perceived as simultaneous (temporal binding window; TBW)]. In an online study with 40 participants (mean age: 26.25 years; 17 female), we found significant main effects of both congruency and valence on RT (congruency and positive valence decreased RT) and a significant interaction (the congruent/positive condition was significantly faster than all others). For TOJ, there was a significant main effect of valence and a significant interaction, whereby positive valence (compared to negative valence) and the congruent/positive condition (compared to all other conditions) required the visual stimulus to be presented significantly earlier than the auditory stimulus for the pair to be perceived as simultaneous. A subsequent analysis showed a positive correlation between TBW width and RT (as the TBW widens, RT increases) for the conditions whose PSS was furthest from true simultaneity (congruent/positive and incongruent/negative). This study provides new evidence supporting previous research on semantic congruency and presents a novel incorporation of valence into behavioural responses.
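For readers unfamiliar with these psychophysical measures, the PSS and TBW are typically estimated by fitting a psychometric function to TOJ responses across stimulus-onset asynchronies (SOAs). Below is a minimal sketch of one common approach, assuming a cumulative-Gaussian fit and a 25–75% definition of the TBW; the SOA values and response proportions are fabricated for illustration, and the authors' exact analysis pipeline may differ.

```python
# Hypothetical sketch: estimating PSS and TBW from temporal-order judgement
# (TOJ) data by fitting a cumulative Gaussian psychometric function.
# The data below are made up for demonstration; they are not the study's data.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_gauss(soa, mu, sigma):
    """P('visual first') as a function of SOA (ms; positive = visual leads)."""
    return norm.cdf(soa, loc=mu, scale=sigma)

# Example data: stimulus-onset asynchronies (ms) and the proportion of
# "visual first" responses at each SOA (fabricated values).
soas = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300])
p_visual_first = np.array([0.05, 0.10, 0.25, 0.40, 0.55, 0.70, 0.85, 0.95, 0.98])

# Fit the psychometric function; mu is the 50% point, sigma governs the slope.
(mu, sigma), _ = curve_fit(cum_gauss, soas, p_visual_first, p0=[0.0, 100.0])

pss = mu  # SOA at the 50% point: Point of Subjective Simultaneity
# One common TBW definition: the SOA interval between the 25% and 75% points
# of the fitted function, i.e., 2 * z(0.75) * sigma.
tbw_width = 2 * norm.ppf(0.75) * sigma

print(f"PSS = {pss:.1f} ms, TBW width = {tbw_width:.1f} ms")
```

Under this convention, a positive PSS indicates that the visual stimulus must lead the auditory stimulus to be perceived as simultaneous, and a wider TBW indicates a more tolerant window for perceived simultaneity; definitions of the TBW vary across studies, so the threshold choice here is an assumption.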