The two-visual-systems hypothesis (what vs. how, or perception vs. action) proposed by Goodale and his colleagues has been widely debated, and researchers have provided a variety of evidence for and against it. For instance, a study by Aglioti et al. offered good evidence for the two-visual-systems theory using the Ebbinghaus illusion, but researchers using other visual illusions have failed to find consistent results. We therefore used perceptual conflict and interference tasks to test the hypothesis. If conflict or interference in perception influenced perceptual processing alone and did not affect action processing, we could infer that the two visual systems are separate, and vice versa. In the current study, we carried out two experiments that employed the Stroop, Garner and SNARC paradigms and used graspable 3-D Arabic numerals. We aimed to determine whether effects resulting from perceptual conflict or interference would affect participants’ grasping and pointing. The results showed that the interaction between Stroop and numeral order (ascending or descending, i.e., SNARC) was significant, and that the SNARC effect significantly affected action, but the main effects of Stroop and Garner interference were not significant. The results indicated that, to some degree, perceptual conflict affects action processing, and thus did not provide evidence for two separate visual systems.
Xia Shi, Xunbing Shen and Xiuying Qian
In 1976 Harry McGurk and I published a paper in Nature entitled ‘Hearing Lips and Seeing Voices’. The paper described a new audio–visual illusion we had discovered, which showed that the perception of auditorily presented speech could be influenced by the simultaneous presentation of incongruent visual speech. This hitherto unknown effect has since had a profound impact on audiovisual speech perception research. The phenomenon has come to be known as the ‘McGurk effect’, and the original paper has been cited in excess of 4800 times. In this paper I describe the background to the discovery of the effect, the rationale for the generation of the initial stimuli, the construction of the exemplars used, and the serendipitous nature of the finding. The paper also covers the reaction (and non-reaction) to the Nature publication and the growth of research on, and utilizing, the ‘McGurk effect’, and ends with some reflections on the significance of the finding.
Denis Burnham and Barbara Dodd
Cross-language McGurk effects are used to investigate the locus of auditory–visual speech integration. Experiment 1 uses the fact that [ŋ], as in ‘sing’, is phonotactically legal in word-final position in both English and Thai, but in word-initial position only in Thai. English- and Thai-language participants were tested for ‘n’ perception from auditory [m]/visual [ŋ] (A[m]V[ŋ]) in word-initial and word-final positions. Despite English speakers’ native-language bias to label word-initial [ŋ] as ‘n’, the incidence of ‘n’ percepts to A[m]V[ŋ] was equivalent for English and Thai speakers in final and initial positions. Experiment 2 used the facts that (i) [ð], as in ‘that’, is not present in Japanese, and (ii) English speakers respond more often with ‘tha’ than ‘da’ to A[ba]V[ga], but more often with ‘di’ than ‘thi’ to A[bi]V[gi]. English and three groups of Japanese-language participants (Beginner, Intermediate and Advanced English knowledge) were presented with A[ba]V[ga] and A[bi]V[gi] by an English (Experiment 2a) or a Japanese (Experiment 2b) speaker. Despite Japanese participants’ native-language bias to perceive ‘d’ more often than ‘th’, the four groups showed a similar phonetic-level effect of [a]/[i] vowel context × ‘th’ vs. ‘d’ responses to A[b]V[g] presentations. In Experiment 2b this phonetic-level interaction held but was more one-sided, as very few ‘th’ responses were evident, even in Australian English participants. Results are discussed in terms of a phonetic plus post-categorical model, in which incoming auditory and visual information is integrated at a phonetic level, after which there are post-categorical phonemic influences.
Rannie Xu and Russell M. Church
The capacity for timed behavior is ubiquitous across the animal kingdom, making time perception an ideal topic of comparative research across human and nonhuman subjects. One of the many consequences of normal aging is a systematic decline in timing ability, often accompanied by a host of behavioral and biochemical changes in the brain. In this review, we describe some of these behavioral and biochemical changes in human and nonhuman subjects. Given the involvement of timing in higher-order cognitive processing, age-related changes in timing ability can act as a marker for cognitive decline in older adults. Finally, we offer a comparison between human and nonhuman timing through the perspective of Alzheimer’s disease. Taken together, we suggest that understanding timing functions and dysfunctions can improve theoretical accounts of cognitive aging and time perception, and the use of nonhuman subjects constitutes an integral part of this process.
Teresa McCormack and Christoph Hoerl
A new model of the development of temporal concepts is described that assumes that there are substantial changes in how children think about time in the early years. It is argued that there is a shift from understanding time in an event-dependent way to an event-independent understanding of time. Early in development, very young children are unable to think about locations in time independently of the events that occur at those locations. It is only with development that children begin to have a proper grasp of the distinction between past, present, and future, and represent time as linear and unidirectional. The model assumes that although children aged two to three years may categorize events differently depending on whether they lie in the past or the future, they may not be able to understand that whether an event is in the future or in the past is something that changes as time passes and varies with temporal perspective. Around four to five years, children understand how causality operates in time, and can grasp the systematic relations that obtain between different locations in time, which provides the basis for acquiring the conventional clock and calendar system.
Maria Dolores de Hevia, Yu-Na Lee and Arlette Streri
Time is a multifaceted concept that is critical in our cognitive lives and can refer, among other things, to the period that elapses between the initial encounter with a stimulus and its later recognition, as well as to the specific duration of a certain event. In the first part of this paper, we will review studies that address the involvement of the temporal dimension in the processing of sensory information, in the form of a temporal delay that impacts the accuracy of information processing. We will review studies that investigate the time intervals required to encode, retain, and remember a stimulus across sensory modalities in preverbal infants. In the second part, we will review studies that examine preverbal infants’ ability to encode the duration of, and to distinguish between, events. In particular, we will discuss recent studies that show how the ability to recognize the timing of events in infants and newborns parallels, and is related to, their ability to compute other quantitative dimensions, such as number and space.
Frederic Fol Leymarie and Prashant Aparajeya
In this article we explore the practical use of medialness, informed by perception studies, as a representation and processing layer for describing a class of works of visual art. Our focus is on the description of 2D objects in visual art, such as those found in drawings, paintings, calligraphy and graffiti writing, where approximate boundaries or lines delimit regions associated with recognizable objects or their constitutive parts. We motivate this exploration, on the one hand, by considering how ideas emerging from the visual arts, cartoon animation and general drawing practice point towards the likely importance of medialness in guiding the interaction of the traditionally trained artist with the artifact. On the other hand, we also consider recent studies and results in cognitive science which point in similar directions, emphasizing the likely importance of medialness (an extension of the abstract mathematical representation known as the ‘medial axis’ or ‘Voronoi graph’) as a core feature used by humans in perceiving shapes in static or dynamic scenarios.
We illustrate the use of medialness in computations performed both on finished artworks and on artworks in the process of being created, modified, or evolved through iterations. Such computations may be used to guide an artificial arm in duplicating the human creative performance, or to study finished artworks in greater depth. Our implementations are exploratory prototypes of such applications of computing to art analysis and creation. Our method also provides a possible framework for comparing similar artworks, or for studying iterations in the process of producing a final preferred depiction, as selected by the artist.
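To fix the idea of medialness for readers unfamiliar with it (this is an illustrative sketch only, not the authors’ implementation), the basic quantity can be computed for a binary shape as the distance of each interior pixel to the nearest background pixel; the ridges of the resulting map trace the medial axis of the region delimited by the shape’s boundary:

```python
import numpy as np

def medialness_map(mask):
    """Brute-force medialness sketch: for every pixel inside the shape,
    the Euclidean distance to the nearest background pixel. Ridges
    (local maxima) of this map approximate the medial axis."""
    bg = np.argwhere(~mask)            # coordinates of background pixels
    out = np.zeros(mask.shape)
    for y, x in np.argwhere(mask):     # interior pixels only
        out[y, x] = np.sqrt(((bg - (y, x)) ** 2).sum(axis=1)).min()
    return out

# Toy example: a filled disk, whose most 'medial' point is its centre.
n = 41
yy, xx = np.mgrid[:n, :n]
disk = (yy - 20) ** 2 + (xx - 20) ** 2 <= 15 ** 2
m = medialness_map(disk)
```

In practice the brute-force loop would be replaced by a distance transform or a Voronoi-based construction; the sketch only captures the core notion of interior points scored by their distance to the delimiting boundary.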
Stephen Grossberg and Lauren Zajac
This article illustrates how the paintings of visual artists activate multiple brain processes that contribute to their conscious perception. Paintings by different artists may activate different combinations of brain processes to achieve the artists’ aesthetic goals. Neural models of how advanced brains see have characterized several of these processes. These models are used to explain how paintings by Jo Baer, Banksy, Ross Bleckner, Gene Davis, Charles Hawthorne, Henry Hensche, Henri Matisse, Claude Monet, Jules Olitski, and Frank Stella may achieve their aesthetic effects. These ten painters were chosen to illustrate processes that range from discounting the illuminant and lightness anchoring, to boundary and texture grouping and classification, through filling-in of surface brightness and color, to spatial attention, conscious seeing, and eye movement control. The models hereby clarify how humans consciously see paintings, and paintings illuminate how humans see.
Katie Greenfield, Danielle Ropar, Kristy Themelis, Natasha Ratcliffe and Roger Newport
The closer in time and space two or more stimuli are presented, the more likely they are to be integrated. A recent study by Hillock-Dunn and Wallace (2012) reported that the size of the visuo-auditory temporal binding window (the interval within which visual and auditory inputs are highly likely to be integrated) narrows over childhood. However, few studies have investigated how sensitivity to the temporal and spatial properties of multisensory integration underlying body representation develops in children. This is important not only for sensory processes, but has also been argued to underpin social processes such as empathy and imitation (Schütz-Bosbach et al., 2006). We tested 4- to 11-year-olds’ ability to detect a spatial discrepancy between visual and proprioceptive inputs (Experiment One) and a temporal discrepancy between visual and tactile inputs (Experiment Two) for hand representation. The likelihood that children integrated spatially separated visuo-proprioceptive information, and temporally asynchronous visuo-tactile information, decreased significantly with age. This suggests that the spatial and temporal rules governing the occurrence of multisensory integration underlying body representation are refined with age in typical development.
Andrew T. Smith, Mark W. Greenlee, Gregory C. DeAngelis and Dora E. Angelaki
Recent advances in understanding the neurobiological underpinnings of visual–vestibular interactions underlying self-motion perception are reviewed with an emphasis on comparisons between the macaque and human brains. In both species, several distinct cortical regions have been identified that are active during both visual and vestibular stimulation and in some of these there is clear evidence for sensory integration. Several possible cross-species homologies between cortical regions are identified. A key feature of cortical organization is that the same information is apparently represented in multiple, anatomically diverse cortical regions, suggesting that information about self-motion is used for different purposes in different brain regions.