
Multisensory Decisions: the Test of a Race Model, Its Logic, and Power

In: Multisensory Research
Authors:
Thomas U. Otto, School of Psychology and Neuroscience, University of St Andrews, St Andrews, UK

Pascal Mamassian, Laboratoire des Systèmes Perceptifs (CNRS UMR 8248), Ecole Normale Supérieure — PSL Research University, Paris, France

Open Access

The use of separate multisensory signals is often beneficial. A prominent example is the speed-up of responses to two redundant signals relative to the components, which is known as the redundant signals effect (RSE). A convenient explanation for the effect is statistical facilitation, which is inherent in the basic architecture of race models (Raab, 1962, Trans. N. Y. Acad. Sci. 24, 574–590). However, this class of models has been largely rejected in multisensory research, which we think results from an ambiguity in definitions and misinterpretations of the influential race model test (Miller, 1982, Cogn. Psychol. 14, 247–279). To resolve these issues, we here discuss four main items. First, we clarify definitions and ask how successful models of perceptual decision making can be extended from uni- to multisensory decisions. Second, we review the race model test and emphasize elements leading to confusion with its interpretation. Third, we introduce a new approach to study the RSE. As a major change of direction, our working hypothesis is that the basic race model architecture is correct even if the race model test seems to suggest otherwise. Based on this approach, we argue that understanding the variability of responses is the key to understanding the RSE. Finally, we highlight the critical role of model testability to advance research on multisensory decisions. Despite being largely rejected, it should be recognized that race models, as part of a broader class of parallel decision models, demonstrate, in fact, a convincing explanatory power in a range of experimental paradigms. To improve research consistency in the future, we conclude with a short checklist for RSE studies.

1. Introduction

Having different senses is highly beneficial for acting in the environment. Each sense increases the number of signals that can be perceived, which makes it less likely to crash into something unnoticed. Moreover, as different sensory signals complement one another, the environment can be better understood by considering signals in combination. To realize this combinatorial benefit in the control of actions, flexible cognitive functions are needed that analyze and interpret the countless potential combinations of signals according to task demands and/or subjective goals. What are these cognitive functions and how can they be studied?

Multisensory research has a long-standing tradition of investigating the mapping of sensory signals to motor outputs. A classic paradigm is the redundant signals paradigm, which asks participants to respond with the same motor act in three conditions, each presenting a different combination of signals (Hershenson, 1962; Kinchla, 1974; Miller, 1982; Todd, 1912). In two single signal conditions, for example, either an auditory or a visual signal is presented. In the third condition, which is called the redundant signals condition, both signals are presented. The two signals are ‘redundant’ in the sense that detection of either signal is sufficient to initiate a correct response. In fact, a systematic analysis of the task demands shows that the two signals are, by design of the paradigm, coupled by a logical disjunction (Fig. 1). This elementary response mapping makes the redundant signals paradigm a very interesting tool to investigate the cognitive functions that deal with the combinatorial aspects of multisensory signals.

Figure 1.

Analysis of the task demands in the redundant signals paradigm using a truth table. As indicated by the input variables x and y, two distinct signals are either present (1) or absent (0). The output variable indicates whether a motor response is required (1) or not (0). A response is required in the redundant signals condition, when both signals are present (top row), and in either of the single signal conditions, when only one signal is present (middle rows). No response is required when neither signal is present (bottom row). The resulting truth table is the truth table of a logical disjunction (OR).


Multisensory research has used the redundant signals paradigm extensively, also because it shows one of the most prominent behavioral benefits that can occur with signals from different senses. The typical finding is that reaction times (RTs) in the redundant signals condition are on average faster than RTs in the two single signal conditions, which is known as the redundant signals effect (RSE; Hershenson, 1962; Kinchla, 1974; Miller, 1982; Todd, 1912). Interestingly, the RSE has been studied not only with a wide range of multisensory signals (e.g., Giard and Peronnet, 1999; Hecht et al., 2008; Martuzzi et al., 2007; Molholm et al., 2002; Murray et al., 2005; Otto and Mamassian, 2012; Otto et al., 2013; Perez-Bellido et al., 2013; Wuerger et al., 2012) but also with two signals within one sensory modality (e.g., Feintuch and Cohen, 2002; Katzner et al., 2006; Koene and Zhaoping, 2007; Krummenacher et al., 2014; Mordkoff and Yantis, 1993; Poom, 2009; Schröter et al., 2009), with different subject and patient populations (e.g., Brandwein et al., 2013; Brang et al., 2012; Collignon et al., 2010; Corballis, 1998; Harrar et al., 2014), and with different response modalities or task instructions (e.g., Blurton et al., 2014; Hughes et al., 1994 — see Note 1). Given this fascinating diversity of studies, it is evident that there can be processing differences with redundantly defined signals (e.g., it makes a difference whether one or two sensory modalities are stimulated; Girard et al., 2013). Still, as the basic paradigm is always the same, a main objective for multisensory research should be to understand whether or not a common cognitive function contributes to the RSE, as suggested by the analysis of the task demands.

To address this objective, it is critical to appreciate that there is not only a fascinating diversity in studies on the RSE but also confusion regarding how to test and interpret the effect. This confusion starts with a broad range of important methodological details (e.g., handling of fast outliers, random stimulus presentation procedures, etc.; Gondan and Minakata, 2016) and extends to major conceptual issues (e.g., the definition of race models; Miller, 2016; Otto and Mamassian, 2010). To unravel the conceptual issues, we address four main aspects in the following. First, we ask how models of perceptual decision making, which account for responses to a single sensory signal, can be extended to conditions that include signals in combination. Second, we review the so-called race model test (Miller, 1982), which is widely used to test a specific model of the RSE, and emphasize elements leading to confusion with the interpretation of the test. Third, we introduce a new approach to study the RSE and argue that understanding the variability of RTs is the key to understanding the RSE. Finally, we highlight the critical role of model testability to advance research on multisensory decision making.

2. Multisensory Decision Making

The link between sensing and acting is the central topic in research on perceptual decision making (Bogacz, 2007; Gold and Shadlen, 2007; Heekeren et al., 2008). Perceptual decisions are tested by asking participants to perform a motor act in response to a sensory signal. The task may appear abstract but applies to many real-life situations in which a sensory signal is linked to an action (consider a car driver, who starts driving when a traffic light turns green). To account for the speed and the accuracy of such responses, research on perceptual decision making has converged on a fundamental computational framework that assumes three main processing steps (Bogacz, 2007). First, neurons tuned to relevant features of the signal provide, through their activity, sensory evidence for the signal. Critically, due to the variability in neuronal activity, this evidence is subject to noise. To enable accurate decisions, a second step reduces the impact of noise by accumulating evidence over time. Finally, a third step checks whether the accumulated evidence has reached a criterion. If so, a categorical decision is made (the traffic light is green) and a motor response is triggered (start driving). The framework has several strong features. It is optimal in that it has been demonstrated to be the fastest decision maker for a given level of accuracy (e.g., Bogacz et al., 2006; Wald and Wolfowitz, 1948). It provides a direct explanation for many behavioral findings including the shape of RT distributions (e.g., Reddi et al., 2003) and speed–accuracy trade-offs (e.g., Bogacz et al., 2010). It is biologically plausible since neurons have been found that increase their activity over time in a way similar to the proposed accumulation of evidence (e.g., Hanes and Schall, 1996; Hanks et al., 2015; Shadlen and Newsome, 2001). And it has been realized many times, ranging from psychological models of behavior (e.g., Carpenter and Williams, 1995; Ratcliff, 1978) to biophysically realistic attractor models (e.g., Deco and Rolls, 2006; Wang, 2002). In summary, the accumulation framework has proven to be very powerful in unisensory decision making.
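To make the three processing steps concrete, here is a minimal simulation sketch of a single decision unit (a noisy accumulator run to a criterion). The Gaussian noise, the non-decision time, and all parameter values are illustrative assumptions, not a specific published model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_unit(drift, noise=1.0, criterion=1.0, dt=0.002, t_max=2.0,
                  non_decision=0.15, n_trials=2000):
    """Decision times of one unit: accumulate noisy evidence to a criterion."""
    n_steps = int(t_max / dt)
    # steps 1 and 2: noisy evidence samples, accumulated over time
    noise_term = noise * np.sqrt(dt) * rng.standard_normal((n_trials, n_steps))
    evidence = np.cumsum(drift * dt + noise_term, axis=1)
    # step 3: find when the accumulated evidence first reaches the criterion
    crossed = evidence >= criterion
    first = np.argmax(crossed, axis=1)
    ok = crossed.any(axis=1)                 # keep trials that reach the criterion
    return non_decision + first[ok] * dt     # decision time plus motor delay

rts = simulate_unit(drift=2.5)
print(f"mean RT = {rts.mean()*1000:.0f} ms, sd = {rts.std()*1000:.0f} ms")
```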

Turning back to multisensory research, there is a key question to ask here: How can the perceptual decision making framework be extended to cover multisensory decisions as tested in the redundant signals paradigm? As a first hypothesis, evidence for two redundant signals may be accumulated by two parallel decision units (Fig. 2A). The two units are then coupled by a logic OR gate that triggers a response. This basic architecture predicts a speed-up of responses in the redundant compared to the single signal conditions because a response can be triggered by the faster of the two parallel decision units, which has been described as statistical facilitation (Raab, 1962). Given the race-like mechanism, models complying with this basic architecture are commonly called race models. As an alternative hypothesis, evidence for two redundant signals may be pooled first. The pooled evidence is then accumulated by a single decision unit that triggers a response (Fig. 2B). This basic architecture also predicts a speed-up of responses because two signals provide more sensory evidence than one signal. Consequently, the accumulation of evidence reaches the decision criterion faster in the redundant compared to the single signal conditions (Miller, 1982). We refer to this class of models with only one decision unit as pooling models. In summary, there are two competing cognitive architectures that can, in principle, account for the RSE (see Note 2).
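As a toy illustration of the two candidate architectures, the sketch below uses LATER-style units (decision time = criterion divided by a noisy rate of rise; Carpenter and Williams, 1995) and assumes, for simplicity, independent units for the race and summed rates for pooling; all parameter values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n, criterion = 100_000, 1.0
rate_x = rng.normal(4.0, 1.0, n)      # evidence growth rate elicited by signal x (1/s)
rate_y = rng.normal(3.6, 0.9, n)      # evidence growth rate elicited by signal y (1/s)
ok = (rate_x > 0) & (rate_y > 0)      # drop rare non-positive rates
rate_x, rate_y = rate_x[ok], rate_y[ok]

t_x = criterion / rate_x              # single-signal decision times
t_y = criterion / rate_y

# (A) Race: two parallel units coupled by a logic OR -> the faster unit wins
t_race = np.minimum(t_x, t_y)

# (B) Pooling: evidence summed before a single decision unit
t_pool = criterion / (rate_x + rate_y)

print(f"mean RT   x: {t_x.mean():.3f} s   y: {t_y.mean():.3f} s")
print(f"mean RT  xy (race): {t_race.mean():.3f} s   xy (pooling): {t_pool.mean():.3f} s")
# Both architectures predict faster responses to redundant signals (an RSE).
```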

Figure 2.

The key question: How can the perceptual decision making framework be extended to account for the redundant signals effect? (A) Logical coupling of parallel decisions. Signals x and y are first processed by two parallel decision units. Categorical decisions made by these units are then coupled by a logic gate (OR) that triggers a motor response as output. (B) Pooling of sensory evidence in one decision. Sensory evidence for signals x and y is first pooled (Σ). The pooled evidence is then accumulated by a single decision unit that triggers the output.


To advance understanding of multisensory decisions, it is a main objective to test and discriminate between the competing model classes. The discrimination is very important because analysis of the RSE on this algorithmic level can — following Marr’s (1982) levels of analysis — bridge the gap between the computational level (as provided by the analysis of the task demands, see Fig. 1) and the implementational level (as provided by the many striking interactions in multisensory processing on the neuronal level; e.g., Driver and Noesselt, 2008; Ghazanfar and Schroeder, 2006; Stein and Stanford, 2008). At this point, we would like to highlight that the basic race model architecture (Fig. 2A) is convenient as it perfectly matches the task demands in the redundant signals paradigm (Fig. 1). Moreover, as we will discuss in the following section, race models are convenient as they can directly predict the exact distribution of RTs in the redundant signals condition. Notwithstanding these powerful features, we have the impression that multisensory research often considers race models as ‘not interesting’. One reason may be that race models seem not to fit the rather vague term ‘multisensory integration’, which has been defined as “the neural process by which unisensory signals are combined to form a new product” (Stein et al., 2010, p. 1719). While pooling models seem to fit here very well by suggesting that multisensory signals are merged into a single perceptual decision, it may be difficult to see how race models ‘form a new product’. On the other hand, Stein et al. (2010, p. 1719) further define multisensory integration operationally “as a multisensory response (neural or behavioral) that is significantly different from the responses evoked by the modality-specific component stimuli”. Remarkably, race models are integration models according to this definition as they predict faster mean RTs to redundant signals relative to the components. Still, due to the semantic confusion around the term multisensory integration, we have the impression that RSE studies are often guided by the belief that race models need to be rejected in order to demonstrate ‘true multisensory integration’. These issues should be kept in mind when we review next how the discrimination between the two model classes has been approached in the past.

3. Testing Race Models

As discussed in the previous section, a strong feature of race models is that they directly allow for RSE predictions on the level of RT distributions. In a milestone contribution, Miller (1982) used this feature to develop a very influential hypothesis test, which has since been routinely used in multisensory research to check whether or not race models can explain the RSE. Unfortunately, however, for reasons that are difficult to trace back, the test has not always been used consistently. To avoid confusion in the future, it is therefore very important to be clear about how the test works before discussing the conclusions that can be drawn from it.

3.1. How Does Miller’s Test Work?

The test builds on the basic idea by Raab (1962), who hypothesized that the RSE can be explained by statistical facilitation, which is inherent in the basic architecture of race models (Fig. 2A). To formalize statistical facilitation, it is convenient to describe the time by which a decision unit reaches its criterion to trigger a response by a random variable. Let $T_x$ be such a random variable to describe the decision time at which signal x is detected by one unit. Likewise, let $T_y$ be a random variable to describe the time at which signal y is detected by the other unit. Assuming a race model, a response to redundant signals can be triggered by the faster unit to reach its criterion. The corresponding random variable $T_{xy}$ is given by

$$T_{xy} = \min(T_x, T_y), \tag{1}$$

where the equal sign denotes equality in distribution. If the probability density functions of $T_x$ and $T_y$ overlap, the framework predicts that responses to redundant signals are on average faster than responses to either of the single signals.
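A quick numeric illustration of statistical facilitation, assuming independent and normally distributed decision times with purely illustrative values:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
t_x = rng.normal(0.250, 0.040, n)     # decision times for signal x (seconds)
t_y = rng.normal(0.270, 0.040, n)     # decision times for signal y (seconds)
t_xy = np.minimum(t_x, t_y)           # equation (1): the faster unit triggers the response

print(f"mean T_x  = {t_x.mean()*1000:.0f} ms")
print(f"mean T_y  = {t_y.mean()*1000:.0f} ms")
print(f"mean T_xy = {t_xy.mean()*1000:.0f} ms  (faster than either component)")
```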

To check if statistical facilitation can explain the RSE, Miller (1982) derived a testable prediction using probability summation (for an illustration, see Fig. 3). Let $P_{xy}(T_x \le t)$ be the cumulative probability that a response to signal x has been triggered at time t on presentation of both signals (the latter is indicated by the subscript xy). Likewise, let $P_{xy}(T_y \le t)$ be the cumulative probability that a response to signal y has been triggered. Then, the cumulative probability $P_{xy}(T_{xy} \le t)$ that one or the other response has been triggered can be computed by the sum of the probabilities of the individual responses to have been triggered minus the joint probability $P_{xy}(T_x \le t \wedge T_y \le t)$ that both responses have been triggered

$$P_{xy}(T_{xy} \le t) = P_{xy}(T_x \le t) + P_{xy}(T_y \le t) - P_{xy}(T_x \le t \wedge T_y \le t). \tag{2}$$

The probability summation rule allows for an exact prediction of decision times in the redundant signals condition if the RSE is explained by a race model. Unfortunately, the right-hand side of this equation contains two unknowns, which need to be considered to derive a testable prediction.

Figure 3.

Probability summation illustrated by Venn diagrams. Two sets of events (denoted X and Y) are represented by circles. The area covered by both sets together is called the union of X and Y (denoted X ∪ Y). It is given by the sum of the areas covered by each individual set minus the area in which the sets overlap, which is called the intersection of X and Y (denoted X ∩ Y). The illustration corresponds to equation (2).


The first unknown concerns the joint probability $P_{xy}(T_x \le t \wedge T_y \le t)$ that both channels have been triggered, which is contingent on a potential dependence between $T_x$ and $T_y$. If the two random variables are statistically independent (i.e., if a response to one signal makes it neither more nor less likely to observe a response to the other signal), the joint probability is given by the product of the individual probabilities. Hence, under the assumption of statistical independence, equation (2) can be changed to

$$P_{xy}(T_{xy} \le t) = P_{xy}(T_x \le t) + P_{xy}(T_y \le t) - P_{xy}(T_x \le t) \times P_{xy}(T_y \le t). \tag{3}$$

However, as we are going to see below, the two random variables may be statistically dependent. In case of a positive correlation between the random variables, observing a fast response to one signal would make it more likely to also observe a fast response to the other signal. Conversely, in case of a negative correlation, observing a fast response to one signal would make it more likely to observe a slow response to the other signal. Critically, a potential correlation has a major effect on statistical facilitation. To solve the issue of the unknown joint probability, Miller (1982) used a simple but elegant trick. He noted that the joint probability, like all probabilities, cannot be negative. If the joint probability is removed from equation (2), the right-hand side of the equation can only become larger (or stay the same). Consequently, the unchanged left-hand side has to be equal to or smaller than the right-hand side, which is expressed by

$$P_{xy}(T_{xy} \le t) \le P_{xy}(T_x \le t) + P_{xy}(T_y \le t). \tag{4}$$

It is interesting to note that this inequality is an instance of Boole’s inequality, which states that the probability to observe at least one event of a countable set cannot be greater than the sum of the probabilities of the individual events. It is also interesting to note that the right-hand side corresponds to the prediction in case of a maximal negative correlation, for which the expected statistical facilitation is largest (Colonius, 1990). In summary, inequality (4) puts an upper limit on statistical facilitation which is independent of a potential correlation.
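The role of the correlation can be checked numerically. The sketch below assumes bivariate Gaussian decision times (an illustrative choice, not the general case) and shows that the left-hand side of inequality (4) approaches the bound only as the correlation approaches its most negative value.

```python
import numpy as np

rng = np.random.default_rng(3)
n, t = 200_000, 0.220                            # probe time t = 220 ms
mu = np.array([0.250, 0.270])                    # mean decision times (seconds)
sd = 0.040

# rho close to -1 approximates the maximal negative correlation
for rho in (-0.99, -0.5, 0.0, 0.5):
    cov = sd**2 * np.array([[1.0, rho], [rho, 1.0]])
    t_x, t_y = rng.multivariate_normal(mu, cov, size=n).T
    lhs = np.mean(np.minimum(t_x, t_y) <= t)     # P_xy(T_xy <= t)
    rhs = np.mean(t_x <= t) + np.mean(t_y <= t)  # the bound in inequality (4)
    print(f"rho = {rho:+.2f}:  P(T_xy <= t) = {lhs:.3f}  <=  bound = {rhs:.3f}")
```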

The second unknown concerns the cumulative probabilities to observe a response to one signal when both signals are presented in the redundant signals condition, which are given by $P_{xy}(T_x \le t)$ and $P_{xy}(T_y \le t)$. Unfortunately, these probabilities are not directly measured in the redundant signals paradigm. What is measured are the probabilities to observe responses in the single signal conditions, which can be expressed by $P_x(T_x \le t)$ and $P_y(T_y \le t)$. Although it is unknown whether or not these probabilities are the same, Miller (1982) implicitly assumed that these are identical

$$P_x(T_x \le t) = P_{xy}(T_x \le t), \qquad P_y(T_y \le t) = P_{xy}(T_y \le t). \tag{5}$$

This auxiliary context invariance assumption has been made explicit only after the advent of the test (Ashby and Townsend, 1986; Luce, 1986). The assumption is also known as context independence (Colonius, 1990) but to avoid confusion with statistical independence, we here use the same terminology as Townsend and Wenger (2004). Assuming context invariance, inequality (4) can be changed to

$$P_{xy}(T_{xy} \le t) \le P_x(T_x \le t) + P_y(T_y \le t). \tag{6}$$

This so-called race model inequality (or Miller’s bound) can then be used to test race models (Miller, 1982; Ulrich et al., 2007). If decision times are equated with RTs (which is another auxiliary assumption, see Gondan and Minakata, 2016; Luce, 1986), a handy feature of the race model inequality is that it relates RTs measured with redundant signals to RTs measured with single signals. For each time t, the left-hand side of inequality (6) is given by the cumulative distribution function (CDF) empirically determined in the redundant signals condition. The right-hand side is given by the sum of the empirical CDFs determined in the single signal conditions. Then, the two sides just need to be compared. To use Miller’s (1982, p. 253) words, “the important feature of [inequality (6)] is that it puts an upper limit on the facilitation produced by redundant signals, and no race model is consistent with a reversal of the inequality for any value t”. Hence, if the race model inequality is violated, the test seems to suggest that all race models can be rejected.
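In practice, the comparison can be sketched as follows with empirical CDFs. The function names and the fabricated RTs are placeholders for illustration; real analyses should use appropriate statistical procedures (e.g., Ulrich et al., 2007; Gondan and Minakata, 2016).

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative distribution function evaluated on a time grid."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t_grid, side="right") / len(rts)

def race_model_test(rt_x, rt_y, rt_xy, n_points=200):
    """Return the grid times at which Miller's bound (inequality (6)) is exceeded."""
    lo = min(map(np.min, (rt_x, rt_y, rt_xy)))
    hi = max(map(np.max, (rt_x, rt_y, rt_xy)))
    t_grid = np.linspace(lo, hi, n_points)
    lhs = ecdf(rt_xy, t_grid)                                        # redundant condition
    rhs = np.minimum(ecdf(rt_x, t_grid) + ecdf(rt_y, t_grid), 1.0)   # sum of single-signal CDFs
    return t_grid[lhs > rhs]

# Fabricated RTs (seconds) purely to demonstrate the mechanics of the test
rng = np.random.default_rng(4)
rt_x = rng.normal(0.30, 0.05, 500)
rt_y = rng.normal(0.32, 0.05, 500)
rt_xy = rng.normal(0.26, 0.04, 500)
violations = race_model_test(rt_x, rt_y, rt_xy)
print(f"bound exceeded at {violations.size} of 200 grid points")
```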

3.2. Interpreting Miller’s Test

Miller’s (1982) race model test is a beautiful example of hypothesis testing in scientific enquiry. Consequently, it is no surprise that the race model test is frequently used in multisensory research and has become a ‘standard testing tool’ (Colonius and Diederich, 2006, p. 148). Unfortunately, the redundant signals paradigm and the race model test have not always been applied consistently (Gondan and Minakata, 2016). One exemplary issue is that the independent race model prediction (equation (3)) is frequently mistaken for the race model inequality (inequality (6)). A far more critical issue is that the race model test is easily misunderstood as being able to provide an answer to the key question regarding the basic cognitive architecture (Fig. 2; Otto and Mamassian, 2010). In fact, it cannot. The race model test is mute about how the perceptual decision making framework is extended to multisensory decisions and provides no evidence as to whether the cognitive architecture involves one or two decision units.

To understand this critical issue, it is important to consider the role of auxiliary assumptions in hypothesis testing (Hempel, 1966). How can the race model test be used to build a valid argument? As a first scenario, the argument starts with the hypothesis (H) that the RSE can be explained by a basic cognitive architecture assuming two parallel decision units as part of a race model that produces statistical facilitation (equation (1)). As an implication (I) of this hypothesis, Miller (1982) derived the race model inequality (inequality (6)). The two are linked by a conditional statement: If the hypothesis is true, then so is the implication. Interestingly, empirical evidence denies the implication: There is strong evidence that the race model inequality is violated in the redundant signals paradigm. Consequently, the hypothesis cannot be true, which results in the statement that the basic cognitive architecture of race models can be ruled out. The scenario looks like a valid deductive argument, known as modus tollens (denying the consequent — see Note 3), in which two premises lead to a conclusion

If H, then I.
Not I.
Therefore, not H.

To be clear, this scenario’s argument is a fallacy. The reason is that the first premise, the conditional statement, is wrong: The argument does not acknowledge the role of the context invariance assumption, which states that processing of one signal is not changed whether or not the other signal is presented (equation (5)). As shown in the previous section, context invariance is an auxiliary assumption (A) that is critically needed to reach the implication. A second scenario can account for the role of the auxiliary assumption by changing the conditional statement: If the hypothesis and the auxiliary assumption are true, then so is the implication. Although the empirical evidence stays of course the same, the new conditional statement changes the argument profoundly. It is no longer valid to conclude that the hypothesis is not true. It can only be concluded that the hypothesis and the auxiliary assumption are not both true

If H and A, then I.
Not I.
Therefore, not (H and A).

It follows that a violation of the race model inequality does not rule out all race models. A valid alternative is that the basic architecture of race models (Fig. 2A) is still correct but the auxiliary assumption of context invariance is wrong. Unfortunately, context invariance is not mentioned by most studies that have used the test, which has led to confusion about how to interpret the test (Otto and Mamassian, 2010).

Another source of confusion regarding the test may result from the definition of concepts, and here the devil is in the detail. We define race models as models that build on the basic architecture of parallel decision units that are coupled by a logic OR gate (Fig. 2A). Raab’s (1962) model is a pure version of this model class as it assumes statistical independence and context invariance. However, there is disagreement regarding the definition of race models (Miller, 2016). Where exactly is this disagreement coming from and how can it be resolved? Originally, Miller (1982) framed the race model test as a test of separate activation, which he used synonymously with race models and which he distinguished from coactivation. Do these two concepts correspond to what we have specified as a key question for multisensory research (Fig. 2)? For example, is coactivation the same as what we have introduced as pooling of sensory evidence in one decision unit? When Miller (1982, p. 248) defined the term, he stated that “it is convenient to characterize [coactivation] models as allowing activation from different channels to combine in satisfying a single criterion for response initiation”. Particularly the final part may suggest that coactivation is the same as pooling, which may be the reason that pooling is often depicted as a characteristic feature of coactivation models (e.g., Mordkoff and Yantis, 1991, their Fig. 1; Townsend and Wenger, 2004, their Fig. 4). However, it is critical to understand that the definition is much broader. In fact, as the issue of context invariance was not considered in the original definition (Ashby and Townsend, 1986; Luce, 1986), it is very helpful that Miller (2016) clarified that the term coactivation should include any violation of the context invariance assumption (which could be anything, including, for example, attentional effects). The clarification in turn implies that a model, which uses a logic OR coupling of parallel decision units but that also allows for some sort of interaction other than a potential correlation, is not a race model according to Miller (2016). The clarified definition saves the claim that all race models can be rejected if the race model inequality is violated, but it unfortunately does not fit what we think is the key question (i.e., does the cognitive architecture involve a single or two parallel decision units? See Fig. 2). Moreover, the clarified definition seems arbitrary to us as it allows for one sort of interaction (a potential correlation; see the first unknown in equation (2)) but not for another (a violation of context invariance; see the second unknown in equation (2)). We therefore keep our definition of race models. If the aim is, however, to resolve the disagreement, it is probably best to say that Raab’s (1962) simple race model alone cannot explain the RSE as some sort of interaction must have taken place. Critically, this interaction is not necessarily a pooling of sensory evidence within a single decision unit (Fig. 2B) as an explanation of the RSE may still involve parallel decision units that are coupled by a logic OR gate (Fig. 2A).

4. Variability Is the Key

The review of the race model test made clear that research on the RSE has so far provided no evidence that allows rejecting all race models. On the contrary, we would like to recap that the basic architecture of race models has two very strong features. First, it is convenient as it perfectly matches the task demands in the redundant signals paradigm (Fig. 1). Second, it is convenient as it can provide a direct explanation for the RSE at the level of RT distributions (equation (2)). These aspects provide sufficient justification to approach the RSE with the working hypothesis that the basic race model architecture is correct (even if the race model test has been interpreted to suggest otherwise). What is more, this new approach to the RSE may provide a very interesting tool to study possible interactions in the processing of multisensory signals. Specifically, given that equation (2) can predict the RSE at the level of RT distributions, it seems feasible to reveal and quantify possible interactions by analysing the two unknowns of the equation. The approach therefore results in two questions: firstly ‘Are RTs statistically independent?’ and secondly ‘Is context invariance true?’. When describing the approach in the following, it will become clear that studying sources of variability is the key to understanding the RSE.

Before starting, it is helpful to consider the stochastic nature of RTs (Luce, 1986). As introduced with the race model test, it is convenient to summarize RTs by random variables, and hence in the form of probability distributions. Such distributions are determined experimentally by sampling RTs in long trial sequences. One aspect of this standard procedure is that RTs are subject to different sources of variability, including but not limited to attentional fluctuations (an example of inter-trial variability) and noise in neuronal processing (an example of intra-trial variability). In the end, several sources contribute to the overall RT variability, which is important to keep in mind for the new approach to the RSE.

Figure 4.

The first unknown: Are RTs statistically independent? It is good practice to test the RSE with the paradigm’s three conditions in a random trial sequence. For example, it follows that signal x is sometimes presented after a switch from signal y. On other trials, the presentation of signal x is repeated. As trial history contributes to RTs, statistical independence cannot be assumed.


To understand the RSE using the new approach, first, we need to know if it is realistic to assume that RTs are statistically independent. As a quick and easy answer, it is not. The issue was already stressed by Miller (1982, p. 252), who noted that “there is a consistent negative correlation [emphasis added] between detections of signals on different channels”. To make the issue more tangible, how can such a negative correlation be understood and how does it arise? At least one issue is that RTs are subject to history effects, which refer to the many findings showing that RTs can depend on what was tested on previous trials, for example, due to task/modality switches or priming effects (e.g., Monsell, 2003; Spence et al., 2001; Waszak et al., 2003). To test the RSE, it is good practice to present the paradigm’s three conditions randomly interleaved (Gondan and Minakata, 2016). This procedure implies that the recent trial history is constantly changing (Fig. 4). For example, on one trial, signal x may be presented following a signal y trial, which defines a switch. On another trial, signal x may be presented after a signal x trial, which defines a repetition. In accordance with research on history effects, it is typically found that repetition RTs are faster than switch RTs (e.g., Gondan et al., 2004; Miller, 1982; Otto and Mamassian, 2012). Notably, the effect varies depending on procedures and seems larger in bi- compared to unimodal RSE experiments (e.g., Miller, 1982, his Table 4). What implication do history effects have for the analysis of the RSE? If switch and repetition trials are jointly used to determine RT distributions, it is critical to understand that history effects, as an instance of inter-trial variability, contribute to the overall RT variability. Moreover, as history effects on signals x and y are opposite in sign (i.e., expecting x is the same thing as not expecting y), a negative correlation can arise between the random variables that describe the experimentally determined RTs in the single signal conditions (Otto and Mamassian, 2012). As potential correlations have a major impact on race model predictions (Colonius, 1990; Otto and Mamassian, 2012, their Fig. S1A), it is clear that history effects, which lead to a negative correlation, can strongly influence the size of the RSE. For example, the size of the RSE is reduced if this type of history effect is avoided by using a block design instead of a random trial order (Otto and Mamassian, 2012). However, as we are only starting to investigate these issues, a more detailed knowledge of history effects as a source of variability is needed to advance research on the RSE. For future RSE studies, it should therefore become standard to study potential correlations and to report at least the size of history effects for each data set. Likewise, any analysis or modelling approach that does not consider history effects and potential correlations should be considered at least incomplete.
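A hedged sketch of this mechanism: if a trial-wise expectation variable speeds responses to one signal while slowing responses to the other, a negative correlation between the notional single-signal decision times follows. All parameters below are assumptions chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(5)
n_trials = 20_000

# Trial-wise expectation: positive values mean "expecting x", negative values
# mean "expecting y" (expecting one signal implies not expecting the other).
# In a real experiment this would be driven by the recent trial history.
expectation = rng.normal(0.0, 1.0, n_trials)

gain = 0.020                                  # assumed 20 ms shift per unit of expectation
t_x = 0.250 - gain * expectation + rng.normal(0, 0.030, n_trials)
t_y = 0.270 + gain * expectation + rng.normal(0, 0.030, n_trials)

print(f"corr(T_x, T_y) = {np.corrcoef(t_x, t_y)[0, 1]:+.2f}")        # negative
print(f"sd(T_x) = {t_x.std()*1000:.0f} ms (residual noise alone: 30 ms)")
```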

To understand the RSE using the new approach, second, we need to know if it is realistic to assume context invariance. Unfortunately, it is difficult to test the assumption directly, which was already noted by Luce (1986, p. 131, “[context invariance] is not obviously true, and it is difficult to know how to verify it”). In any case, as we approach the RSE with the working hypothesis that the basic architecture of race models is correct, we assume that the context invariance assumption is wrong. Then, several scenarios are possible (Fig. 5). For example, within the decision process, it can be assumed that processing of signal x is changed by signal y as it may provide additional sensory evidence via excitation (Fig. 5, y1). Consequently, if signal y is present, evidence is accumulated faster, which will lead to a speed-up of decision times (at this point, we should note that pooling may be considered an extreme version of an excitatory interaction). As an alternative, it can be assumed that processing of signal x is changed by signal y as it may be a source of noise in the decision process (Fig. 5, y2). Consequently, if signal y is present, noise is increased during evidence accumulation, which will lead to an increased variability of decision times. It is critical to understand that these two scenarios are not an exhaustive set as there are many potential scenarios. The context invariance assumption may be violated not only within the decision process but in principle at any other level (Fig. 5, y3). In addition, the illustrated interactions are reciprocal in that signal x may also change processing of signal y. What this list shows is that there are many hypothetical scenarios for how context invariance may be violated, and pooling of sensory evidence within one decision unit is only one extreme among them. To advance research on the RSE, the question is how, and on what level, context invariance is violated.

Figure 5.

The second unknown: Is context invariance true? Context invariance assumes that processing of signal x is not changed whether or not signal y is presented as well. Violations of the assumption may occur at different stages. Within the decision making process, for example, signal y may change the processing of signal x by inhibition/excitation (situation y1) or in terms of noise (situation y2). Interactions may occur at any other stage including stages after a decision is made (situation y3).


Our new approach helps to answer this question. The basic idea is that equation (2) can be fitted to the CDF in the redundant signals condition and that differences between data and model can reveal potential interactions (Otto and Mamassian, 2012). To highlight the advantage of this approach, it is interesting to note that the typical reading of the race model test is that processing in the redundant signals condition is ‘faster’ (Miller, 2016, p. 518) or ‘more efficient’ (Townsend and Nozawa, 1997, p. 597) than predicted by pure statistical facilitation. However, the test is basically limited to the fast tail of the RT distribution (Diederich and Colonius, 2004). The test does not consider the slow tail, which typically does not violate the race model inequality. Given that the test is limited to only a subset of the responses, it is in fact very difficult to make inferences about how or in what direction context invariance is violated. In stark contrast, a major advantage of our approach is that it considers the entire RT distribution. Hence, a much more detailed view on potential violations of the context invariance assumption is possible.

To implement the approach, we built a simple race model following the ideas championed by Raab (1962). We used two parallel decision units of a type originally developed to study unisensory decisions (Carpenter and Williams, 1995; Noorani and Carpenter, 2016). These units can be fitted to the single signal conditions. Then, to account for the redundant signals condition, the two units are coupled by a logic OR gate in agreement with the task demands (Fig. 1). Consequently, a response to redundant signals is triggered by the faster of the two units (which is the basic idea of race models; see Fig. 2A). We extended Raab’s (1962) basic model by including a potential correlation as a free model parameter (the correlation parameter corresponds to the first unknown in equation (2) and is needed to compute the joint probability). In summary, the model is constrained by the single signal conditions and has only one additional free parameter. We found that the model fitted the RT distribution in the redundant signals condition best by assuming a strong negative correlation, which is in agreement with the finding of a strong history effect in the data. Notably, the most prominent difference between the model fit and the empirical data was that RTs in the redundant signals condition were more variable than the best model fit (Otto and Mamassian, 2012, their Fig. 3F). Hence, to understand potential violations of the context invariance assumption (the second unknown in equation (2)), the question is not what could lead to faster or more efficient processing but what could lead to more variable processing. To account for this difference, we added a second free model parameter that controls not the mean but the variability of decision times in the parallel decision units in the redundant compared to the single signal conditions. It is critical to understand that the additional parameter does not change the basic race model architecture but manifests a specific violation of the context invariance assumption (equation (5)). Interestingly, this context variant race model (see Note 4) readily predicts violations of the race model inequality very similar to those in empirical data (Otto and Mamassian, 2012, their Fig. S1B). Moreover, the model, with only two free parameters, fitted the entire RT distribution in the redundant signals condition reasonably well.
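A simplified sketch of such a context variant race model is given below. It is inspired by the description above but is not the published implementation; the parameterization (LATER-style units, a correlation parameter rho, and a variability scaling parameter eta for the redundant condition) and all values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def later_unit(mu, sigma, n, theta=1.0):
    """One LATER-style unit: decision time = criterion theta / rate of rise."""
    rates = rng.normal(mu, sigma, n)
    return theta / rates[rates > 0]              # drop rare non-positive rates

def correlated_units(mu, sigma, rho, n, theta=1.0):
    """Two LATER-style units with correlated rates of rise."""
    cov = np.array([[sigma[0]**2, rho * sigma[0] * sigma[1]],
                    [rho * sigma[0] * sigma[1], sigma[1]**2]])
    rates = rng.multivariate_normal(mu, cov, size=n)
    rates = rates[(rates > 0).all(axis=1)]
    return theta / rates

mu = np.array([4.0, 3.6])        # mean rates of rise (1/s), one per unit
sigma = np.array([1.0, 0.9])     # rate variability per unit
rho, eta = -0.6, 1.3             # free parameters: correlation, extra variability

rt_x = later_unit(mu[0], sigma[0], 20_000)       # single-signal condition x
rt_y = later_unit(mu[1], sigma[1], 20_000)       # single-signal condition y
# Redundant condition: correlated units, eta-scaled variability, OR (min) coupling
rt_xy = correlated_units(mu, eta * sigma, rho, 20_000).min(axis=1)

print(f"mean RT  x: {rt_x.mean():.3f} s   y: {rt_y.mean():.3f} s   xy: {rt_xy.mean():.3f} s")
print(f"sd RT   xy: {rt_xy.std():.3f} s  (eta > 1 inflates variability in the redundant condition)")
```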

The two model parameters not only allow us to fit the empirical data but are also interpretable. We have already discussed the link between the correlation parameter and history effects in random trial sequences (Fig. 4). A prediction here is that a strong negative correlation is needed in the model when experimental procedures lead to relatively strong history effects. Consequently, as history effects seem to be smaller in uni- compared to bimodal RSE experiments (e.g., Miller, 1982, his Table 4), the model should explain unimodal RSE experiments assuming a relatively weak negative correlation. The second parameter points to more variable processing in the redundant compared to the single signal conditions. Although there are probably several explanations for increased variability, we have speculated that one solution may be provided by the main factor that explains RT variability in decision making models, which is noise (Fig. 5, y2). This noise hypothesis could be linked to recent electrophysiological findings showing that the activity of some neurons in early sensory cortices is changed by signals in the non-preferred modality (Lemus et al., 2010). As the changed activity is unspecific regarding features of the signals, this change could point to a noise interaction. However, based on our behavioral data, we can only speculate about potential sources of increased variability and additional research is needed to reveal more details. Still, as a main message, each of the two model parameters points to a source of variability that needs to be understood, which we argue is the key to finally understanding the RSE.

5. Testability and Explanatory Power

Our new approach has demonstrated that the RSE can be explained by a race model even if the race model inequality is violated (Otto and Mamassian, 2010, 2012). The important new feature is that our race model allows for increased variability, which is a specific violation of the context invariance assumption (equation (5)). Miller (2016, p. 519) criticized context variant race models, including the approach discussed above, because the “class of context [variant] race models is sufficiently open-ended that it can never be falsified, just like the wider class of coactivation models”. What does this mean? This fundamental and important criticism should be alarming for any researcher arguing for coactivation (pooling). The issue is that if a model is not testable, at least in principle, it cannot be significantly proposed as a scientific hypothesis or theory as it lacks empirical import (Hempel, 1966, p. 30). Without going as far as Miller (2016) who argues that coactivation (pooling) models are not testable in principle, we agree that up to now there has been no significant attempt to derive testable predictions based on coactivation (pooling). For example, does coactivation (pooling) allow for predictions on the level of RT distributions? Consequently, we agree that coactivation (pooling) is at present only a very weak hypothesis to account for the RSE (not to mention the conceptual confusion around the terms in the past).

The question is whether or not Miller’s (2016) fundamental critique also applies to race models. We can certainly understand the motivation to criticize context variant race models like ours as we stress that such models cannot be falsified by the race model test (see Section 3, Testing Race Models; Otto and Mamassian, 2010). However, does the limitation of a specific test mean that race models are not testable at all? We do not think so. We have already shown that race models provide strict rules, which we even called principles of multisensory behavior, that predict the size of the RSE based on the RTs in the single signal conditions (Otto et al., 2013). First, the principle of congruent effectiveness states that the RSE is larger when median RTs in the two single signal conditions are more similar. Second, the variability rule states that the RSE is larger when the RTs in the single signal conditions are more variable (which is closely related to our claim that understanding sources of variability is the key to understanding the RSE). The two principles can of course be tested, and with them the race models that explain the RSE by statistical facilitation. For example, a very typical manipulation in RSE experiments is to change the relative onset of signals (e.g., Hershenson, 1962; Miller, 1986; Otto et al., 2013). This manipulation puts race models to the test because the principle of congruent effectiveness predicts that the RSE is largest when the two signals trigger responses at the same time. Whatever signals are tested, it follows that the RSE should be largest when the onset of the signal with faster RTs is delayed by the RT difference between the single signal conditions. As this prediction is confirmed in experiments, studies that changed the relative onset of signals have in fact provided evidential support for statistical facilitation and, hence, for race models (Otto et al., 2013). Another typical manipulation is to change the intensity of signals (e.g., Chandrasekaran et al., 2011; Otto et al., 2013; Senkowski et al., 2011). As changing the intensity of signals affects both the average and the spread of RTs, both principles are put to the test. The principle of congruent effectiveness is tested, as it predicts that the RSE is largest if signal intensities are calibrated to trigger responses at the same time. The variability rule is tested as it predicts that the RSE increases as the RT variability increases, which is the case for weaker signal intensities. As the predictions are confirmed in experiments, studies that changed the intensity of signals have in fact also provided evidential support for statistical facilitation and, hence, for race models (Otto et al., 2013). Consequently, Miller’s (2016) fundamental criticism does not apply to race models as these can be and have been tested. The only unfortunate issue is that the standing of race models has suffered in the past from the repeated misinterpretation of the race model test. To advance research on the RSE, we hope it will finally be recognized that there is in fact strong evidential support for race models across a broad range of signals, conditions, and participants.
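The congruent effectiveness prediction for onset manipulations can be illustrated with a simple race simulation. Gaussian decision times, independence, and all parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
t_x = rng.normal(0.250, 0.045, n)     # faster single-signal RTs (seconds)
t_y = rng.normal(0.290, 0.045, n)     # slower single-signal RTs (seconds)

print("SOA (ms)   predicted RSE (ms)")
for soa in (0.0, 0.020, 0.040, 0.060, 0.080):     # onset delay applied to signal x
    t_red = np.minimum(t_x + soa, t_y)            # race with the faster signal delayed
    faster_single = min(t_x.mean() + soa, t_y.mean())
    rse = (faster_single - t_red.mean()) * 1000
    print(f"{soa*1000:7.0f}   {rse:10.1f}")
# The predicted RSE peaks at SOA = 40 ms, the RT difference between the
# single-signal conditions, as stated by the principle above.
```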

We are very optimistic that race models can provide a universal account for the fascinating diversity of RSE studies, but there is even more: understanding race models can help to bridge the gap to other experimental tasks and paradigms. The analysis of the task demands shows that the redundant signals paradigm requires a logical disjunction (Fig. 1). Race models perfectly meet this requirement by assuming parallel decision units that are coupled by a logic OR gate (Fig. 2A). Now, what if the task demands change? Is it possible to adapt the framework to meet the task demands in other paradigms? For example, the psychological refractory period (PRP) paradigm also presents two signals but asks participants to respond to each signal with a different motor response (e.g., Pashler, 1994; Sigman and Dehaene, 2005; Welford, 1952). Given these task demands, a pooling of sensory evidence seems to make no sense here. In contrast, it has recently been argued that the PRP is also well understood by assuming that sensory evidence for the two signals is accumulated by parallel decision units (Zylberberg et al., 2012). The PRP and the redundant signals paradigm could therefore be analyzed jointly with a focus on the cognitive structures that route perceptual decisions to motor outputs. This approach can be further refined by not only presenting identical signals but also keeping the motor response identical. In this case, only the cognitive structures that route perceptual decisions to motor outputs would be put to the test. This approach can be implemented, for example, in a new experiment that changes the task demands of the redundant signals paradigm such that the single signal conditions no longer require responses. A systematic analysis of the task demands shows that the two signals are, by design of the paradigm, coupled by a logical conjunction (Fig. 6A). Given the new task demands, a pooling of sensory evidence seems to make no sense here either. In contrast, the basic cognitive architecture of race models can be effortlessly adapted to meet the new task demands. In the model, only the logic OR gate needs to be replaced by a logic AND gate (Fig. 6B). We tested the adapted model and found that it predicted the slowdown of RTs in the new conjunction task surprisingly well (Otto and Mamassian, 2012, their Fig. 4). Consequently, the basic ideas championed by Raab (1962) can actually be understood as part of a much larger framework, which assumes that parallel decision units are flexibly coupled by cognitive functions according to the task demands. This larger framework demonstrates convincing explanatory power within and beyond the redundant signals paradigm, which makes it potentially extremely useful for multisensory research.
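A minimal sketch of this flexible coupling, assuming Gaussian decision times and an independent race (illustrative values only):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 100_000
t_x = rng.normal(0.250, 0.040, n)     # decision times for signal x (seconds)
t_y = rng.normal(0.270, 0.040, n)     # decision times for signal y (seconds)

rt_or = np.minimum(t_x, t_y)          # redundant signals task: respond to x OR y
rt_and = np.maximum(t_x, t_y)         # conjunction task: respond only to x AND y

print(f"OR coupling  (redundant task):   mean RT = {rt_or.mean()*1000:.0f} ms")
print(f"AND coupling (conjunction task): mean RT = {rt_and.mean()*1000:.0f} ms")
# The AND-coupled race predicts a slowdown relative to both single-signal
# conditions, in line with the conjunction task data described above.
```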

Figure 6.

Explanatory power. (A) Analysis of the task demands in a new experiment. To test the hypothesis that parallel decisions are flexibly coupled by logic gates, a new task uses identical signals and the same motor response as in the redundant signals paradigm (Fig. 1). The only difference is that the new task does not ask for responses in the single signal conditions (middle rows). The resulting truth table is the truth table of a logical conjunction (AND). (B) Model prediction. To account for the new task demands, the basic race model architecture (Fig. 2A) can be easily adapted by replacing the logic OR by a logic AND gate. The adapted model predicts RTs in the new experiment surprisingly well (Otto and Mamassian, 2012).


6. Conclusion

The redundant signals paradigm is a classic paradigm in multisensory research and the RSE is one of the most prominent behavioral benefits with multisensory signals. Given the huge number and fascinating diversity of experimental studies, it should be a main objective for multisensory research to understand whether or not a common cognitive function contributes to the RSE. We here discussed our new approach to the RSE, which is in stark contrast to the mainstream of the last decades (see the many follow-up studies of Miller, 1982). As a major change of direction, our working hypothesis is that the basic race model architecture championed by Raab (1962) is correct even if the race model test has been interpreted to suggest otherwise. We argue that our new approach allows us to precisely predict the RSE and to reveal and quantify specific interactions in the processing of multisensory signals (Otto and Mamassian, 2012; Otto et al., 2013). What is more, the approach points directly to a set of basic cognitive functions, which we summarize as a flexible logic coupling of parallel decisions according to the task demands. In the end, we think that this set of functions is fundamental to the combinatorial benefit of multisensory signals.

Finally, we would like to reiterate that Miller’s (1982) race model test is a beautiful example of hypothesis testing in scientific enquiry. The test shows that Raab’s (1962) basic race model alone, even when a correlation parameter is added, cannot explain the RSE. Hence, we would say that ‘something interesting’ happens. In our opinion, RSE research should not stop at this point but try to find an explanation (which, as we have shown, can still involve the basic race model architecture). This endeavor calls, of course, for the continued use of the race model test. However, what became evident throughout is that RSE research has suffered in the past from methodological inconsistencies and conceptual confusion (Gondan and Minakata, 2016; Miller, 2016; Otto and Mamassian, 2010). We hope that the four main items, which we have discussed here, help to resolve at least some of the confusion. Based on the discussed material, we provide a checklist for RSE studies (Table 1). This minimum set of questions will hopefully help to further improve research consistency, which we think is critically needed to finally understand the RSE.

Table 1.
Checklist for RSE studies

Acknowledgements

The research leading to these results has received funding from the European Community’s Seventh Framework Program (FP7/2007-2013 under grant agreement number 214728-2) and from the Biotechnology and Biological Sciences Research Council (BB/N010108/1).

Notes

  1. The cited studies present only a small subset of research on the RSE as an extensive review is beyond the scope of this opinion paper (for a methods review on 181 recent studies, see Gondan and Minakata, 2016).
  2. We focus on two solutions that were considered most frequently in research on the RSE. Townsend and Nozawa (1997) discuss also an alternative serial processing architecture.
  3. If the race model inequality is not violated (confirming the consequent), nothing can be concluded. The hypothesis may be true or false.
  4. Gondan and Minakata (2016, p. 731) classify our model as a ‘coactivation model’, which we think can be misleading given the frequent confusion of coactivation and pooling (Fig. 2B). The term context variant race model is much more consistent as the basic model architecture fits our definition of race models (Fig. 2A). The add-on context variant indicates that the context invariance assumption is violated. The class of context variant race models includes for example also the ‘interactive race model’ proposed by Mordkoff and Yantis (1991).

References

  • Ashby F. G., Townsend J. T. (1986). Varieties of perceptual independence, Psychol. Rev. 93, 154–179.
  • Blurton S. P., Greenlee M. W., Gondan M. (2014). Multisensory processing of redundant information in go/no-go and choice responses, Atten. Percept. Psychophys. 76, 1212–1233.
  • Bogacz R. (2007). Optimal decision-making theories: linking neurobiology with behaviour, Trends Cogn. Sci. 11, 118–125.
  • Bogacz R., Brown E., Moehlis J., Holmes P., Cohen J. D. (2006). The physics of optimal decision making: a formal analysis of models of performance in two-alternative forced-choice tasks, Psychol. Rev. 113, 700–765.
  • Bogacz R., Wagenmakers E. J., Forstmann B. U., Nieuwenhuis S. (2010). The neural basis of the speed-accuracy tradeoff, Trends Neurosci. 33, 10–16.
  • Brandwein A. B., Foxe J. J., Butler J. S., Russo N. N., Altschuler T. S., Gomes H., Molholm S. (2013). The development of multisensory integration in high-functioning autism: high-density electrical mapping and psychophysical measures reveal impairments in the processing of audiovisual inputs, Cereb. Cortex 23, 1329–1341.
  • Brang D., Williams L. E., Ramachandran V. S. (2012). Grapheme-color synesthetes show enhanced crossmodal processing between auditory and visual modalities, Cortex 48, 630–637.
  • Carpenter R. H. S., Williams M. L. (1995). Neural computation of log likelihood in control of saccadic eye movements, Nature 377(6544), 59–62.
  • Chandrasekaran C., Lemus L., Trubanova A., Gondan M., Ghazanfar A. A. (2011). Monkeys and humans share a common computation for face/voice integration, PLoS Comput. Biol. 7, e1002165. DOI:10.1371/journal.pcbi.1002165.
  • Collignon O., Girard S., Gosselin F., Saint-Amour D., Lepore F., Lassonde M. (2010). Women process multisensory emotion expressions more efficiently than men, Neuropsychologia 48, 220–225.
  • Colonius H. (1990). Possibly dependent probability summation of reaction time, J. Math. Psychol. 34, 253–275.
  • Colonius H., Diederich A. (2006). The race model inequality: interpreting a geometric measure of the amount of violation, Psychol. Rev. 113, 148–154.
  • Corballis M. C. (1998). Interhemispheric neural summation in the absence of the corpus callosum, Brain 121, 1795–1807.
  • Deco G., Rolls E. T. (2006). Decision-making and Weber’s law: a neurophysiological model, Eur. J. Neurosci. 24, 901–916.
  • Diederich A., Colonius H. (2004). Modeling the time course of multisensory interaction in manual and saccadic responses, in: Handbook of Multisensory Processes, Calvert G., Spence C., Stein B. E. (Eds), pp. 395–408. MIT Press, Cambridge, MA, USA.
  • Driver J., Noesselt T. (2008). Multisensory interplay reveals crossmodal influences on ‘sensory-specific’ brain regions, neural responses, and judgments, Neuron 57, 11–23.
  • Feintuch U., Cohen A. (2002). Visual attention and coactivation of response decisions for features from different dimensions, Psychol. Sci. 13, 361–369.
  • Ghazanfar A. A., Schroeder C. E. (2006). Is neocortex essentially multisensory? Trends Cogn. Sci. 10, 278–285.
  • Giard M. H., Peronnet F. (1999). Auditory-visual integration during multimodal object recognition in humans: a behavioral and electrophysiological study, J. Cogn. Neurosci. 11, 473–490.