
Human Simulation as the Lingua Franca for Computational Social Sciences and Humanities: Potential and Pitfalls

In: Journal of Cognition and Culture
  • 1 The MITRE Corporation
  • 2 Boston University and the Center for Mind and Culture
  • 3 University of Agder and NORCE Center for Modeling Social Systems
  • 4 Old Dominion University

Abstract

The social sciences and humanities are fragmented into specialized areas, each with their own parlance and procedures. This hinders information sharing and the growth of a coherent body of knowledge. Modeling and simulation can be the scientific lingua franca, or shared technical language, that can unite, integrate, and relate relevant parts of these diverse disciplines.

Models are well established in the scientific community as mediators, contributors, and enablers of scientific knowledge. We propose a potentially revolutionary linkage between social sciences, humanities and computer simulation, forging what we call “human simulation.” We explore three facets of human simulation, namely: (1) the simulation of humans, (2) the design of simulations for human use, and (3) simulations that include humans as well as simulated agents among the actors. We describe the potential of human simulation using several illuminating examples. We also discuss computational, epistemological, and hermeneutical challenges constraining the use of human simulation.

1 Introduction1

In the scientific domain, a set of common principles and guidelines facilitates high quality research, effective dissemination of results, and concise representation of important concepts, terms, and activities. From the early stages of modern science, theoretically informed models played a pivotal role in helping scientists understand and explain observed phenomena, forecast future events, make sense of the world we inhabit, and communicate their ideas. Even in everyday life, lessons are shared in stories, parables, and fairy tales, communicating common truths and values. We naturally use a variety of models to capture and communicate our knowledge.

Currently, the social sciences are fragmented into subdomains, each with their own terms, methods, concepts, and procedures. The situation is even more challenging in the humanities. Across many scientific fields, simulation has already proven its capacity to foster the integration of components from diverse theories, to surface hidden assumptions, and to clarify the scope of the conflicts among them.

Rosen (1998) describes modeling as the essence of epistemological effort. Similarly, Tolk (2015) defines models as purposeful, task-driven abstractions and simplifications of a perception or our understanding of reality. To answer a research question, we select the most appropriate abstraction level, choosing only what we really need, and then we attempt to capture causal relationships in the way we describe changes in entities, behaviors, and relations. These modeling activities are conducted purposefully, and are based on our perception, capturing our current understanding of reality, which implies that even a well-constructed model can be misleading if the underlying perception is flawed or immature. On the other hand, less than perfect models can generate insights, make competitive ideas comparable, and uncover relationships in complex data. As Gelfert observes in his philosophical primer on models:

Whereas the heterogeneity of models in science and the diversity of their uses and functions are nowadays widely acknowledged, what has perhaps been overlooked is that not only do models come in various forms and shapes and may be used for all sorts of purposes, but they also give unity to this diversity by mediating not just between theory and data, but also between the different kinds of relations into which we enter with the world. Models, then, are not simply neutral tools that we use at will to represent aspects of the world; they both constrain and enable our knowledge and experience of the world around us: models are mediators, contributors, and enablers of scientific knowledge, all at the same time.

Gelfert 2016, p. 127

This suggests that models have the potential to contribute to many of the subdomains of social science and the humanities.

If models are specified in enough detail to allow their algorithmic execution in a computer program, the result is a computer simulation. Computer simulations require precise and complete specification of all parts of a model to be implemented. As such, the use of simulations demands a level of rigor that is rare within some sub-disciplines in the humanities and social sciences. In such contexts, vagueness and ambiguity can be meaningful theoretical strategies, but executing scientific insights and theories on a computer demands complete and unambiguous specification. This can require several simulation systems to express the various facets of ambiguity within competitive or complementary views. Simulation forces scientists to be precise and complete, resulting in a powerful executable representation of what we know about the subset of reality captured by the model within the supported discipline. This is true for all computational science applications. To paraphrase an old saying: A picture is worth 1,000 words, but an executable simulation is worth 1,000 pictures! This is especially true when immersive methods are used to bring the simulation to life in a virtual environment, allowing users to interact with artificial agents or other live human beings inside the simulation.

Within the engineering disciplines, simulation has been widely used as a way of managing complexity. In complex systems, the components are interconnected in multiple, often non-linear ways, leading to emergent behaviors that are no longer traceable without support from systems engineering tools. Furthermore, emergent behavior can be observed when “the whole is more than the sum of its parts,” when larger entities, patterns, and regularities arise through interactions among smaller or simpler entities that themselves do not exhibit such properties. While dealing with such emergent behaviors is rather new for the engineering disciplines, which are still predominantly defined by the quest for control over engineered systems, the social sciences have routinely studied emergent behaviors since their inception. Simulation can thus be a useful tool within the social sciences, helping to discover and analyze emergent behaviors more efficiently, and to communicate research more effectively.

Our focus lies on human simulation, understood in at least three senses. First, this phrase can refer to the use of computer modeling and simulation to represent and study human beings. This interpretation, which we might call the simulated humans approach, captures characteristics of human life (such as cognition or group behavior) within digital computers.

Human simulation can also refer to human-centered simulation, i.e., the development and use of computer simulations for consumption by human actors. Human beings have used experience-based stories for thousands of years. By extrapolating from their experiences, they simulate possible future developments, which they capture in such stories. In this interpretation, simulation developers produce systems that fit the way human beings have always understood and used simulation. Theories of cognition, narrative design, scripting, human computer interface design, and data representation are essential to this second understanding of human simulation.

Third, human simulation can refer to extended-reality environments in which humans and computer simulations interact. The military simulation community describes this as live-virtual-constructive: live (real humans with real systems) – virtual (real humans with simulated systems) – constructive (simulated humans with simulated systems). In this case, we are dealing with the human as (part of a) computer simulation (Hodson and Hill 2014). Theories of empathy, understanding, religiosity, group formation, and evolution are especially relevant here.

We acknowledge that there are other possible interpretations of human simulation and we embrace the ambiguity of the concept. In what follows, we employ the term with these three interpretations particularly in mind. The principles of human simulation are presented in more detail in Diallo et al. (2018).

What we want to argue here is that human simulation can provide a common language for the humanities and social sciences. This lingua franca would foster better documentation, communication, theory refinement, and exchange of research results. By enabling the unification of findings from the social sciences and humanities, it has the potential to launch truly transdisciplinary research ventures.

2 The Potential of Simulation

There are several competing views of – and approaches to – Modeling and Simulation (M&S). Diallo et al. (2014) take a purely mathematical approach and use Model Theory, virtually equating model and theory. They define computer simulation as the finite-state machine realization of a theory. Fishwick (2014) argues that M&S should be viewed as an empirical science. Tolk et al. (2015) make the case for distinguishing M&S science from M&S engineering: the former is the study of the process of natural and artificial abstraction, whereas the latter is concerned with the process of realizing abstractions. Of the two, M&S engineering is thriving, especially in the engineering disciplines. The military domain uses simulation systems for training, exercises, and education, but also for procurement, analysis, testing, and evaluation (Tolk 2012). The automobile industry successfully combines computer-aided design tools with powerful simulation-based testing tools, running thousands of virtual tests before the first prototypes are built (Hanselmann 1996). Soldiers fight side-by-side with worldwide distributed allies in realistic, immersive virtual environments, and cars participate in virtual races or drive through city traffic before the first piece of metal is cut.

M&S engineering has been extended to the social sciences in the form of Computational Social Sciences and it is currently being adapted to the humanities; see, e.g., the efforts of scholars involved in the Modeling Religion Project and the Modeling Religion in Norway project.2 M&S is so pervasive that it is safe to take it for granted (like breathing) – witness the fact that the discipline is claimed by virtually all scientific fields (including mathematical, physical, social, cognitive, and medical sciences). However, as the pace of technology accelerates dramatically and the boundary between humans and artificial systems blurs, the role of M&S science becomes even more salient. M&S engineering can no longer ignore human actors that create, use, and consume artificial systems. Further, as access to technology becomes more widespread and social media connects more and more people across social, cultural, and geopolitical boundaries, the interpretative aims of social scientists and humanities scholars confront a more complex society where individuals and networks have the power to effect change in ways never seen before. In the last decade, for example, we have seen an Arab Spring, fake news, the intensification of nationalism, and the advent of robotic personal companions. While seemingly unconnected, these are manifestations of the entanglement of humans, technologies, religions, and societies. The topic of human simulation is important for a variety of stakeholders interested in improving our understanding of our surroundings. It has the potential to unify M&S engineering, social sciences, and the humanities in all three senses of “human simulation:” the simulated human, human-centered simulation, and humans as (part of a) computer simulation.

First, the idea of the simulated human is particularly relevant for the humanities, especially when supported by the social sciences and simulation engineering. Social sciences provide the larger context, capturing how human beings interact in groups. Simulation engineering provides a process through which theories about human beings can be expressed formally. This formal expression starts with the specification of an informal model in natural language. An informal model can be a (simplification of a) theory. Once specified, informal models are validated against the theory and transformed into formatted models. Formatted models are specified in an implementation-independent language and capture the main actors, actions, and relationships involved in the theory. Finally, the formatted model is transformed into a formal model that unambiguously describes the formatted model. The formal model can be implemented in a digital computer for simulation purposes. Gains in formality brought by the simulation-development process yield added clarity and precision for the theory. However, there is a loss in interpretative richness that must be accounted for through the process of verification and validation. Verification means that all three models – informal, formatted, and formal – must be consistent with one another (the accuracy of the transformations). Validation means that each of the three models can be traced directly back to the theory (the accuracy of the representation). The simulated-human approach to human simulation provides us with intelligent artificial human agents that are consistent with, and are bounded by, the theories they express.
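
To make the progression from informal to formatted to formal model concrete, consider the following minimal sketch in Python. The toy "theory" (perceived hazards raise anxiety, and anxious agents participate in more rituals), the attribute names, and the numerical update rule are illustrative assumptions of our own, not any model discussed in this article.

```python
from dataclasses import dataclass

# Informal model (natural language): "Perceived hazards make agents anxious,
# and anxious agents participate in more rituals."  (a toy theory, for illustration only)

# Formatted model: an implementation-independent record of the actors and their attributes.
@dataclass
class Agent:
    anxiety: float       # 0.0 (calm) to 1.0 (highly anxious)
    religiosity: float   # 0.0 to 1.0 propensity to join rituals

# Formal model: an unambiguous, executable state-update rule.
def step(agent: Agent, hazard: bool) -> Agent:
    anxiety = min(1.0, agent.anxiety + 0.2) if hazard else max(0.0, agent.anxiety - 0.05)
    religiosity = min(1.0, agent.religiosity + 0.1 * anxiety)
    return Agent(anxiety, religiosity)

a = Agent(anxiety=0.1, religiosity=0.2)
for t in range(10):
    a = step(a, hazard=(t in (2, 3)))   # the agent encounters hazards at steps 2 and 3
print(a)
# Verification asks: do the dataclass and step() faithfully implement the formatted and informal models?
# Validation asks: can all three models be traced back to the underlying theory?
```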

Second, human-centered simulation is based on the realization that simulations are objects that are used by humans. Simulation engineers are the primary drivers in this approach, with support from the humanities and social sciences. The humanities disciplines provide theories that help simulation engineers understand how humans perceive and interact with objects and information. Currently, simulation engineers typically assume that most people are analytical and therefore assimilate information in the form of graphs and charts – a questionable assumption. As a result, computer simulations deliver data and information but rarely tell a story or provide a narrative. Humanities and social sciences can also be helpful in building a compelling narrative and captivating an audience. Interestingly, computer simulations do have a story to tell. The story is often buried in algorithms and formulae, but it does exist. Human-centered simulation focuses on enhancing our knowledge and understanding by providing compelling stories about human beings, society, and nature. By moving beyond data and information toward knowledge and ultimately wisdom, human-centered simulations become valuable tools for exploring and understanding the world and may even create empathy.

Third, we come to humans as (part of a) computer simulation: extended-reality contexts where real humans, artificial agents, real objects, and augmented objects interact in a virtual multi-sensory environment. Here our efforts rely on insights from the social sciences and the humanities with support from simulation. The idea of a society of computers implies that computing is a natural phenomenon. Computing technology is an extension of our existing computational ability into physical and virtual objects. One could argue that we are already living in a society of computers. The idea of humans as (part of a) computer simulation opens an avenue of research that could improve our understanding of the relationships between humans, societies, and technology.

The areas of human simulation are interrelated: developments in one area are usually applicable to the others. Together they have the potential to transform the way in which the social sciences, the humanities and computational modeling interact. Simulation furnishes a common language for expressing theories mathematically. Many theories about human life seem mutually contradictory or even incommensurable, but simulation forces hidden or unconscious assumptions into the open. As such, simulation can serve as a lingua franca, grounded in this common mathematical foundation, which can render fragmented or ambiguous ideas more transparent and susceptible to integration. Modeling methods and executable simulations can combine to express these now precisely formulated ideas in the form of entities, relations, and emergent phenomena that manifest the effects of underlying causal dynamics, thereby providing a common methodology and knowledge base for the future study of human beings.

3 Examples of Successful Ventures in Human Simulation

So far, we have argued that “human simulation” has the potential to serve as a lingua franca for communicating across and within the social sciences and the humanities, providing a common knowledge base that can facilitate interdisciplinary dialogue. Below we will analyze several challenges that such an approach faces. But how is human simulation supposed to work? If our goal is to encourage more humanities scholars and social scientists to engage these methodologies, we must make a convincing case that they are worth exploring (Wildman, Fishwick, and Shults 2017). In this section, we provide some examples of successful ventures in human simulation.

Because our focus in this context is on inviting scholars of human cognition and culture to take these methodologies seriously, we limit ourselves to examples that are especially relevant for these fields. The use of these techniques for social simulation in general has been expanding rapidly in recent years, maturing as a sub-field within computational modeling (Hauke, Lorscheid, and Meyer 2017). Insights from the cognitive and psychological sciences are increasingly being incorporated into the agent architectures of computational models (Squazzoni, Jager, and Edmonds 2014; Alvarez 2016).

In fact, in the last few years computational models have been developed that are specifically oriented toward the study of cognition and culture (Lane 2013), including models about the divergent modes of religiosity theory (Whitehouse et al. 2012), the relationship between group size and religious identification (Hoverd, Atkinson, and Sibley 2012), the transmission of religious violence in the Radical Reformation (Matthews et al. 2013), the emergence of priestly elites in large-scale cooperative societies (Dávid-Barrett and Carney 2015), and the role of cooperation style and contagious altruism in proselytizing religions (Roitto 2015). Besides the variety of application domains, these examples also show the diversity of models, ranging from mathematical models without any time dynamics to fully specified agent-based simulation systems.

Let’s look at a couple of examples in some detail. One of the most influential sets of theories in the cognitive and cultural study of religion has to do with costly signaling and credibility enhancing displays. The basic idea here is that religion played an important role in the evolution of cooperation in human societies by promoting costly signals of commitment to an in-group (Bulbulia 2004; Sosis 2006). Research on contemporary societies indicates that one of the strongest predictors of religiosity in a population is the level of credibility enhancing displays (CREDs) in the cultural context (Lanman and Buhrmester 2015; Willard and Cingl 2017). Much of this literature has engaged a replicator-dynamics model of the cultural evolution of costly displays and cooperation in religion developed by Joseph Henrich, which demonstrated a stable high-cost equilibrium for an entire population using a standard cultural evolutionary model. This formalization of the argument rendered more plausible the idea that large-scale cooperation and solidarity in human groups may have been “an emergent product of the interaction between an evolved cognitive adaptation for avoiding exploitation during social learning and larger-scale processes of cultural evolution” (Henrich 2009, p. 258).
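
For readers unfamiliar with this kind of formalization, the sketch below shows generic two-strategy replicator dynamics in Python: the share of a costly ("high-cost") cultural variant grows whenever its payoff exceeds the population average. The payoff structure (high-cost types benefit only from other high-cost types) and all numbers are illustrative assumptions of our own, not Henrich's (2009) actual parameterization.

```python
def replicator_step(x: float, payoff_high: float, payoff_low: float, dt: float = 0.01) -> float:
    """One discrete replicator-dynamics step for the share x of the high-cost variant."""
    avg = x * payoff_high + (1.0 - x) * payoff_low
    return max(0.0, min(1.0, x + dt * x * (payoff_high - avg)))

def payoffs(x: float, benefit: float = 3.0, cost: float = 1.0):
    """Illustrative payoffs: high-cost types gain from other high-cost types but pay a cost."""
    return benefit * x - cost, 0.0   # (high-cost payoff, low-cost payoff)

x = 0.6                              # initial share of the high-cost variant
for _ in range(5000):
    ph, pl = payoffs(x)
    x = replicator_step(x, ph, pl)
print(f"share of the high-cost variant after 5000 steps: {x:.2f}")
# Starting above the unstable threshold (cost/benefit = 1/3), the high-cost variant stabilizes;
# starting below it, the variant dies out: a simple analogue of a high-cost equilibrium result.
```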

Wildman and Sosis (2011) extended this formalization through an agent-based model that introduced cognitive and communicative variables as well as group identities. This approach is more realistic. Agents have variables such as skepticism, charisma, consistency, and sensitivity, and interact with one another over time as exemplars and learners. Their model also found stability results for high-cost groups within a wider population and furnished multivariate predictions of group population averages. Their simulation experiments found that several factors were relevant for stability: the presence of enemies, the difficulty of entering a high-cost group, the difficulty of leaving a high-cost group, and the charisma and consistency of leadership. Wildman and Sosis’ extension of Henrich’s model, which involved more realistic agents with group identities interacting in simulated space and time, identified a variety of pathways and strategies for achieving an equilibrium of cooperating agents within a population.

Our second example is the modeling of an established psychological theory that has been extensively used and explored within the cognitive and cultural study of religion: terror management theory (TMT). Research in TMT has shown that anxiety related to death awareness tends to ratchet up religiosity in the sense that it can increase the tendency to scan for invisible causes and the tendency to scramble to protect one’s in-group (McGregor, Hayes, and Prentice 2015; Norenzayan et al. 2008). In other words, when human cognitive systems encounter threats that produce mortality salience as “inputs,” they quite often have two sorts of “output”: increased belief in hidden intentional forces and decreased openness to out-group members. The activation of the human terror management system can thereby amplify belief in supernatural agents as well as behavioral dispositions toward participating in parochial ritual practices perceived as protective. These mechanisms help to mitigate psychological distress and to strengthen in-group cohesion, both of which provided a survival advantage in ancestral environments.

This research on the relationship between mortality salience and religiosity guided the construction of a computational model designed to simulate these dynamics (Shults et al. 2018). The variables and behavioral rules of the agents in this model were selected and designed to enable the simulation of the conditions under which – and the mechanisms by which – religiosity would increase within a population. Agent variables included the tendency to infer the causal relevance of hidden supernatural agents and the tendency to prefer the normative rituals of one’s religious in-group, both of which were susceptible to amplification when encountering hazards in the simulated environment. Agents were also distributed into groups. Depending on the intensity of threats and on an agent’s tolerance for them, agents could pass a threshold beyond which they would seek in-group members with whom to engage more intensely and more often in religious rituals.
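
A minimal sketch of the kind of agent rules just described is given below in Python. The attribute names, thresholds, and update magnitudes are illustrative assumptions of our own; the actual model reported in Shults et al. (2018) is considerably richer.

```python
import random

class Agent:
    def __init__(self, group: str):
        self.group = group
        self.anxiety = random.uniform(0.0, 0.3)
        self.tolerance = random.uniform(0.3, 0.7)          # threshold before seeking rituals
        self.supernatural_belief = random.uniform(0.1, 0.5)
        self.ritual_participation = 0.0

    def encounter(self, other: "Agent", hazard: bool) -> None:
        """Mortality-salient hazards and out-group encounters raise anxiety."""
        if hazard:
            self.anxiety = min(1.0, self.anxiety + 0.2)
        if other.group != self.group:
            self.anxiety = min(1.0, self.anxiety + 0.05)
        if self.anxiety > self.tolerance:
            # terror-management response: more belief in hidden agents, more in-group ritual
            self.supernatural_belief = min(1.0, self.supernatural_belief + 0.1)
            self.ritual_participation = min(1.0, self.ritual_participation + 0.1)
            self.anxiety = max(0.0, self.anxiety - 0.15)   # ritual participation soothes anxiety

def run(n_minority=20, n_majority=80, steps=5000, hazard_rate=0.05):
    agents = ([Agent("minority") for _ in range(n_minority)] +
              [Agent("majority") for _ in range(n_majority)])
    for _ in range(steps):
        a, b = random.sample(agents, 2)
        hazard = random.random() < hazard_rate
        a.encounter(b, hazard)
        b.encounter(a, hazard)
    means = {}
    for ag in agents:
        means.setdefault(ag.group, []).append(ag.ritual_participation)
    return {g: sum(v) / len(v) for g, v in means.items()}

print(run())   # minority agents meet out-group members more often, so their religiosity tends higher
```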

The simulation experiments on this model led to several interesting results. For example, agents in a “minority” group were more likely to increase their religiosity under stress than members of a “majority” group. The emergence of this macro-phenomenon in the model could be interpreted as a pattern similar to what is found in U.S. culture: black Americans are far more likely to be religious than white Americans (Taylor et al. 1996). The model suggests that this effect arises largely because “minority” agents are more likely to encounter “majority” agents (than vice versa); the anxiety of the former about the (potentially socially threatening) presence of the latter is therefore triggered more often, heightening their anxiety and their longing for religious rituals. Simulation experiments also revealed that large ritual clusters were more likely to form within populations whose agents were more homogeneous and had low tolerance levels. In other words, the model could “grow” a cultural phenomenon reminiscent of white, suburban mega-churches in an artificial society. This macro-level pattern was not programmed into the model but emerged from the micro-level behaviors and interactions of simulated heterogeneous agents.

These brief examples indicate that the potential of human simulation can already be realized. The other articles in this special issue provide additional illustrations of the potential value of this approach. An international team of researchers, of which the authors of this article are members, has produced several other computational models that fall under the category of what we are calling human simulation (e.g., Shults et al. 2017; Shults and Wildman 2018).

4 The Pitfalls of Human Simulation

Using models to capture and communicate knowledge and using simulation to bring this knowledge to life, preferably in an immersive way, has many benefits and advantages, as we have discussed so far. However, there are also pitfalls and dangers of which the user of human simulation should be aware. We address these in three categories: computational challenges, epistemological challenges, and hermeneutical challenges. This list of challenges is not exhaustive, but it provides a useful frame within which to sketch the various dangers to monitor.

4.1 Computational Challenges

When implementing models as computer simulations, the constraints of computability apply, as computer simulations are computer programs that are governed by the same assumptions and constraints as other computer programs. One of the most comprehensive overviews of computational challenges was compiled by Oberkampf et al. (2002), summarizing their experiences and insights from the development of computer simulation systems at Sandia National Laboratories. The following paragraphs describe some of the challenges of which scientific users of simulations should be aware.

Mathematically, a simulation system uses (1) a set of input parameters to call (2) a set of algorithms based on (3) computable functions to produce (4) a set of outputs. We can use highly sophisticated data collection methods, and we may even use “big data” or “data mining and farming” approaches, but the result of such analyses will be based on a discrete, finite set of input parameters. Computable functions take this finite set and map its elements to the discrete and finite set of output variables. Simulators may use immersive displays of a virtual reality, creating the impression of being “real,” but that doesn’t change the fact that the computable functions are applied to a discrete and finite set of input parameters. Furthermore, computable functions implement algorithms that describe every step of the computation. Alan Turing showed that no algorithm can exist for answering certain questions, no matter how powerful computers become, by proving that the so-called “halting problem” is undecidable. No algorithm can solve a question that reduces to the halting problem, such as deciding in general whether two programs compute the same function. And even when a problem can be solved by an algorithm, the computer may take too long or use too many resources; the question whether two graphs are isomorphic, for example, is decidable, but no efficient general algorithm is known. In these cases, we talk about computationally complex problems.
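
The point about finite inputs and computable functions can be made concrete with a few lines of Python. A simulation run is simply a function from a finite, discrete set of parameters (plus a random seed) to a finite set of outputs; questions such as whether two simulations compute the same function for all possible inputs cannot be decided in general, only probed for particular inputs. The toy contagion function below is our own stand-in, not any specific simulation system.

```python
import random

def simulate(params: dict, seed: int) -> dict:
    """A simulation run: a computable function from finite inputs to finite outputs."""
    rng = random.Random(seed)            # stochastic, yet fully determined by (params, seed)
    population = params["population"]
    infected = params["initially_infected"]
    for _ in range(params["steps"]):
        new_cases = sum(rng.random() < params["transmission_rate"] for _ in range(infected))
        infected = min(population, infected + new_cases)
    return {"final_infected": infected}

params = {"population": 1000, "initially_infected": 5, "steps": 30, "transmission_rate": 0.1}
print(simulate(params, seed=42))
print(simulate(params, seed=42))         # identical output: nothing beyond inputs and algorithm is created
```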

Despite these difficulties, computer scientists have generated many heuristics that help to solve problems with the help of computers. That a problem is not solvable in general does not mean that a feasible solution cannot be found for a concrete instance of it. However, as these heuristics are often specific to a sub-problem, they cannot easily be transferred, and if two simulations use different heuristics, their results can differ significantly, as shown in the work of Oberkampf et al. (2002).

Another challenge is the digital nature of computers, requiring numerical approximations of differential equations that are usually mapped to difference equations. Depending on the numerical methods applied, the results can differ significantly. These numerical effects are especially important with so-called chaotic functions. These nonlinear dynamical systems first stretch the limited input domain and then fold outputs back into the domain. The result is high sensitivity to initial conditions, in the sense that two arbitrarily close points can end up on dramatically different trajectories over time, making such systems unpredictable in the long run, even though they are wholly deterministic in their state changes. Because computers use discrete numbers, we will always be close to, but not exactly at, the intended initial condition of an experiment. If the underlying function implemented by the simulation is chaotic, we can use it for short-term trend analysis but not for precise forecasts of future values.
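
The logistic map is a standard illustration of this sensitivity (our choice of example, not one drawn from the works cited here): two initial conditions that differ by one part in ten billion produce trajectories that become completely unrelated after a few dozen iterations, even though every step is fully deterministic.

```python
def logistic(x: float, r: float = 4.0) -> float:
    """One step of the logistic map: stretch the unit interval, then fold it back onto itself."""
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10          # two initial conditions closer than any realistic measurement error
for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if step % 15 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.6f}")
# The gap grows from 1e-10 to order 1 within roughly 40 steps: short-term trends are usable,
# long-term point forecasts are not.
```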

4.2 Epistemological Challenges

How can we gain knowledge in the light of such computational constraints? Humphreys (2004) and Gelfert (2016) provide insights into this fundamental question, as do several articles presented at the various conferences on the topic of Epistemology of Simulation (EPOS). Vallverdú (2014) summarizes:

A simulation is a mathematical model that describes or creates computationally a system process. Simulations are our best cognitive representation of complex reality, that is, our deepest conception of what reality is.

Vallverdú 2014, p. 6

The epistemological challenge of simulation predominantly lies in the model that is used to build the simulation. Tolk (2015) states that this model is a result of the task-driven purposeful simplification and abstraction of a perception – or understanding – of reality. Whatever is in this model becomes part of the reality of the simulation. Everything outside this model cannot be part of a valid interpretation of the simulation, a challenge that we will discuss in the following section on hermeneutics. But for now, let’s have a closer look at the components of this definition.

  • Task-driven: A model is built to help answer a scientific question. A simulation can also extract information from users by providing stimuli to them. A serious game may educate people, or provide an immersive virtual environment in which individuals can react and be observed without any risks. In any case, the task drives the modeling process.

  • Purposeful: Modeling is a willful, creative act, focused and determined by the task to be accomplished.

  • Simplification: Just as experimental settings eliminate all unwanted influences, so simplification eliminates unimportant elements that distract from the main event. Only what is necessary for the task becomes part of the model.

  • Abstraction: Abstraction levels are used to hide details, focusing on concerns that do not require knowledge about the underlying details. Complex systems expose different characteristics on micro-, meso-, and macro-levels, so focusing on level-specific concerns and methods is important for the modeling process.

  • Reality: Useful models are rooted in empirical data, observations, or logical extensions of valid theories. This rooting can be simplified and abstracted, but reality grounds any useful model.

  • Perception: The perception/understanding of reality is shaped by physical-cognitive aspects as well as other constraints on human inquiry. The physical aspect defines what information about the object can be obtained. Cognitive aspects are shaped by the education and the knowledge of observers, including their worldviews, research paradigms, and even familiarity with tools associated with analyzing the subject matter.

Computer simulations are computable functions that map input data to output data. As discussed in detail by Chaitin (1977), all information that can be extracted from a computer program is either encoded into the algorithms or is part of the input data. Hidden patterns and information can be discovered by visualization and presentation in different modes, but the computational process, even when it has stochastic elements, is purely transformative, not creative. Every simulation-based discovery must therefore be mappable back to the model, or the discoveries are interpretations by the user that are not results warranted by a simulation-based epistemological process.

Another challenge has been called “simulationist’s regress” by Tolk (2017) and deals with the danger of bias in the modeling process. In philosophy, a regress is a series of statements in which a logical procedure is continually reapplied to its own result, sometimes without approaching a useful conclusion (philosophers call that a “vicious regress”). The term experimenter’s regress was coined and communicated to the wider scientific audience by Collins (1975) to show how bias can lead to experimental set-ups that are not objective but use measuring methods that already implicitly assume the correctness of the hypothesis. As simulations implement a model that represents knowledge in the form of an executable theory, Tolk (2017) observes the following:

The danger of the simulationist’s regress is that such predictions are made by the theory, and then the implementation of the theory in form of the simulation system is used to conduct a simulation experiment that is then used as supporting evidence. This, however, is exactly the regress we wanted to avoid: we test a hypothesis by implementing it as a simulation, and then use the simulated data in lieu of empirical data as supporting evidence justifying the propositions: we create a series of statements – the theory, the simulation, and the resulting simulated data – in which a logical procedure is continually reapplied to its own result….

In particular in cases where moral and epistemological considerations are deeply intertwined, it is human nature to cherry-pick the results and data that support the current world view (Shermer 2017). Simulationists are not immune to this, and as they can implement their beliefs into a complex simulation system that now can be used by others to gain quasi-empirical numerical insight into the behavior of the described complex system, their implemented world view can easily be confused with a surrogate for real world experiments.

Tolk 2017, p. 321

The validation of models is a critical step in overcoming this challenge, but if peers conducting the validation share the same belief system as the creator of the simulation, it may only be of limited use, as group bias may not be discovered but reinforced in this process. It is therefore essential to have a deep understanding of the simulation system before using it for simulation-based experiments. A rigorous approach to capture, document, and communicate research and results using simulations should help to address this challenge.

4.3 Hermeneutical Challenges

Two aspects of the epistemological puzzle – the abstraction process in creating a simulation and the interpretation process when making sense of simulation results – become especially prominent when the focus of M&S is on human persons and life situations. We call these hermeneutical challenges; attending to them can help us guard against generalization, protect against reading more into a simulation than is present within the model, and point us toward relevance in human-simulation activities.

Recall Tolk’s (2015) formulation: a simulation is a task-driven purposeful simplification and abstraction of a perception – or understanding – of reality. There is a lot of hermeneutical activity in that definition. Let’s consider a series of cases to surface the hermeneutical challenges of human simulations.

Suppose we are modeling vehicle movement at an intersection governed by a traffic light. We’d be using a discrete-event simulation for this task because that method is well suited to the queueing and timing features of the real-world situation. Our perception of reality is focused on vehicle arrival and departure frequencies, wait times, and traffic safety. This perceptual task could be complicated in various ways if we are unfamiliar with vehicles and traffic lights, but culturally conversant observers will tend to perceive the situation in much the same way. This perception guides the simplifications and abstractions we employ to create a simulation. We get started on this because someone pays us or orders us, or because we are curious. We can tell when we are finished because the task is determinate in scope. The results should ideally tell us how to specify the timing of traffic lights to optimize throughput at all times of day and night. The hermeneutical character of the model-building process is relatively straightforward in this case. Humans are involved but we abstract from their full complexity – their thoughts, emotions, intentions, behaviors, relationships, what they had for breakfast, what they’re listening to on the radio, etc. – and focus only on the behavior of the vehicles the humans are using. Likewise, when experiments with the simulation are completed and we have results, they will be meaningful specifically with reference to traffic-light timings, not to the full complexity of human beings. Here again, the hermeneutical complexity of interpreting results is relatively low.
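
A minimal discrete-event sketch of this intersection is shown below, using a plain Python event queue. The arrival rate, cycle lengths, and departure headway are illustrative assumptions rather than calibrated values; a production traffic model would be far more detailed.

```python
import heapq
import random

def simulate_intersection(green=30.0, red=30.0, arrival_rate=0.2,
                          headway=2.0, horizon=3600.0, seed=1):
    """Discrete-event model of one approach to a signalized intersection (toy example)."""
    rng = random.Random(seed)
    cycle = green + red
    events = [(rng.expovariate(arrival_rate), "arrival")]     # (time, kind) pairs
    t = 0.0
    while t < horizon:                                        # pre-schedule every green-phase start
        heapq.heappush(events, (t, "green"))
        t += cycle
    queue, waits, next_ok = [], [], 0.0                       # FIFO queue, wait times, earliest next departure

    def is_green(time: float) -> bool:
        return (time % cycle) < green

    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrival":
            queue.append(t)                                   # record the arrival time
            heapq.heappush(events, (t + rng.expovariate(arrival_rate), "arrival"))
        if queue and is_green(t) and t >= next_ok:            # one vehicle may depart per headway on green
            arrived = queue.pop(0)
            waits.append(t - arrived)
            next_ok = t + headway
            heapq.heappush(events, (next_ok, "depart"))
    return sum(waits) / len(waits) if waits else 0.0

print(f"mean wait per vehicle: {simulate_intersection():.1f} seconds")
```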

Simulation building is not always this simple, and it can be particularly complex when we are focusing on a human situation. Consider the same traffic-light intersection in relation to a driver whose pregnant wife is in advanced labor and urgently needs to get to the hospital on the other side of the intersection. The wife is trying not to scream in pain while ordering her partner to drive quickly and safely, but in a tone of voice that suggests fast might be more important than safe. The behavior of the driver upon approaching the intersection in question is quite unpredictable. Creating a simulation of this very human situation calls for perceiving reality differently, now taking in not only the realities of well-ordered traffic and light timings but also the very human emotions and personalities involved, which can lead to poor judgment as well as accurately calculated risk-taking behaviors. Simplifications and abstractions are still required to create a model, but now we can’t abstract from human emotions; instead we should employ theories of emotion, emotion regulation, emotion-laden decision processes, driving skill, and personality tendencies to reduce the infinite intricacy of this human situation to a finite number of variable settings and behavioral options. The hermeneutical challenges in this case are very significant because of the unpredictability of the driver’s behavior: will the simplifications and abstractions leave out critical elements needed to analyze the situation? If so, the simulation we build may not be meaningfully relevant to the real-world situation, even if it is excellent in other respects. And what about results? Translating the findings of optimization experiments is easy within the model but not necessarily relevant to the real people and the human situation they are navigating.

This contrast between two ways of perceiving the intersection and the vehicles and people moving through it establishes the point: hermeneutical complexities in the case of human simulation can be daunting. Yet we need not be paralyzed with uncertainty about how to proceed in this situation. While hermeneutical complexities alert us to the possibility of uselessly low levels of real-world relevance in a simulation, we can still make simplifications and abstractions to build simulations. Moreover, we simplify and abstract far better when guided by empirically well-supported theories from the social and psychological sciences. The humanities already address the critical essence of the target of a study through careful thought and interpretation. Modeling and simulation as described above must be guided by these principles and ensure that all alternatives, viewpoints, and facets are captured with the same rigor, which facilitates better comparability and communication. Thus, there is reason to try and see. Our simulations may risk irrelevance, but we can still be drawn into building a human simulation by the possibility of generating insights into human life that might be otherwise unattainable. We can also manage these problems using the “humans as (part of a) computer simulation” approach, whereby we study what live human beings do in a virtual representation of the real-world traffic-light dilemma.

What could induce us to take the risk of human simulation, perhaps against the advice of wary humanities specialists who are rightly worried about the loss of nuance suffered in any abstraction from the complexities of human life? Well, picture the situation where a new kind of evidence-based policy debate comes to life, with people learning how to project policy effects into the near-term future using computational M&S. Though such a tool could never replace the experience-based judgment of policy professionals, it may generate insights and focus attention on policy proposals that otherwise would never see the light of day. It’s complicated, yes, and there are risks. But contemporary human life confronts many dangerous situations that policies can influence, so it makes sense to try them out virtually as much as possible before committing valuable resources to one policy.

Feinstein and Cannon (2003) recommend that we take these hermeneutical constraints into account when evaluating the validity and correctness of simulation systems – in particular when worldviews and perceptions are purposefully introduced concepts, and not part of an unconscious bias, as we discussed earlier in this paper. Philosophies, perceptions, feelings, and other usually non-tangible concepts will be parts of the social sciences and humanities, and therefore parts of human simulation, and simulation can help make these concepts tangible in the form of executable mathematics describing complex systems.

5 Conclusion

The recent development of computational methods supporting scientific research has led to the situation in which simulation is now widely regarded as the third pillar of science, with epistemological status comparable to formal theorizing and experimentation. Simulation techniques employ a virtual complex system to make sense of a real-world complex system, generating insights that are difficult or impossible to achieve in any other way. Graphical interfaces and other intuitive user-support mechanisms facilitate the use of computational tools, including the development of simulations and the evaluation and presentation of their results. However, like all research methods, simulation comes with limitations. These limitations include computational, epistemological, and hermeneutical challenges. Computational science employs computer programs, which entails the challenges of decidability and computational complexity, both of which limit what can be achieved through simulation-based experiments. Epistemologically, the algorithms and data used in simulation systems, as well as the choices made about what part of the complex human system to study and what disciplines to employ in the process, may reflect the conscious and unconscious biases of researchers as much as established theory. Hermeneutically, some dimensions of human social and existential reality do not lend themselves to expression in computational systems, so it is vital to keep human simulation ventures in close contact with the social-scientific and humanities disciplines that inspire them.

We started with the assumption that human simulation has the potential to serve as the lingua franca for computational social sciences and humanities. Insofar as we focus on the computational part of the social sciences and humanities, this is clearly true. Simulations execute their underlying models, and these models should represent executable parts of theories from the social sciences and humanities disciplines. The mapping of a well-defined set of input parameters onto a set of observed outputs via fully specified computable functions is a fully unambiguous specification. However, is this sufficient or even appropriate to address all fields of the social sciences and humanities?

Simulation has significant overlaps with computational thinking, which has been praised as a valuable way to address scientific challenges. However, as we have noted, the approach has its challenges, particularly in application to the complexities of human life. In support of this finding, Denning (2017) provides a critical review of computational thinking, warning against overselling the idea by claiming that computational thinking will be good for everyone and everything. To take full advantage of the benefits, we need to be mindful that computational thinking includes the often hidden design assumptions of the model, not just the procedures to control it. This can be generalized into the necessity of understanding a model fully in order to control it. The reason for these observations is that the computational sciences offer powerful tools for evaluating scientific theories with a degree of detail that otherwise wouldn’t be possible, by scanning through vast regions of the solution space with sheer computing power. For the social sciences and humanities disciplines, these insights can be summarized in a way very similar to Vallverdú’s (2014) observation quoted earlier in this paper: simulation is the best method we currently have to describe complex systems in a mathematically exact way. And simulations can be brought to life in immersive environments, which improves our understanding of research and facilitates experience of the simulation by students and scholars.

Human beings and human behavior are not easily describable by differential equations and causal diagrams. There are likely many real-world observations that cannot be expressed with our current best approaches. The live-virtual-constructive paradigm with humans as part of our simulation may be the best we can do. This, however, should not be used as an excuse to avoid being as precise as a simulation system requires where we can be.

Simulation enables computational scientists to execute theories and conduct virtual experiments. This is one of the pinnacles of current technology. As such, human simulation is a strong candidate to become something like a scientific lingua franca. Simulation is imperfect and brings its own dangers. Many fields within the social sciences and humanities will require extra care when utilizing simulation, and some fields may even require a new form of mathematics and simulation altogether. Whether emerging quantum computing technologies can lead to such new possibilities is a topic of ongoing research; like all other ground-breaking ideas, any new methods would require some time to find their way through universities and corporations and into the toolboxes of computational social scientists. Despite all these hurdles, recent developments, such as those captured by Diallo et al. (2018), give reason to be optimistic.

References

  • Alvarez, R. Michael. 2016. Computational Social Science: Discovery and Prediction. New York, NY: Cambridge University Press.

  • Bulbulia, Joseph. 2004. “Religious Costs as Adaptations That Signal Altruistic Intention.” Evolution and Cognition 10 (1): 19–38.

  • Chaitin, Gregory J. 1977. “Algorithmic Information Theory.” IBM Journal of Research and Development 21 (4): 350–359.

  • Dávid-Barrett, Tamás, and James Carney. 2015. “The Deification of Historical Figures and the Emergence of Priesthoods as a Solution to a Network Coordination Problem.” Religion, Brain & Behavior, 1–11.

  • Denning, P. J. 2017. “Remaining Trouble Spots with Computational Thinking.” Communications of the ACM 60 (6): 33–39.

  • Diallo, Saikou Y., Jose J. Padilla, Ross Gore, Heber Herencia-Zapana, and Andreas Tolk. 2014. “Toward a Formalism of Modeling and Simulation Using Model Theory.” Complexity 19 (3): 56–63.

  • Diallo, Saikou Y., Wesley J. Wildman, F. LeRon Shults, and Andreas Tolk (Eds.). 2018. Human Simulation: Perspectives, Insights, and Applications. Springer, Series on New Approaches to the Scientific Study of Religion. Forthcoming.

  • Feinstein, Andrew Hale, and Hugh M. Cannon. 2003. “A Hermeneutical Approach to External Validation of Simulation Models.” Simulation & Gaming 34 (2): 186–197.

  • Fishwick, P. 2014. “Computing as Model-Based Empirical Science.” Proceedings of the Conference on Principles of Advanced Discrete Simulation, ACM, pp. 205–212.

  • Gelfert, Axel. 2016. How to Do Science with Models: A Philosophical Primer. Cham: Springer.

  • Hanselmann, Herbert. 1996. “Hardware-in-the-Loop Simulation Testing and Its Integration into a CACSD Toolset.” Proceedings of the IEEE Symposium on Computer-Aided Control System Design, IEEE, pp. 152–156.

  • Hauke, J., I. Lorscheid, and M. Meyer. 2017. “Recent Development of Social Simulation as Reflected in JASSS between 2008 and 2014: A Citation and Co-Citation Analysis.” JASSS 20 (1).

  • Henrich, Joseph. 2009. “The Evolution of Costly Displays, Cooperation and Religion: Credibility Enhancing Displays and Their Implications for Cultural Evolution.” Evolution and Human Behavior 30 (4): 244–260.

  • Hodson, Douglas D., and Raymond R. Hill. 2014. “The Art and Science of Live, Virtual, and Constructive Simulation for Test and Analysis.” Journal of Defense Modeling and Simulation 11 (2): 77–89.

  • Humphreys, Paul. 2004. Extending Ourselves: Computational Science, Empiricism, and Scientific Method. Oxford: Oxford University Press.

  • Lane, Justin E. 2013. “Method, Theory, and Multi-Agent Artificial Intelligence: Creating Computer Models of Complex Social Interaction.” Journal for the Cognitive Science of Religion 1 (2): 161.

  • Lanman, Jonathan A., and Michael D. Buhrmester. 2015. “Religious Actions Speak Louder than Words: Exposure to Credibility-Enhancing Displays Predicts Theism.” Religion, Brain & Behavior, 1–14.

  • Matthews, L. J., J. Edmonds, W. J. Wildman, and C. L. Nunn. 2013. “Cultural Inheritance or Cultural Diffusion of Religious Violence? A Quantitative Case Study of the Radical Reformation.” Religion, Brain & Behavior 3 (1): 3–15.

  • McGregor, I., J. Hayes, and M. Prentice. 2015. “Motivation for Aggressive Religious Radicalization: Goal Regulation Theory and a Personality × Threat × Affordance Hypothesis.” Frontiers in Psychology 6: 1325.

  • Norenzayan, Ara, Ian G. Hansen, and Jasmine Cady. 2008. “An Angry Volcano? Reminders of Death and Anthropomorphizing Nature.” Social Cognition 26 (2): 190–197.

  • Oberkampf, William L., Sharon M. DeLand, Brian M. Rutherford, Kathleen V. Diegert, and Kenneth F. Alvin. 2002. “Error and Uncertainty in Modeling and Simulation.” Reliability Engineering & System Safety 75 (3): 333–357.

  • Roitto, Rikard. 2015. “Dangerous but Contagious Altruism: Recruitment of Group Members and Reform of Cooperation Style through Altruism in Two Modified Versions of Hammond and Axelrod’s Simulation of Ethnocentric Cooperation.” Religion, Brain & Behavior 5 (3): forthcoming.

  • Rosen, R. 1998. Essays on Life Itself. New York, NY: Columbia University Press.

  • Shermer, M. 2017. “How to Convince Someone When Facts Fail: Why Worldview Threats Undermine Evidence.” Scientific American 316 (1).

  • Shults, F. LeRon, and Wesley J. Wildman. 2018. “Modeling Çatalhöyük: Simulating Religious Entanglement and Social Investment in the Neolithic.” In Religion, History and Place in the Origin of Settled Life, edited by Ian Hodder, pp. 33–63. University of Colorado Press.

  • Shults, F. LeRon, Justin E. Lane, Saikou Diallo, Christopher Lynch, Wesley J. Wildman, and Ross Gore. 2018. “Modeling Terror Management Theory: Computer Simulations of the Impact of Mortality Salience on Religiosity.” Religion, Brain & Behavior 8 (1): 77–100.

  • Shults, F. LeRon, Ross Gore, Wesley J. Wildman, Justin E. Lane, Chris Lynch, and Monica Toft. 2017. “Mutually Escalating Religious Violence: A Generative and Predictive Computational Model.” Social Simulation Conference Proceedings.

  • Sosis, Richard. 2006. “Religious Behaviors, Badges, and Bans: Signaling Theory and the Evolution of Religion.” In Where God and Science Meet: How Brain and Evolutionary Studies Alter Our Understanding of Religion, vol. 1, pp. 61–86.

  • Squazzoni, Flaminio, Wander Jager, and Bruce Edmonds. 2014. “Social Simulation in the Social Sciences.” Social Science Computer Review 32 (3): 279–294.

  • Taylor, Robert Joseph, Linda M. Chatters, Rukmalie Jayakody, and Jeffrey S. Levin. 1996. “Black and White Differences in Religious Participation: A Multisample Comparison.” Journal for the Scientific Study of Religion 35 (4): 403–410.

  • Tolk, Andreas. 2012. Engineering Principles of Combat Modeling and Distributed Simulation. Hoboken, NJ: John Wiley & Sons.

  • Tolk, Andreas. 2015. “Learning Something Right from Models That Are Wrong: Epistemology of Simulation.” In Concepts and Methodologies for Modeling and Simulation, edited by L. Yilmaz, pp. 87–106. Springer.

  • Tolk, Andreas. 2017. “Bias ex silico: Observations on Simulationist’s Regress.” Proceedings of the Spring Simulation Multi-Conference, pp. 314–322.

  • Tolk, Andreas, Osman Balci, C. Donald Combs, Richard Fujimoto, Charles M. Macal, Barry L. Nelson, and Phil Zimmerman. 2015. “Do We Need a National Research Agenda for Modeling and Simulation?” Proceedings of the Winter Simulation Conference, IEEE, pp. 2571–2585.

  • Vallverdú, Jordi. 2014. “What Are Simulations? An Epistemological Approach.” Procedia Technology 13: 6–15.

  • Whitehouse, Harvey, Ken Kahn, Michael E. Hochberg, and Joanna J. Bryson. 2012. “The Role for Simulations in Theory Construction for the Social Sciences: Case Studies Concerning Divergent Modes of Religiosity.” Religion, Brain & Behavior 2 (3): 182–201.

  • Wildman, Wesley J., and Richard Sosis. 2011. “Stability of Groups with Costly Beliefs and Practices.” JASSS 14 (3).

  • Wildman, Wesley J., Paul A. Fishwick, and F. LeRon Shults. 2017. “Teaching at the Intersection of Simulation and the Humanities.” Proceedings of the Winter Simulation Conference, IEEE, pp. 4162–4174.

  • Willard, Aiyana K., and Lubomír Cingl. 2017. “Testing Theories of Secularization and Religious Belief in the Czech Republic and Slovakia.” Evolution and Human Behavior 38 (5): 604–615.

*

Andreas Tolk’s affiliation with The MITRE Corporation is provided for identification purposes only.

1

This work has been publicly released for unlimited distribution, Case Number 17-3081-10. The ORCID iDs of the authors are as follows: Tolk 0000-0002-4201-8757, Shults 0000-0002-0588-6977, Wildman 0000-0002-7571-1259, Diallo 0000-0003-2389-2809. Tolk’s affiliation is not intended to convey or imply MITRE’s concurrence with, or support for, the positions, opinions, or viewpoints expressed by the author.
