1 Unwrapping the Gift
In the autobiographical fiction The Eternal Son, Cristovão Tezza movingly describes the growing bond between a father and his mentally handicapped son. Initially, the father is in complete denial of his son’s condition. The reader is gently led to witness the transformation from a precariously distant and self-centered relationship to a caring and engaged one. Throughout the burdensome journey, the father, a writer, observes himself and his son, as well as the experts he consults and what happens around the two of them. He remains intensely and often painfully curious. The story is also about the handicapped son growing up, with the father gradually realizing and accepting the cognitive and emotional limitations that will never be overcome. The book ends with a seemingly trivial scene that nevertheless shows, in a profound sense, that uncertainty, too, can be a gift to be unwrapped.
The son, now 14 years of age, has mastered typing some words on the computer and downloading files. At first elated, the father soon has to realize that distinguishing words is not the same as grasping the concepts behind them. The son also begins to share the passion that obsesses most boys of his age: football. This opens up a new world. A real football game is almost always unpredictable. Avidly, he follows the results of the football matches of his favourite club and dons a T-shirt and scarf sporting its colours when visitors arrive. In the last scene of the book, the boy engages the father in a lively conversation about today’s game and who is going to win: “they headed for the TV – the game was beginning once again. Neither of them had the slightest idea of how it would end, and that was a good thing” (Tezza 2013, 218).
Here it is, the joy of anticipation. Of not knowing yet, but of soon finding out. It is not so much the surprise that is gratifying. There is no real surprise, as the match has been scheduled well in advance. Everything there is to know about the players and the opposing team, about their past performance and their present physical condition, has avidly been absorbed by the son, as it is by other fans. It is the outcome that is still uncertain. The rules of the sport dictate that the outcome is to be accepted. Fairness is the meta-rule. The fans of the losing team will be saddened, but they know that this is only one event
What is it in our adult lives that makes these moments so rare when we remember them as abundant in childhood? The titillation of hardly being able to wait; the intense anticipation, which outweighs the joy over what is anticipated. Are these the costs incurred by experiencing repetition? Of having reached some kind of saturation in enjoying the frivolous tension that uncertainty can induce? But what would life be without them? Remember the stories of the savant imbecile, those poor people whose brain circuitry has gone awry. They are not able to forget anything but are also incapable of anticipation. Neuroscientists have identified the region in the brain enabling anticipation, the one that lights up during a functional magnetic resonance imaging (fMRI) scan. Or is it due to repetition that the rewards of anticipating joy and the pleasure of unwrapping the gift diminish over time? Or simply that it is no longer well-meaning parents who wrap the gifts, but life with its share of unwanted gifts?
One of the many paradoxes revealed in our convoluted dealings with uncertainty concerns repetition. In the numerous efforts that have gone into making encryption safe – by the security business, researchers and hackers alike – it is recognized that the security of encrypted communication depends on the unforeseeable randomness in the process of transmission. Randomness can be shared between the famous couple Alice and Bob, alias sender and receiver, only when a sequence of random bits, known as a cryptographic key, is used. Such key bits must be truly random, never reused and always securely delivered to Alice and Bob. Each key bit can be used only once, and then only to encrypt one single message bit (Ekert and Renner 2014).
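The one-time-pad principle that Ekert and Renner describe can be sketched in a few lines of Python (a toy illustration, not production cryptography): XOR-ing each message byte with a fresh, truly random key byte yields a ciphertext that reveals nothing about the message, provided the key is as long as the message and is never reused.

```python
import secrets

def one_time_pad(data: bytes, key: bytes) -> bytes:
    """XOR data with a key of equal length; the same operation
    encrypts and decrypts, so Alice and Bob share one function."""
    if len(key) != len(data):
        raise ValueError("key must be exactly as long as the message")
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at dawn"
key = secrets.token_bytes(len(message))   # fresh key bits, used only once
ciphertext = one_time_pad(message, key)
assert one_time_pad(ciphertext, key) == message  # Bob recovers the plaintext
```

Reusing the key for a second message breaks the scheme: XOR-ing two ciphertexts produced with the same key cancels the key out and leaks the XOR of the two plaintexts, which is exactly why each key bit may serve only once.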
Reality is far from what theory demands. Most generators of random numbers deliver at best what is known as “medium-quality randomness.” They are the weak link in the security chain. In most cases of daily use, these generators deliver weak randomness, which sooner or later becomes predictable. The challenge of coming up with strong randomness, defined as randomness that is impossible to predict, is formidable. Alice and Bob, and everyone working for them, must find a way to generate and distribute fresh key bits continuously. Hopes are pinned on quantum mechanics. For some considerable time, this has been work in progress, but no definitive solution has yet been found (Delahaye 1994).
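The gap between weak and strong randomness can be made concrete with Python’s standard library (a minimal sketch): a deterministic generator started from a known seed is perfectly reproducible, so an eavesdropper who learns the seed can predict every “random” key bit, whereas the `secrets` module draws on the operating system’s entropy pool precisely to frustrate such prediction.

```python
import random
import secrets

# Weak randomness: the whole sequence is a pure function of the seed.
alice_rng = random.Random(1234)
eve_rng = random.Random(1234)            # Eve has learned the seed

alice_key = [alice_rng.randrange(2) for _ in range(16)]
eve_guess = [eve_rng.randrange(2) for _ in range(16)]
assert alice_key == eve_guess            # Eve predicts every key bit

# Stronger randomness: key bytes drawn from the OS entropy source,
# with no seed for Eve to steal or reproduce.
fresh_key = secrets.token_bytes(16)
```

The numbers 1234 and 16 here are arbitrary illustrative choices; the point is only that seeded pseudo-randomness is repetition in disguise.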
In the meantime, the impasse constituted by the requirement of non-repetition is circumvented by programs that start with a random seed, which is transformed into a number with the help of a mathematical function so complex that the reverse operation is infeasible. These one-way functions are the same as those which serve to encrypt. The random seed can have
When the handicapped son, the eternal child, whose humanness is revealed all the more by recognizing the limitations imposed by his handicaps, feverishly looks forward to knowing the outcome, he embraces uncertainty. Repetition is an essential ingredient, as the child knows, of taking part in a series of such events. The child is unaware of randomness. What matters to him is that he will soon know.
Yet randomness has been with him even before he was born. It enwrapped the father and mother of this child. Like every newborn baby, their child was a gift to be unwrapped. Only in this particular case, something had gone wrong. After a long and tortuous journey lasting almost fourteen years, the genetic outcome, their child, has finally been accepted by the father as he is. Why me, why him? The eternal son’s younger sister, born two years later, is a perfectly normal child. Nature does not answer. Randomness does not assume responsibility.
The usual questions resurge in hindsight. Could the parents have known? What would they have done if they had known? What would you have done? With prenatal genomic testing rapidly expanding, these are not just hypothetical questions raised by bio-ethicists. There are those who are eager to make fully available what science and biotechnology have to offer. They are keen to peek at the unwrapped gift. They argue for empowering enlightened citizens to take their genetic future into their own hands. In this view, to be able to know your genome and to some extent that of your offspring is a civic privilege, if not a duty. It offers the choice between knowing and not knowing. Of being able to act preventatively.
This follows in the tradition of the Enlightenment. After all, the libido scientiae, the passion for knowledge and the craving to know, emerged in Europe after centuries when wanting to know was considered sinful. It was interpreted as human hubris, setting oneself up in competition with an all-knowing and almighty God. In the early nineteenth century in some parts of the United States, taking out life insurance was still considered an immoral act, as knowledge about the moment of one’s death was thought to be the exclusive right of the Almighty and not the object of a commercial transaction (Zelizer 1979). With the potential offered by genomic foresight and diagnosis,
Science, in the best tradition of its own heroic narrative, continues to do what it has done so successfully until now. It trespasses across boundaries, even if they exist only in the minds of those who believe that Nature tells us what we ought to do and who we ought to be. Undeterred, science continues its advances into the unknown, thriving on the cusp of the inherent uncertainty of the research process. And there are those who advocate a right not to know. Among these are some members of families with Huntington’s disease, this rare monogenetic condition for which no cure has yet been found (Konrad 2005). There are those who prefer temporary relief before the gift unwraps itself. Even Jim Watson, one of the discoverers of the DNA double helix and the third person whose entire genome was sequenced, asked not to be told in case the results showed a high probability of him developing Alzheimer’s disease.
Evolution, by inventing sexual reproduction, has made sure that randomness is introduced into genetic inheritance. Each offspring has a father and a mother and inherits different genes from each of its non-identical parents. The lottery starts anew with every successful implantation of a fertilized egg in the womb. Nature makes sure that the genetic key bits in their specific combination are used only once. It has no difficulty in producing them in abundance, thus assuring the genetic uniqueness of each of us. Even if the individual turns out to be handicapped. Or with a high probability of developing Alzheimer’s or any other dreaded condition.
Individual uniqueness holds also for twins. They are two individuals coming into the world in the same birth event. They need not resemble each other. In many languages, the word used for twins denotes exactly that: born from the same mother at the same time. Another adjective, “monozygotic” or “dizygotic,” is needed to clarify whether these individuals share not only the mother that gave birth to them, but also the same genome. Today, we also know that dizygotic twins and about one-third of all monozygotic twins do not share the placenta and the chorion in the womb. They develop in different hormonal and endocrinological uterine environments which influence their later lives. Randomness continues to play its part also in epigenetic processes, in which inheritance does not depend on DNA (Nowotny and Testa 2010, 12).
The variation that underlies the genetic dimension of evolution has two sources: mutation, which creates new variation in genes, and sexual reproduction, through which pre-existing gene variations are reshuffled to produce new combinations. The diversity achieved through sexual reproduction generates offspring who differ from their parents. Moreover, it makes siblings different from each other and it allows for crossover, a recombination of genes that can
Randomness may thus be softened by what happens to the organism, which in turn can affect the processes that generate changes in genes. We are only at the very beginning of understanding some of these processes, such as trans-generational epigenetic effects. It has been known for some time that the lifetime health of a child may be affected by experience in the womb, but trans-generational epigenetic inheritance indicates a much wider range of influences. It includes the father and the grandparents’ generation. Events before and at conception shape our development and life-course trajectory, opening up a new range of questions about how to deal with these influences and to mitigate potentially adverse outcomes.
Encounters with randomness in daily life take many forms. Gambling is a highly profitable business. Bookmakers take bets on the basis of the uncertainty of outcomes, ranging from the trivial to the bizarre. Appealing to randomness as the last arbiter can be attractive. It is not only the enlightened decision maker who resorts to random trials. Accepting the verdict attributed to randomness can lift the burden of being held personally responsible, whether real or imagined. It can alleviate feelings of guilt. In short, it can be experienced as liberating.
Which gets us back to some old questions about free will, determinism and responsibility, including the question of how to make better and more responsible use of human freedom. In the basso continuo of philosophical discussion about free will and dualism, about Descartes’s error (Damasio 1996) and the significance of Libet’s experiment, determinism repeatedly attempts a comeback, piggybacking on the latest scientific findings. This time, it comes in genetic or neuro-physiological garb. In his famous experiment, Libet and his collaborators showed that the human brain, through the readiness potential it possesses, is ready to execute a voluntary action half a second before the subject becomes conscious of the intention. More precisely, the readiness potential precedes the movement of the arm by 550 milliseconds (Libet et al. 1983). Despite the many questions and speculations that the experiment has raised, the big puzzle remains consciousness. Who is the “I” who decides? Where is it inside me? Does it execute what my brain has already decided and, if so, how and where? The frontal cortex? Or is consciousness simply what emerges when I focus attention on something, as Stanislas Dehaene maintains (Dehaene 2014)?
And yet there is something in us that protects our freedom of will, at least well enough and most of the time. It is the complexity of our brain. We are far from understanding how it actually works. This is one of the driving forces behind the recent mega-research projects that have been launched as the US
2 Thriving on the Cusp of Uncertainty
In his remarkable memoir No Time to Lose: A Life in Pursuit of Deadly Viruses, Peter Piot, an eminent epidemiologist and former undersecretary general of the United Nations, recalls the excitement he and his colleagues felt in 1976 when samples of blood arrived in their laboratory in Antwerp. They had been sent from Kinshasa, in what was then Zaire (today the Democratic Republic of the Congo). They came from a Belgian nun who had died in an unusual epidemic raging in the region, one that had killed at least 200 people. With an incredible disregard for any safety measures – under conditions that make the author and the reader wince – they opened a cheap plastic thermos flask that had been sent by mail. The lab was certified to diagnose infections of all kinds, and the hypothesis for this epidemic was reported to be yellow fever with haemorrhagic manifestations. Some of the test tubes were intact, but there were pieces of a broken tube, its lethal content mixed with ice water. Thus begins a fascinating tale of the isolation and identification of a new deadly virus: Ebola. In August 2014, almost forty years later and for the third time in its 66-year history, the World Health Organization (WHO) declared Ebola a global public health emergency.
Back in 1976, the excitement of the team of young researchers about their unexpected mission to isolate the unknown virus soon received an equally unexpected blow. The Viral Diseases Unit of the WHO instructed the director of the lab to send the samples to a lab in the United Kingdom, which would forward them to the Center for Disease Control (CDC) in Atlanta, the world’s reference laboratory for haemorrhagic viruses. But the researchers were so taken by their work that they decided, in collusion with their boss, to keep some of the material. The moment arrived when their semi-clandestine secondary cell line was ready for analysis. For the first time, they were able to look at the photographs that had been produced. “This looks like a Marburg,” the team leader exclaimed – Marburg being the only virus known at the time that was as long as the one they were looking at. It was not only about length, but also about lethality. Confronted with the evidence that “their virus” was closely related to the terrifying Marburg, the laboratory director was not prepared to risk the lives of his researchers. He decided that work would be shelved at once and the remaining samples sent to the CDC. Soon afterwards, news arrived that the CDC had
This is one of many momentous incidents that convey something of the feeling of what thriving on the cusp of uncertainty is like in science. It captures one of those exceptional moments when seeing, understanding or grasping something hitherto unknown is at one’s fingertips. It conveys the excitement of being on the threshold of something that has never happened before. Many other examples could be taken from any scientific field, ranging from awe-inspiring cosmology to humble plant biology. Each December in Stockholm, a tiny fraction of worldwide research conducted at the edge of knowledge or pursued against all odds is justly celebrated by honouring the will of Alfred Nobel. It provides some fascinating glimpses of science’s swirling dance with uncertainty. In the life of a researcher, joy is often mixed with frustration. Persistence requires patience and meticulous attention to detail. Continuing curiosity pervades every step. And each summer in Lindau, on the shores of Lake Constance, Nobel laureates meet with a group of more than five hundred young researchers to share with them their personal experience of embracing uncertainty and the scientific breakthrough that followed.
Thriving on the cusp of uncertainty – the thrill of being only a few steps away from a major discovery, of experiencing the elation of being the first to peer at what no human eye has seen before or to step onto territory where nobody has trod before – is not unique to science. It pervades many human activities, from the arts to mountaineering, from the patient work in the dust of historical archives to thrill-seeking adventures involving the reckless pursuit of novelty for its own sake. It exists among the traders gesticulating frantically on the stock exchange floor and holds in thrall the concert audience listening in silence to a live performance by an artist. As such, it tells us much about the human species, as well as about the social institutions through which we encourage or impede, support, control or selectively reward such endeavours. Thriving on the cusp of uncertainty is an intensely individual endeavour that is at the same time framed and embedded in a collective mode of bringing it forth and sustaining it.
Science has developed highly efficient modes of transforming uncertainties into certainties. Research is a profoundly collaborative venture, and every scientist acknowledges the old adage of seeing further only because she or he is standing on the shoulders of giants. Especially in fundamental research, the ability to see further or to achieve what nobody has achieved before thrives on uncertainty. It pushes the boundaries. This is not to say that this is the only and the overwhelming driving force in “doing” science. The majority of scientific
The line separating what is known and what is not yet known, the divide between knowledge that is certain and ideas born of speculation and wild imagination, varies in the different scientific fields. The frontier is dynamic and quickly shifting. Imagination is a great source of inspiration, but what matters in the end is whether the idea works in reality or not. Imagination must be disciplined. Lee Smolin concludes that all progress of human civilization, from the invention of the first tools to our nascent quantum technologies, is the result of the disciplined application of the imagination (Smolin 2013). The community knows where the “hot spots” are, those intense convergences of novel insights and ways of putting them to work that spout like Icelandic geysers. They may also disappear as quickly as they appeared, only to burst out again somewhere else. Understanding what produces them and how these forces can be turned into a source of energy is part of the hard work that is science, in close alliance with technology and engineering.
Nor are these uneven and unpredictable trajectories unique to the natural sciences. The great stock of knowledge in the humanities is shared in a colourful and sometimes contradictory tapestry of different interpretations, which projects a fascinating picture of the human past in all its diversity. Over time, new empirical evidence emerges in the form of documents and other archival material, as well as physical artefacts and what scholars have made of them. It gives rise to new interpretations, conjectures and juxtapositions. It is mainly about the past, but a past that continues to be seen in the light of the present. Writing on the future of philology as the fate of a soft science in a hard world, Sheldon Pollock defines it as the discipline of making sense of texts. It is the critical self-reflection of language, of every language. It thus merits the same centrality among the disciplines as philosophy or mathematics (Pollock 2009, 934).
François Jacob once contrasted what he called “day science” with its darker twin “night science.” Day science employs reasoning that meshes like gears and proudly presents the results, often with the conviction of certainty. It parades the bright side of scientific achievement and collects the rewards that come with recognition of accomplishment. Night science, on the other hand, wanders blindly. It hesitates, falls back, sweats, wakes with a start. It takes place in the lab where everything seems possible and where imagination can roam wildly, where hypotheses take the form of vague presentiments and where plans for an experiment have barely taken form. It is the place where thoughts proceed along sinuous paths, tortuous streets, but most often blind alleys. They are littered with setbacks, doubts, errors and frustrations. What
In the exploration phase, research is tentative and encourages speculation. It is willing to go wherever curiosity leads. It often proceeds by trial and error and intersects in oblique and reiterative ways with the phases that follow. It may have to change direction, overturn established views and even set out on untried paths. Research is devoted to the laborious task of converting titillating uncertainties into empirically reproducible results. Towards this end, intervention and manipulation have to follow both the scientific imagination and the strict control mechanisms of science. In the process of transforming uncertainties into certainties, a moment arrives when the leaps of the imagination and the instrumental shortcuts that were at first encouraged are no longer permitted. With this ultimate stage in view, Richard Feynman once said, “It doesn’t matter how beautiful your theory is. It doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.” While experiments are not crucial for every scientific field, agreement with reliable and reproducible proofs remains the touchstone for every science.
The control mechanisms of science demand that claims of new discoveries or results obtained by experiment be validated. They must lend themselves to replication by other research groups able to obtain the same or very similar results. Newly produced data must be reliable, and their origins and the methodologies used to obtain them need to be well documented. Theories offered as explanations will be confronted with whatever empirical evidence is available, to confirm, modify or discard them. Nature and scientific peers remain the supreme arbiters in deciding whether a scientific claim approximates truth and in confirming that a breakthrough has indeed occurred. As with any other human activity, time will ultimately tell. Every scientific theory and scientific finding has only provisional status. Scepticism is a normative dimension in the ethos of science. There is no guarantee of attaining certainty, as there can be none. Tomorrow we will know more, and better, than today.
Thus, there is ample room for errors in science, making their detection, elimination or correction all the more important. As mentioned before, errors
One way of exploiting as well as coping with the inherent uncertainty of the research process is to proceed step-wise, in ordered sequences that vary from field to field. In the initial phases of exploration, there is ample room for the unexpected setbacks that may require a restart. Next, various thresholds have to be crossed while the pathways are still open, gradually converging on an answer to the ultimate question: does it work? Patiently and persistently, the next steps are taken while reversals are still possible. These thresholds are not the preset milestones and deliverables requested by a managerial interpretation of how research works. For fundamental research, milestones and deliverables make no sense. Nor are they necessary. Overwhelming proof exists for the “usefulness of useless knowledge,” as Abraham Flexner argued in the manifesto he wrote in the 1930s, the thinking behind which led to the establishment of the Institute for Advanced Study in Princeton (Flexner 1939).
The histories of science and technology are full of examples showing that the dichotomy between useless and useful knowledge is often spurious and dependent on many contingencies. Nor does innovation proceed in a straight line from a bright idea to its realization. Far from being at the opposite end of a polarized spectrum, innovation is much more closely linked to fundamental research than political rhetoric wants the public to believe. Both are inherently uncertain with regard to their eventual outcome. This is best exemplified by radical innovation, which is exclusively science-based. Its radicalism consists in its capacity to become the seedbed for a scientific-technological paradigm change that has wide-ranging consequences for the ways in which society and the economy function. In its pervasiveness, the paradigm change that comes with radical innovation spawns a host of new technologies, which emerge initially in the research environment. The dramatic changes that spring from the computerization of almost all facets of daily life are part of a long journey with
The process of transforming uncertainties into certainties eventually leads to a necessary stabilization and an always preliminary closure. The process is a fluid one, but science as a social institution has erected a boundary that divides an error-friendly environment from one in which errors are no longer tolerated. Mostly, it coincides with the publication of scientific results. As much as error and failure belong to the creative phase leading up to a scientific publication, they are no longer tolerated afterwards. They are even censored. When an error is discovered after a scientific article is already in print, the authors are expected to retract the paper in full public view. Even if no misconduct is involved, it hardly advances their scientific reputation.
It is easy to see why. The publication of scientific results reflects not only on the authors. As an expression of its scientific autonomy, the entire scientific community takes responsibility for validating and verifying the results, thus publicly confirming the claims that are made. This is what peer review is all about, based as it should be on detailed scrutiny. In reality, the peer review system is near its limits due to overload, stemming partly from the increasing pressure to publish. This has also led to allocating disproportionate credit to papers that have been published in journals with a high impact factor. A high impact factor signals importance and relevance and is eagerly taken up by time-deprived academic administrators and funding agencies. It is also one of the reasons why negative findings are almost never published, even though it is admitted that their publication might prevent others from travelling down the same wrong road. And it also explains why the scientific community is extremely sensitive to charges of misconduct and fraud, even if in practice it does not always find it easy to follow its own convictions and ideals (Nowotny and Exner 2013).
But errors do occur and even great scientists are sometimes wrong. They may come up with a theory that does not hold when confronted with conclusive scientific evidence to the contrary. Theories are “things to think with” and therefore tools. However much data there are, many theories will explain them more or less well. They do not reveal the truth but are attempts to approximate truth. Wrong ideas are sometimes helpful, in science as in everyday life, when they unexpectedly open up new avenues that would otherwise have remained closed. These are the productive errors. Errors are the explorers’ unavoidable companions. They occasionally turn out to be helpful. But there is yet another companion. It is a friendly ally that becomes indispensable to research. This is serendipity, the unexpected discovery of a phenomenon that one was not looking for but whose significance is soon realized. Serendipity is
Depending on their nature and on where and when they occur, measures are taken to detect errors in time and to avoid them altogether. Errors can be fatal. Protocols have to be followed in operating rooms and cockpits. In engineering, nothing is worse than having errors creep into calculations that are then transformed into physical structures, such as bridges or aeroplanes. There is no lack of horror stories of things that may and do actually go wrong. The Mars Climate Orbiter, a US$125 million spacecraft that went missing in September 1999, was shown at an exhibition of instructive failures in Dublin. It turned out that one engineering team had used metric units, the other imperial – a simple error that turned the Orbiter into a collector’s item (King 2014). Like engineers, architects dread physical errors, compensating with redundancy and precision (Hughes 2014). Safeguarding against errors has given rise to many error-detection systems that are tailored to the specific error-proneness of the system they monitor.
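The Orbiter’s fate illustrates why many error-detection systems make units explicit rather than trusting convention. A minimal Python sketch (hypothetical classes for illustration, not the software actually involved) shows the idea: tagging every quantity with its unit turns a silent metric–imperial mix-up into an immediate, loud error.

```python
class Quantity:
    """A numeric value tagged with its unit of measurement."""
    def __init__(self, value: float, unit: str):
        self.value, self.unit = value, unit

    def __add__(self, other: "Quantity") -> "Quantity":
        # Refuse to combine quantities whose units differ.
        if self.unit != other.unit:
            raise ValueError(f"unit mismatch: {self.unit} vs {other.unit}")
        return Quantity(self.value + other.value, self.unit)

impulse_si = Quantity(4.45, "newton-seconds")   # one team works in SI units
impulse_imp = Quantity(1.0, "pound-seconds")    # the other in imperial units

try:
    total = impulse_si + impulse_imp            # the Orbiter scenario
except ValueError as err:
    print(f"caught before launch: {err}")
```

Real engineering codebases use far richer unit libraries, but the design choice is the same: make the error impossible to commit silently, rather than hoping every team remembers the convention.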
Nor do people like failure. Psychologically, failure is not easy to admit. The previous cognitive and emotional investment may be high, and public attention or visibility adds to the pressure. This is why even eminent scientists have sometimes shown extreme resistance to giving up on one of their theories, even after it has been dismissed by most of their peers. Mario Livio has analysed prominent brilliant blunders committed by great scientists from Darwin to Einstein, from Lord Kelvin to Fred Hoyle and Linus Pauling. He shows which particular blindness led each of them to espouse and hold on to a wrong idea. On the personal side, their reactions to being proven wrong, and to recognizing it, differ. Einstein and Darwin turned out to be good losers, while Hoyle obstinately held on to his wrong theory of the universe even after the 1964 discovery of the cosmic microwave background radiation had decisively disproved it (Livio 2013).
The cusp of uncertainty, that fortuitous and receptive period of time when the unexpected may be discovered and the unprecedented occur, does not remain open indefinitely. In fact, it is highly uncertain how long it will last. Remaining too long on the cusp may turn out to be a trap. Knowing when to quit is therefore important. Not everyone is ready to subscribe to the radical views of Samuel Beckett’s summary of his art: “Ever tried. Ever failed. No matter. Try again. Fail better.” Which does not prevent politicians, business leaders and management consultants from rhetorically promoting the many start-up enterprises with similar-sounding slogans. Due to the uncertainty that is inherent in research as much as in innovation, failure is an inevitable part of the
The difference begins with the definition of what constitutes success. For innovation, it is both easier and more difficult than for research. Easier, as it is the market that decides whether a product or process is taken up and proves commercially successful. More difficult, as “the market” as such does not exist: markets differ according to sector, country, the regulations exerted over them or lack thereof, and other factors. A range of impressive examples exists in which the state has been deeply involved in stimulating and actively supporting innovation, contradicting the stereotypical or ideological belief that only the private sector can perform innovation at high risk (Mazzucato 2013). State-supported innovation resembles a funnel for certainty more than the cusp of uncertainty. But somewhere, and at some point in the process, there is a creative research team that knows how to exploit that uncertainty, and very often their funding comes from a public source.
The mantra for encouraging start-up entrepreneurs, “Fail early, fail fast, fail often,” continues to be repeated. They have to be convinced that failure is more than just committing errors. Political and managerial rhetoric presents it as a virtue. It is often depicted as a cultural trait that is dominant in the United States and lacking in Europe. When presented in such a way, the institutional conditions that facilitate or prevent it, like the availability of venture capital, are excluded from view. The exhortation to fail becomes a self-fulfilling prophecy, while remaining a realistic reflection of the competitive pressure that start-ups are under. It predicts that only a small minority of the numerous start-ups will actually succeed in positioning themselves in the market before being bought up by a larger company, as is often the case. Failure, in this sense, is the precondition for generating the requisite variety before selection in and through the market sets in.
Failure may also result from the accumulation of a series of errors. Trial and error are then replaced by error through repetition. This is largely the case with ageing cells, leading eventually to the ultimate failure – that of life itself. It is a failure of and for the individual organism, but not of and for the species. Many mutations arise when the machinery for copying our genes makes mistakes that continue to accumulate. But the view on mutations today is that not all are haphazard mistakes. Rather, some mutations are “directed,” i.e. sometimes localized and under environmental or developmental control. It echoes Pasteur’s dictum that chance favours the prepared mind and transforms it into
Something similar may be said to occur in highly successful organizations. James March, a sociologist of organizations, inquired why successful organizations innovate at all. How do the mechanisms of novelty generation survive and reproduce when, from an organizational point of view, novelty is nothing but deviance? The organization and its overconfident leaders are so habituated to success that they have no reason not to continue as before. They take it for granted that changes in the environment, if they occur, will not affect them. They believe that they can ignore uncertainty. And yet even the most successful do in fact innovate. Empirical case studies reveal that organizational slack, i.e. the uncommitted resources produced by efficiency, is sometimes used to protect the status quo. Foolishness is therefore not eliminated; together with managerial hubris and (over)competitive optimism, it belongs to the instruments of adaptive effectiveness. Overconfident leaders are either oblivious or reckless in the face of adversity. When it finally hits, they have to innovate. Another road by which novelty enters is through copying past success. Paradoxically, exact copying defies even the best-organized routine in well-run organizations. Errors creep in through repetition and, unintentionally, may become the source of novelty (March 2010). The cunning of uncertainty has found some cracks in the wall. Past achievements may unwittingly generate something new, but they may also contain the seeds of disaster.
3 The Temporalities of Uncertainty: Knowing When
Not knowing when change is necessary and so persisting in habitual patterns recalls the messiness of ordinary social life. It is replete with errors and mistakes, with trials and renewed efforts that are neither designed nor planned. They simply happen. The cusp of uncertainty is far away, and the possibility of thriving on it is an even rarer moment. Faced with the complexities of the social world and the turmoil generated by social beings, certainty about what the future will bring and preparing successfully for it seem more elusive than ever.
Knowing when – timing – reveals yet another dimension in embracing uncertainty. Timing matters. It arises when the temporalities of uncertainty intersect with the temporalities of human action. To know when to plunge into the unknown and to seize the right moment can be decisive. For the ancient Greeks, kairos was the right moment for accomplishing a crucial action.
Let us briefly reconsider the previous modes of embracing uncertainty. The social world is full of unintended consequences lying in the wake of purposeful action. Embracing uncertainty by unwrapping the gift of randomness puts us in the universe and, closer to us as the human species, in evolution. This kind of randomness appears incorruptible. We still do not know whether God plays dice or not. Embracing randomness takes courage, but it also imparts strength when randomness turns out to be in one’s favour. Then one can claim to be lucky. But it may also bring the realization that one may end up on the wrong side of what turns out to be a situation of either/or. Embracing randomness encapsulates the relationship between the individual and the forces that govern the universe and life on earth. It transcends human action and planning, strategic thinking and even deviance. Embracing randomness is humbling, but it can be put to good use. This is what makes gambling attractive. In the policy-making world, randomness can be harnessed when it is called upon as an impartial arbiter. In randomized trials, for instance, the uncertainty that chance carries is used in a controlled way. There are also occasions in one’s personal life when decisions are extremely difficult to make. Then one is not only ready to accept the verdict of chance, but it is felt as a relief. It takes away the responsibility that appears too big to shoulder.
Embracing uncertainty by using randomness acknowledges the power it has beyond human capabilities, while thriving on the cusp of uncertainty brings forth the potential to transcend human limits and limitations. It pushes us towards some of the greatest achievements in the sciences and in the arts. Not just that. The building of entire civilizations, the exploration of territories beyond the confines of the immediate environment, the numerous experiments conducted by our ancestors with different modes of life, which turned out to be essential for survival – they all meant to acknowledge, to accept and to actively engage with uncertainty. Living on our tiny blue planet in its present precarious condition and having to sustain a global population that will soon reach 9 billion cannot be done without moving both dynamically and delicately on the cusp of uncertainty. In unleashing the creativity of a multitude of individuals, every organization is likewise challenged to re-invent itself and its preparedness to live with uncertainty. Serious consideration must be given to the question of whether existing institutions, many of which were invented
Embracing uncertainty through timing, knowing when or daring to act at a specific moment in time, is mainly a response to uncertainty that emerges from interactions in the social world. Decisions may be deliberate, but unwittingly they generate consequences that are neither intended nor have they ever been considered. Thus, the unintended consequences of deliberate human action are constitutive of the temporal complexity of society. Unintended consequences continue to feed this complexity, and the temporal range varies enormously. Each single action impinges on and implicates other actions, performed at the same time or at other times and in many different places. Thus, social interactions and feedback generate ever-new entanglements and interdependencies. Computer simulations of artificial life can demonstrate how some very simple assumptions give rise over time to more complex patterns of interaction and even to relatively stable structures. As the different structural patterns of artificial life emerge in the simulation game, the importance of whether to respond by retaliation or by reciprocation becomes clear. The timing of these responses is crucial for the outcome and determines whether, in the end, it is the free-riders or the cooperating agents who dominate.
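Simulations of this kind can be sketched in a few lines. The following is a minimal illustration, not a reconstruction of any specific study: an iterated prisoner’s dilemma in the style of Axelrod’s tournaments, with conventional payoff values and two assumed strategies, a retaliating cooperator (“tit for tat”) and a free-rider who always defects. It shows how the timing of retaliation, responding immediately rather than never, limits what a free-rider can extract.

```python
# Minimal iterated prisoner's dilemma sketch. Payoff values and
# strategy names are illustrative assumptions, not taken from the text.

# Payoffs for (row, column) moves; "C" = cooperate, "D" = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then echo the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def free_rider(opponent_history):
    """Always defect, regardless of the opponent's behaviour."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game and return the two total scores."""
    score_a = score_b = 0
    hist_a, hist_b = [], []  # moves made by A and by B so far
    for _ in range(rounds):
        a = strategy_a(hist_b)  # each player sees the other's history
        b = strategy_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, free_rider))   # → (9, 14)
print(play(tit_for_tat, tit_for_tat))  # → (30, 30)
```

Against the free-rider, tit for tat loses only the opening round before retaliation sets in; against itself, cooperation is sustained throughout. In larger tournaments with many interacting strategies, it is precisely such differences in the timing of responses that decide whether cooperators or free-riders come to dominate.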
In real life and in many social endeavours, timing may determine victory or defeat, life or death. “If not now, then when?” asks the Mishnah. Under less dramatic circumstances, timing decides gains and losses, the right beginnings or the wrong endings. Evolutionary biologists do not tire of quoting Dobzhansky’s insight that in biology nothing makes sense except in the light of evolution. Likewise, nothing in social life makes sense except when it is seen in the light of timing. It spans a wide range, from precious moments to rare or extreme events. It forms a selective moment in which purposeful social action encounters the texture of the social fabric, which is made up of a multitude of other purposeful social actions and, above all, by their unintended consequences.
However, how do we know when to act and when to abstain, wait or delay? Uncertainty about timing is a well-known problem in the analysis of decision making, be it in war games or in corporate strategies. It easily escapes prediction as the answers are mostly given in hindsight. To have a good sense of timing is an art. It is also a matter of luck. It is partly intuition, partly experience, partly absorbing the right cues from the environment, and partly the ability to listen to one’s unconscious that in some unfathomable way whispers Now! or Don’t! Not yet!
The role of timing in the life of individuals, social groups, institutions, corporations or even states is not always apparent, nor is it consciously recognized. It is felt when things go wrong. “Not all days are equal” is an inscription that an anthropologist friend saw in Ghana a long time ago. What to make of each day and where to anchor it in longer trajectories or in the shorter temporal bits of which each day is composed is one of the salient characteristics of leadership. Timing spans a huge range of social interactions, from the delicate moments of intimacy to the building of enduring social ties. It is to be found in the turmoil of geopolitical relations, in which highly sophisticated strategic thinking can oscillate with some of the most primitive blunders. At every level, timing plays a role, depending on whether it is in sync or out of sync with what others know and do and with the unintended consequences that arise from these interactions. Being in or out of sync with the notorious Zeitgeist, the hard-to-define assemblage of larger economic, political or cultural developments, determines whether convergence or a counter-cyclical, perhaps defiant engagement will result. Embracing uncertainty means being aware of this temporal complexity.
The obsession with the frenzy triggered by the “acceleration of just about everything” (Gleick 1999) focuses on the compression of time and the loss of control that accompanies it. The acceleration of social change may generate a (re)new(ed) sense of alienation in those who are overwhelmed by it (Rosa 2010). Arguably, the effect of acceleration on the sense and practices of timing is of far greater importance than the hunger for time induced by attempts to press ever more activities into the circadian rhythm reserved for the human species. Social change is therefore also always about timing. People feel left behind when de-synchronization sets in between their experience and their expectations, when the changes occurring in social institutions and structures become too large for them to accommodate. Then people risk falling outside time. They risk losing the little control over timing that they had.
If modernity was characterized by an obsession with planning, and modernism by its compulsive removal of every material it considered superfluous or ornamental, modernity also introduced contingency as a feature of social life. Contingency means that events and what happens are neither necessary nor impossible (Luhmann 1989). The realization that something could be otherwise opens a range of options experienced as liberating as well as potentially overwhelming. Science and unprecedented technological advances have since pushed contingency much further while at the same time expanding the range of control over events.
As a result of these intertwined developments, the possible, this vast potential of promises laced with visionary fantasies and a knack for new business
The sheer weight and pressure of controlling the temporalities of uncertainty can also lead to them being delegated. Timing is then best left to an algorithm that determines in which millisecond the electronic transmission delivers information or makes a decision for the user. All that need be done is to push the button. In this partly self-organizing admixture of human action and preset computerized trajectories, uncertainty has been removed, but it does not disappear. It is now hidden behind the computer screen. It has been shifted to a higher level of complexity. We have come a long way from the supplicant who, thousands of years ago, submitted questions to destiny through the cracking of oracle bones, reading the answers in a form that is most likely the origin of Chinese writing. Computer programs can do this for us now, but it is still humans who have to provide the algorithms. Time and timing need to be modelled in new programming languages that include notions like the thickness of an instant and introduce concepts intended to capture hierarchical and multiform time, which the algorithm creates through the repetition of events and the relationship between real and continuous time (Berry 2014).
But the complexity of social life does not yield easily. It increases with the number and density of interactions. They unfold in non-linear ways. So does uncertainty regarding their consequences. As a principle of the organization of societies, centralization has given way to multiple forms of decentralization operating in various modes of governance on a vast scale. Heterarchies spring up next to hierarchies without replacing them completely. Government has been transformed into governance, complemented and preceded by the rise of the individual as consumer, voter and proprietor, as well as manager and caretaker of one’s body. Social structures that have been in place over centuries adapt swiftly to the nodes and hubs of horizontal networking. They generate new and absorb old ways of bonding that are now channelled through social media. They guarantee an overflow of information, processed through the new communication technologies, which waits to be acted upon. The act and art of timing becomes flatter the greater the speed and volume of messages to be processed.
Among the many social repercussions produced by these ongoing transformations, the normative trajectories of the life course occupy a special place. Biographical certainty, this socially assigned and internalized projection of an individual’s pathway through life, is punctured. It collapses like a balloon. With it, social identities start to crumble. Faced with a possible void, a patched-up version of the self appears. Or rather, several interchangeable patched-up versions – depending on what is cleverly advertised by markets and the skills of an individual in the do-it-yourself mode of fashioning one’s life. Modernity has prided itself on keeping material insecurity at bay, on taming uncertainty through planning, and on offering an open horizon to the future in the name of technical progress. The traits now declared essential for survival are flexibility and adjustment. But once insecurity returns, it becomes much more difficult to keep uncertainty at bay.
So, how to live with contingency, the many options and choices whose outcomes could have been different and otherwise? How to get some firm ground under one’s feet? At a cognitive and emotional level, contingency sharpens the human ability to tolerate and juggle ambiguity. The human brain equips us to play several registers simultaneously. Knowing when to switch and which of the registers to deploy is decisive. The different meanings created by ambiguity are superimposed. They can be made to resonate just like music. One of the registers can sensitize us to appreciating probabilities, the likely occurrence of an event among several possible ones. People have learned, to take a trivial example, how to interpret and use weather forecasts and their probability ranges without insisting on certainty. Could they not also adopt a probabilistic attitude to life? We need not delve into a maze of calculations or compare Bayesian reasoning with natural frequencies. Nor is it mandatory to become an expert on topics like “why so many predictions fail – but some don’t” (Silver 2012), although knowing the answers is interesting and often useful. A probabilistic outlook on life has at least some ways of accommodating contingencies. So have the “fast and frugal” rules of heuristics. These registers need to be superimposed, harmonized or counterpointed with other registers with which the human brain attempts to understand the world.
At the societal level, once planning and the social structures that uphold it begin to crumble along with the temporality of uncertainty, either chaos takes over or improvisation and muddling through set in. Improvisation is a partly self-organizing response and largely uncoordinated when previously operating coordinating mechanisms fail. As a bottom-up phenomenon, it crops up unexpectedly in many places. It puts whatever material is at hand to short-term use. Makeshift and short-lived solutions provide temporary relief while
Like improvisation, muddling through is practiced and valued by many policy makers. It first hit the policy scene when Charles Lindblom observed incrementalism as a reasonable approach to public administration, enabling continued adaptation to change (Lindblom 1959). Typically, muddling through proceeds incrementally. It makes it easier to keep track of consequences and their effects. It shortens the temporal range of unintended consequences, skirting delayed uncertainties. It has no place for big ideas and shuns big schemes. It does not promise salvation. But it still faces the problem of unintended consequences. The component parts of muddling through, the ones that constitute the piecemeal approach, much like the bricolage elements of improvisation, get thrown into the great blending machine of societal complexity. There, they combine with other elements and leftovers of unintended consequences. At some point, in some place, recombined and often enlarged by their increased complexity, they suddenly turn up again. The question is who cares and who will take care of them.
With its inbuilt pragmatism, muddling through is not the worst option at a time when volatility has replaced stability. It is easy to criticize the overconfidence of political leaders that has led to the disasters of the past and to lament the lack of leadership in the present. In a climate of austerity and economic crises, the tendency is to shirk uncertainty, not to embrace it. But knowing when to muddle through can also be a deliberate act, reflexive of its timing. Whether it will be sufficient and for how long remains an open question. Meanwhile, the cunning of uncertainty may be on its way to push us anew in unexpected directions.1
Berry Gérard. 2014. L’informatique du temps et des événements. Collection Leçons inaugurales du Collège de France. Paris: Fayard.
Hughes Francesca. 2014. The Architecture of Error: Matter, Measure, and the Misadventures of Precision. Cambridge, MA: The MIT Press.
Jablonka Eva, and Marion J. Lamb. 2005. Evolution in Four Dimensions: Genetic, Epigenetic, Behavioral, and Symbolic Variation in the History of Life. Cambridge, MA: The MIT Press.
Jacob François. 1987. La statue intérieure. Paris: Odile Jacob Seuil. Translated by Franklin Philip as The Statue Within: An Autobiography. Plymouth: Cold Spring Harbor Laboratory Press, 1995.
Konrad Monica. 2005. Narrating the New Predictive Genetics: Ethics, Ethnography and Science. Cambridge: Cambridge University Press.
Libet Benjamin, Curtis A. Gleason, Elwood W. Wright, and Dennis K. Pearl. 1983. “Time of Conscious Intention to Act in Relation to Onset of Cerebral Activity (Readiness Potential). The Unconscious Initiation of a Freely Voluntary Act.” Brain 106: 623−642.
Livio Mario. 2013. Brilliant Blunders: From Darwin to Einstein – Colossal Mistakes by Great Scientists that Changed Our Understanding of Life and the Universe. New York: Simon & Schuster.
Merton Robert K. 2004. “Afterword: Autobiographic Reflections on the Travels and Adventures of Serendipity,” in Robert K. Merton and Elinor Barber, eds., The Travels and Adventures of Serendipity. Princeton: Princeton University Press, 230−298.
Rosa Hartmut. 2010. Alienation and Acceleration: Towards a Critical Theory of Late-Modern Temporality. Malmö/Aarhus: Aarhus University Press.
Zelizer Viviana A. 1979. Morals and Markets: The Development of Life Insurance in the United States. New York: Columbia University Press.
Zelizer Viviana A. 2010. “Culture and Uncertainty,” in Craig J. Calhoun, ed., Robert K. Merton: Sociology of Science and Sociology as Science. New York: Columbia University Press.
This chapter was previously published in Nowotny: The Cunning of Uncertainty, Polity Press 2015. We are grateful for the permission from the author to include it in the present volume.