Author:
Richard Rottenburg

1 Metrics in Africa as Object of Investigation

For decades, most debates about how to create and secure political, legal, economic, and social systems that benefit the greatest number of people on the African continent have been largely dominated by one burning issue. The logic of the problem is this: improvements in social and legal systems are almost always seen as dependent on improvements in technological infrastructures. These improvements, in turn, are almost always seen as dependent on the diffusion of technologies. Advanced technologies have usually been developed elsewhere and need to be translated into local contexts in order to be useful. In most of these translation processes, only some aspects can be adapted to local conditions, while others must remain unchanged for the technology to work. The technological limitations of adaptation lead to a degree of global standardisation and centralisation. In many cases, this is not conducive to the original purpose of translation. There is therefore an inherent tension between globalisation and localisation.

The technology at the heart of this volume is metrics. Nowadays, the term metrics usually refers to digital metrics, which in turn depends on digital infrastructures. These are characterised by the fact that they are global but do not have a single centre. They are deliberately run through decentralised networks. This also means that while the core elements of the infrastructure may be centralised in Silicon Valley, Shenzhen, or Hyderabad, they can be used in a decentralised way for local purposes anywhere in the world. At the same time, key elements of the definition of equivalence must be defined in the centres of computation in order to work globally. This strongly shapes and constrains the possibilities that remain in local contexts.

The contributors to this volume use a variety of empirical case studies to explore the tensions between centralisation and decentralisation in the translation of circulating forms of metrics. Examining mathematical operations among the Yoruba people of Nigeria, Helen Verran asks how different number systems and ways of calculating can coexist within the same community of practice. Helen Robertson examines the extent to which a machine learning model can be said to possess the relevant concept that precedes the classification which the machine is supposed to learn. René Umlauf tackles a related phenomenon by examining how machine learning models developed by a company in San Francisco are trained in a “data factory” in Uganda. Looking at digital lending in Kenya, Emma Park and Kevin P. Donovan ask how the centralisation and standardisation that accompany digitisation affect social variations in local lending practices. Véra Ehrenstein addresses a connected issue in the field of forest metrics in Gabon, asking to what extent centralised and standardised forms can become independent of local practices. Jonathan Klaaren examines how locally hard-won rights of access to information in post-apartheid South Africa have been shaped by the introduction of new information technologies, but continue to be influenced by their history.

The aim of this introduction is to provide a framework for a better understanding of the common themes that unite the very different topics of the chapters in this volume.

2 Situating Metrics

In order to further specify the focus of this volume and its objects of investigation, it is helpful to ask how metrics relates to non-numeric forms of knowledge practices that seek to make sense of the world by reducing its complexity.

Metrics is used as a generic term for various forms of numerical representation, also known as quantification, which includes counting, measuring, and calculating. Metrics seeks to grasp the general by analysing the relationship of the one to the many. It can be practised in analogue and digital forms, and in principle both follow the same mathematical logic (Didier 2021). In our time, metrics is most often associated with digitalisation, and so it is in this volume. While digitalisation does not change the logic of metrics, one of its effects, namely datafication and the resulting big data, does seem to change its character (Mayer-Schönberger and Cukier 2013). Once interoperability between different data sets is achieved, the data collected can be used for purposes that were not known at the time of collection. And once this state of affairs is established, installing technological infrastructures to generate data becomes an end in itself. Data mining and related new digital technologies emerge and change the character of metrics.

Hermeneutics is used as a generic term encompassing all non-numerical forms of interpretation and representation, but above all it relates to language and words, to narrating and reading the world. Hermeneutics seeks to grasp the general by interpreting the relationship of the part to the whole, of the particular to the general. To some extent separately, but in principle together, metrics and hermeneutics constitute the world as we know it. Both kinds of knowledge practices generate their own apparatus of inquiry and archive of knowledge. Raising questions about the entanglements between metrics and hermeneutics is one of the main aims of this introduction, in order to frame the work of the following chapters.

The starting point for this volume is that metrics has become the most robust form of evidence in any public dispute around the world. This is due, at least in part, to the understanding that quantification translates the political into the technical and thus provides the most effective language for communication across social, cultural, and disciplinary divides. As early as 1904, Max Weber had a dark premonition of the modern rationalisation of the social world, and of quantification as an important dimension of it. He recognised that modernity was constructing a “steel-hard casing” for itself, and this filled him with a remarkable unease (Weber 1972, 203). Weber identified the elements used in the construction of this casing as Protestantism, capitalism, industrialism, bureaucracy, and mechanical technology, which together rationalised the Western lifeform. He feared that this very rationalisation would become a self-devouring process that would not end until “the last ton of fossil fuel has burned up” (Weber 1972, 203).

Since the beginning of the twentieth century, Weber’s premonition has continued to haunt many public discourses. It has taken on an even darker tone with the advent of digitalisation and datafication. The artificial intelligence community’s call in March 2023 for a six-month moratorium on AI development to prevent chaos was probably one of the most extreme expressions of the dark premonition Weber had articulated more than a hundred years earlier. The opening paragraph states:

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control. (Future of Life Institute, 2023)

Of course, the problem is huge and the ability to see a pattern in its development is limited. As usual in such situations, not everyone is equally alarmed. Some are more fascinated by the current panic and the escalation of a rhetoric of finality, arguing that these reactions are not primarily related to what is actually going on. Others are fascinated by the amount of attention, energy, and money being invested in post-solutionist projects, such as the search for an alternative planet as a future habitat for a humanity that has destroyed its original planet. Still others are convinced of the need to pay more attention to details and differences, and to avoid attributing truth to grand assumptions about the course of history when not enough is known. The contributors to this volume are mainly invested in the latter approach.

Closer to the questions raised in this volume, a growing number of concurrent studies problematise the implications and consequences of the expansion of digital metrics to countries with comparatively new and sparse networks of numerical representation, most of which are postcolonies (Breckenridge 2014; Jerven 2015; Nyabola 2018; Lamoureaux et al. 2021). Three fundamental questions come to the fore differently in sparsely-measured postcolonies than in densely-measured, highly-technicised countries. One question concerns the extent to which, and the way in which, trust in digitised metrics depends on trust in the organisations that produce the numbers and in the institutions that regulate the producers. We are dealing here with a particular version of the foundational circle of trust creation. As noted above, numerical evidence has become the gold standard of evidence in general. You need numbers to challenge numbers. But what can you produce as evidence if you do not trust the way the numbers are produced? A second question concerns the problem of data sovereignty in relation to state sovereignty. Finally, a third question problematises the tendency in African countries to move towards digital metrics without first institutionalising basic infrastructures of quantification. Taken together, these three questions seek to open up a space of inquiry that does not start from unproven assumptions about the adoption of digital metrics as a promising form of leapfrogging.

The contributors to this volume scale these larger questions down to a level at which they can be examined empirically through praxiographies. They follow Thomas Hylland Eriksen’s credo of studying large issues in small places. In the remainder of this introduction, I draw out what the social study of quantification has to say about knowledge practices that depend on digital metrics in postcolonial African settings. The guiding question problematises the idea that doing metrics and asking about the deeper meaning of life are two separate endeavours. In this vein, I argue that rather than using hermeneutics to critically examine metrics, or using metrics to discard hermeneutical insights, it is perhaps more accurate and promising to ask how metrics and hermeneutics are intertwined (Didier 2021; Morgan 2022).

3 Hermeneutics

Hermeneutics maintains that the most elementary part of social life is the meaning related to acting—here understood in the emphatic sense of Max Weber’s notion of handeln (Weber 1973). Actions are conceived as being meaningful to the participants of an interaction. Subsequent actions are understood to be oriented towards the meanings of prior actions conveyed through various forms of signification, mainly through narratives. Accordingly, to understand a particular event or situation one needs to look at the social actions that led to the particular event or situation and identify the meanings that the various participants give to their own actions and those of others acting before and after them. This implies the necessity to look at the institutionalised forms that are indispensable for sense-making and at the notions implied in these forms (Garfinkel 2012).

Some of the most basic concepts that are essential for making sense are causality, the principle of non-contradiction, temporality, rationality, responsibility, person, notions of good and evil, reciprocity, life, death and the afterlife, and all sorts of classifications and orders of things. But these concepts and the institutionalised forms associated with them—and this is the important point—cannot be used as sufficient explanations of action. Unlike behaviour, action by definition means acting freely and in view of making a difference. So, on the one hand, institutionalised forms are there to guide actions in certain ways and to prevent them from going otherwise. But on the other hand, because institutionalised forms are constantly re-enacted and only thus remain alive and functional, each repetition can—and often does—make a small difference. People mostly do not blindly follow rules and just behave, but they act to make a difference for themselves and others. Over time, forms therefore change and take on different meanings. In this view, action and form co-constitute each other in endless circles (Arendt 1998; Joas 1996; Boltanski 2011).

Rather than trying to escape this circularity, hermeneutics embraces it, claiming that the humanities must move in the same circles. In other words, the social sciences, as part of the humanities, must search for interpretations of the meanings of previous actions. This methodology is followed intuitively, for example, when reading a text. You take in the meaning of individual words, then the sentence of which they form part, and this in turn changes your original understanding of the meaning of the words. The same process is repeated for any larger semantic unit to which the words and phrases belong. Understanding a chapter, a book (or a film), a genre, an era, and so on, makes it possible to go back to a single word and realise a different meaning. These endless back-and-forth movements are known as hermeneutic circles. Classical hermeneutics, similar to classical phenomenology, conceived meaning as something purely ideational that can become attached to material objects but only due to human attribution.

This position resulted in the understanding that quantification can only deal with external things that have primary qualities such as extension, shape, strength, number, and degree of mobility, and that due to these qualities they can be counted, measured, and experimented with. In contrast, the ideas of the mind have none of these qualities and therefore do not immediately belong to the realm of quantification. In his seminal book The Crisis of European Sciences and Transcendental Phenomenology, Edmund Husserl (1970) critically diagnosed a loss of reality due to “Galileo’s mathematization of nature” (Husserl 1970, 23–59).

This old fear of losing the true meaning of life and the essence of nature through the penetration of quantification into all areas of knowledge production is alive and well today. But so is the fascination with the possibilities that only numbers and mathematics allow. This ambivalence, the simultaneity of anxiety and fascination, has become more acute with the increased importance of digital metrics in shaping contemporary lifeforms around the globe, including the increasing disparities between social spaces, countries, and continents.

The contributors to this book argue that the anxiety and fascination with the growing importance of digital metrics is not the result of clearly separated practices of knowledge, one of which is about to make the other redundant. Rather, they move closer to concrete instances of practice, looking at the details and examining if and how metrics and hermeneutics are intertwined. Two of the authors are more explicit about the difficulty of juxtaposing metrics and hermeneutics. In her chapter, Helen Robertson shows how numbers and meanings are intertwined in ways that raise fundamental doubts about the possibility of a strict separation of hermeneutics and metrics, let alone the possibility of replacing one with the other. Human medical experts around the world can, for instance, distinguish a benign from a malignant tumour. This is because, after the appropriate medical training, they all possess the same relevant concept for diagnosing a tumour. They hence understand the meaning of “malignant” and “benign” and can apply the words appropriately. Computers and the machine learning models that run on them can also be used to distinguish benign from malignant tumours based on measurements, and can thus assist the medical expert’s decision-making process. However, Robertson shows that there is an important difference between the human medical expert and the computational model. The model does not understand the meaning of benign and malignant, but simply assigns certain measurements to the benign category and others to the malignant category. More specifically, Robertson shows that, even on the least demanding account of what it is for a human to possess a concept, machines cannot be said to possess any concepts. The capability to possess a concept, that is, to understand meaning, is limited to human beings. It cannot be delegated to digital machines that only measure, count, compare, and calculate.
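
To make the contrast concrete, here is a minimal sketch of such a model, written in Python with scikit-learn (an assumption of convenience; Robertson’s chapter does not prescribe any particular toolkit). The library’s built-in breast cancer dataset happens to consist of exactly the kind of tumour measurements discussed above, and the model does nothing but map those measurements to one of two labels:

    # A minimal sketch, assuming Python with scikit-learn installed. The model
    # only assigns measurements to label 0 or 1; "malignant" and "benign" are
    # names we attach to its output, not concepts the model possesses.
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    data = load_breast_cancer()  # 569 tumours, 30 numeric measurements each
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, random_state=0)

    model = LogisticRegression(max_iter=5000)
    model.fit(X_train, y_train)  # fits a boundary between the two label classes

    print("accuracy:", model.score(X_test, y_test))
    print("first prediction:", data.target_names[model.predict(X_test[:1])[0]])

However accurate the printed output, nothing in the fitted model corresponds to an understanding of what a tumour, let alone malignancy, is.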

Helen Verran’s chapter demonstrates that numbers, the basis of metrics, are already socially and culturally embedded. She examines the linguistic and arithmetic practices facilitated by two distinct numerical systems that emerged historically among the Yoruba people: one associated with trans-Saharan trade, using cowrie shells as currency; the other with British colonisation and schooling. Recognising that the two systems, with their different numerical forms, rest on different commitments, carry different social values, and enact different relationships, the chapter asks how the two numerical systems relate to each other. In answering this question, the chapter refutes the common belief that numbers and mathematics—as they are known and taught in schools and universities around the world—stand outside any sociocultural fabric and carry reality itself in their forms.

4 Metrics

Over the last few decades, scholars of the history and philosophy of science and technology have examined the implications of numbers and calculative practices for the construction of knowledge. It is not surprising that interest in this topic arose in countries where quantification first emerged as a distinct and important area of practice encompassing all social sectors, notably in Great Britain, the United States, France, and Germany. Research on the history of statistics highlights how this form of knowledge enables a new way of grasping and shaping the social world by being premised on a particular notion of objectivity (Porter 1986, 1995; Desrosières 1998). As Hacking (1975; 1990) shows, since the nineteenth century statistics and its probabilistic forecasting were fundamental to the rise of the modern nation state and its practices of governance in Europe and North America. Studies of the history of the notion of objectivity as it has emerged in modern public, political, and bureaucratic life have revealed how numbers have come to signify an almost taken-for-granted understanding of impartial and objective knowledge (Daston and Galison 2007). To highlight this point, Theodore Porter has coined the phrase “mechanical objectivity” (Porter 1995).

This work is complemented by research on accounting and auditing that reveals the growing importance of numbers in everyday life as well. Further developing Foucault’s notion of governmentality (Foucault 2006; 2006a), scholars argue that accounting is not a purely technical and neutral practice, as it fosters forms of disciplinary power (Miller, Hopper and Laughlin 1991; Miller 2001; Hopwood and Miller 1994; Espeland and Vannebo 2007). Social studies of accounting argued that during the 1980s countries at the forefront of neoliberal reforms were facing an “audit explosion” (Power 1997). The drive to compare public service delivery in order to assess its relative cost-efficiency led to stricter practices of auditing, which in turn served to test the legitimacy of governing practices (Poovey 1998; Carruthers and Espeland 1991). Like the literature on statistics, the literature on accounting raises the concern that the increasing importance of quantitative evidence has created a situation in which only those operations that are quantified are taken into account at all, and that in this process long-term goals have largely been lost from view. Narrative modes of knowing, so this literature argues, receive less attention even though they are deeply entwined with quantitative modes of knowing (Espeland and Sauder 2007; 2016; Morgan 2022).

Social studies of standardisation provide another valuable framework for considering the role of numeric representation in governance and everyday life. They show how technologies of quantification and formal representation (mathematical formulae and models, charts, graphic depictions) become indispensable in processes of modern rationalisation (Berg 1997; Bowker and Star 1999; for organisation theory, see also Brunsson and Jacobsson 2000; Morgan 2012).

The literature on statistics, probabilism, objectivity, accounting, and standardisation also shows that the production of numerical knowledge depends not only on its own relevant methodologies and research technologies. It also requires social, political, economic, and legal support, including networks of scientific peer support, financial resources, legal approval, and political recognition. Some of these supportive networks gradually become institutionalised and provide the basis for public trust in numeric representation (Shapin and Schaffer 1985; Porter 1986; Bloor 1991). Some social studies of quantification further emphasise that the meaning, purpose, and intended and unintended effects of institutionalised numerical representations differ across social domains, and therefore require careful empirical analysis of the proliferation of numbers and computational protocols within their social contexts (Callon, Millo and Muniesa 2007; MacKenzie, Muniesa and Siu 2007; Mugler 2018; Didier 2020; Didier 2021). In sum, at least since the 1990s, there has been a well-established and still growing body of scholarship that examines the expansion of metrics in all spheres of social life, while at the same time questioning the independence of metrics from hermeneutics and vice versa.

5 Doing Metrics

As elaborated in the previous section, it is generally accepted that modernity is inconceivable without numeric representation. Statistics and accounting have emerged as key forms of knowledge production and technologies of governance of industrialised states; probability theory, random sampling, market ideology, and the democratic welfare state have collectively co-evolved around the notion that independent agents choose freely and yet—in aggregate—predictably (Krüger, Daston, and Heidelberger 1987). From its beginnings, modernity created an affinity between governance and evidence accessible to the public, as Foucault’s (1973) work authoritatively demonstrates by spelling out Friedrich Nietzsche’s programmatic assertion.

In order to have that degree of control over the future, man must first have learnt to distinguish between what happens by accident and what by design, to think causally, to view the future as the present and anticipate it, to grasp with certainty what is end and what is means, in all, to be able to calculate, compute—and before he can do this, man himself will really have to become reliable, regular, necessary, even in his own self-image, so that he, as someone making a promise […], is answerable for his own future! (Nietzsche 2006, 36)

The necessary evidence implied in this capacity to become answerable for one’s own future turned more numeric during the twentieth century. In contemporary democracies, most relevant questions—like where to build a road, a railway, a school, a hospital, a waste disposal site, a nuclear power station, or how and for what ends to use taxes, or who should be qualified to receive a credit—invite answers based on numeric evidence.

Through quantification, the world becomes knowable at a distance, neatly compartmentalised, and ordered. Things that at first appeared incommensurable can be made commensurable. While one always loses some aspects of the reality in question through numeric representation, one equally gains others that were invisible before quantification. The discovery of this can be attributed to the early French statisticians who at the beginning of the nineteenth century were greatly concerned with normality and deviance (Hacking 1990, 64–104).

Numeric representation lends itself to the generation of comparisons and rankings of known phenomena, but it also allows us to re-arrange data originally collected for a particular purpose into endless new configurations that enable the detection of previously unanticipated interconnections. Established forms are mostly simple, unambiguous, and seemingly easy to prove and understand. Once quantifications are established, they successfully hide the theoretical and normative assumptions inscribed into them. Contrary to much of their public image, forms of quantification do not mirror reality; they are the product of a series of interpretive decisions about what to quantify, how to categorise, and how to label things. The more diverse and less countable the phenomenon being quantified, the more difficult and cumbersome these decisions are. New quantifications always rely on previous ones and are thus shaped by their logic and the kinds of data they generated.

Quantification also needs substantial resources and these depend on what governments and private organisations consider worth knowing in numeric form. What ends up being quantified, and thus encoded in particular ways, is often the product of what is understood as being problematic by relevant and influential publics. The very act of numeric representation constrains the kinds of information that are available. While selected problems and their connections that previously were hidden are made visible through refined modes of quantification, others remain concealed or become even more invisibilised by the dominant numeric representations. However, the power of numbers has reached a point where attempts to question dominant quantifications are themselves often presented in numeric form (Merry 2006; 2011; Hetherington 2011; Bruno, Didier, and Prévieux 2014). For the argument of this introduction, it is important to outline those operations of quantification that are conventionally contrasted with hermeneutics, yet should rather be interpreted as interwoven with it—as I propose here.

Central to any system of quantification are commensuration and comparison (Espeland and Stevens 1998; Heintz 2010). In order to collect data, it is essential to make things commensurable: to decide on a principle of similarity so that things can be grouped or classified, counted, and calculated. Only then can data be transformed into information, and then information into knowledge. The starting point is often a list or series of items that are easy to count. A logical first step is to establish some equivalence between all the items on the list, including variations that are not on the list but could potentially exist. This requires finding a commonality—a shared characteristic—between the individual cases and ignoring the differences (Desrosières 1998, 10–11). Once this construction is accepted, as in the case of the idea of the “average man,” the common feature becomes a real thing. In this sense, we are dealing with a fundamentally performative practice which as such has predictive power (Osborne and Rose 2003; Didier 2020). By establishing such equivalences, categorisations (also known as classifications or taxonomies) are created. These, in turn, are defined and organised into a system of multiple categorisations, so that all things that seem relevant fall into one category or another, ideally mutually exclusive and together all-encompassing.

The creation of categories for the purpose of statistics and governance is, after all, an arena of significant interpretive work, shaped by pre-existing categories, theoretical concerns, and practical purposes. Debates recur in the history of statistical classifications about “a sacrifice of inessential perceptions; the choice of pertinent variables; how to construct classes of equivalence; and last, the historicity of discontinuities” (Desrosières 1998, 239). Taxonomies bring together things that do not necessarily belong together and attach a common label to them so that they constitute a single category. Moreover, each category has to be usable in all future situations in which the taxonomy is meant to order things in a meaningful way and thus has to be even more abstract than a given context already requires.

For the categories to be useful, they must be populated by individual cases that in turn need to be encoded into them. The encoding process refers to the decision to attribute an individual case to a particular class. For the argument about the unavoidable interlacing of metrics and hermeneutics it is important to emphasise that the act of classification is hermeneutic work that needs to downplay certain aspects of the case and highlight others. Here cultural, normative, social, political, and technical dimensions play important roles that are quite independent of the object to be encoded (Mervis and Rosch 1981). In the end it appears as if objectively given things were simply sorted out and quantified, when in fact these things only become real in a certain way as a result of having been encoded. Over time and with use, they become more established and accepted as unquestionably real. “When the actors can rely on objects thus constructed, and these objects resist the tests intended to destroy them, aggregates do exist—at least during the period and in the domain in which these practices and tests succeed” (Desrosières 1998, 101). In other words, it is their institutionalisation that makes metric aggregates real and thus trustworthy.
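
The interpretive character of encoding can be illustrated with a toy example, sketched here in Python with entirely hypothetical cases and cut-off points: the coding rule, not the case itself, decides what the case becomes.

    # A toy sketch of encoding: heterogeneous cases are made equivalent by an
    # interpretive rule and only then become countable members of categories.
    from collections import Counter

    people = [
        {"name": "A", "age": 17, "hours_worked": 0},
        {"name": "B", "age": 34, "hours_worked": 6},
        {"name": "C", "age": 35, "hours_worked": 40},
        {"name": "D", "age": 71, "hours_worked": 12},
    ]

    def encode(person):
        # The thresholds are decisions, not properties of the person: counting
        # eight hours a week as "employed" downplays everything else about B.
        status = "employed" if person["hours_worked"] >= 8 else "not employed"
        bracket = "adult" if 18 <= person["age"] < 65 else "minor/senior"
        return (bracket, status)

    aggregate = Counter(encode(p) for p in people)
    print(aggregate)  # once tabulated, the categories begin to look like facts

Note how the "minor/senior" bracket groups a seventeen-year-old with a seventy-one-year-old: the taxonomy brings together cases that do not necessarily belong together and attaches a common label to them.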

Perhaps the most important part of this stabilisation through institutionalisation is the increasingly dense, all-encompassing, and complex web of cross-references between numeric forms of world-making. While one particular form of quantification might try to be as comprehensive and differentiated as possible towards a particular issue—such as health, unemployment, suicide, poverty, air pollution, or racial discrimination—it unavoidably cross-references several other issues and strongly depends on the availability of those metrics. This means that a particular instance of a metric representation, such as one depicting the health of a population, is heavily dependent on the availability of several other metric representations, such as vital statistics, income, education, infrastructure, environmental aspects, workplace studies, and so on. Taken together, these different metric representations are much more robust and useful than they would be on their own. This is one of the key differences between countries with scattered measurement networks and countries with dense measurement networks.

Whether scattered or dense, a self-stabilising web of numeric representations is distinguished by its specific and intended shallowness. It is a web of “thin descriptions” (Porter 2012). As such it contrasts with the web of narrative representations of reality, characterised by its intended depth. Those are webs of “thick descriptions” (Geertz 2017). However, as I try to show in this introduction, thick and thin descriptions each enfold the other, without ever becoming completely subsumed or purified from each other. In the next section I delve into the details of quantification to strengthen this insight.

6 Forms of Digital Metrics

Metrics, as shown in the previous section, is never neutral, but always a form of technopolitics. Neoliberal governance as it emerged in the 1980s—first in North America and the UK, now everywhere—introduced a specific form of metric technopolitics. The innovation was not about unleashing seemingly eternal and natural market mechanisms that had been restrained by the state. It was about introducing measurements that created new market mechanisms. One of the main forms of neoliberal measurement, known as “benchmarking,” aims to improve performance by comparison through indicators in contexts where there are no conventional market mechanisms to perform the same function (Bruno and Didier 2013; Mennicken and Espeland 2019; Mennicken and Salais 2022; Guter-Sandu and Mennicken 2022).
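
As a toy illustration (hypothetical units, indicator values, and weights, sketched in Python), benchmarking replaces a market price signal with a ranking of each unit against the best performer on each indicator:

    # A toy benchmarking sketch: performance indicators substitute for market
    # signals by ranking units against the best observed value per indicator.
    hospitals = {  # hypothetical indicator values per unit
        "A": {"cost_per_patient": 900, "readmission_rate": 0.12},
        "B": {"cost_per_patient": 750, "readmission_rate": 0.18},
        "C": {"cost_per_patient": 820, "readmission_rate": 0.09},
    }

    def score(unit):
        # Normalise each indicator against the best (lowest) observed value.
        # The equal weighting of the two indicators is itself a decision.
        best_cost = min(h["cost_per_patient"] for h in hospitals.values())
        best_readm = min(h["readmission_rate"] for h in hospitals.values())
        return (0.5 * best_cost / unit["cost_per_patient"]
                + 0.5 * best_readm / unit["readmission_rate"])

    ranking = sorted(hospitals, key=lambda n: score(hospitals[n]), reverse=True)
    print("benchmark ranking:", ranking)

The choice of indicators and weights does the political work here; the ranking then circulates as if it were a neutral market verdict.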

Benchmarking through digital metrics is officially linked to the idioms of subsidiarity, self-monitoring, self-auditing, and self-responsibility. In a discourse of supposedly increased civil liberties, control became largely a matter of self-control and was interpreted as a shift away from old structures of domination that privileged the few towards more democracy, freedom of choice, participation, and transparency. But as it turned out a few decades later, the resulting new order dramatically increased inequalities within and between states around the globe (Piketty 2014). And, as some scholars argue, since the new millennium new technological and other developments seem to have begun to reshape, or even end, the neoliberal era.

Even in the neoliberal era, not all metrics were driven by the logic of benchmarking. Other forms of metrics continued to operate and new ones began to emerge. To understand the mechanisms at work, it is again necessary to pay attention to concrete practices and to differences between fields and sites of practice around the world. One of the more important dimensions of neoliberal reforms is that a large proportion of metrics is no longer produced by state institutions but by private or at least independent agents. To some extent this is related to benchmarking and performance-based funding, as in the case of privatised railways, postal services, telecommunications providers, other utilities, hospitals, libraries, universities, and sometimes even prisons. On the other hand, a new and opposite trend is emerging as a result of the same neoliberal intervention. Contrary to the neoliberal programme that sought to turn citizens into self-entrepreneurial subjects, contemporary forms of digital metrics reinvent the possibilities of centralised action. The central actor does not necessarily have to be the state; it can be a private corporation, an international organisation, or a foundation. Often it is a private firm subcontracted by a state institution, which thereby becomes fully dependent on the firm’s development and maintenance of the software that does the digital metrics for the state.

For the argument of this introduction, the crucial point of currently emerging forms of digital metrics is the creation of an “experimenter” who can analyse a “population” to objectify its behaviour and make it more predictable—as has been the case with conventional statistics since its inception. In this seemingly familiar context, however, new forms of digital metrics have come to the fore. One of these is the design of sociopolitical interventions as controlled experiments. They start from a known and given state of affairs that is seen as problematic and move towards identifying previously unknown relationships. The HIV/AIDS pandemic became the prototype of this form. Initially, only the symptoms were known, but later statistical analyses revealed previously unknown correlations between the symptoms and social variables. This helped guide medical research into the causes of the symptoms and led to the identification of the virus. Once a potential treatment was identified in the laboratory, it was quickly applied on a large scale before going through the previously established and accepted medical procedures to measure efficacy and safety. The chronology from trial to treatment was partly suspended so that the treatment remained part of the trial. This particular approach was accepted as the norm in many fields beyond the HIV crisis (Rottenburg 2009). It was further institutionalised in attempts to contain the 2014 Ebola crisis in Sierra Leone, Liberia, and Guinea, and quietly established as the gold standard during the COVID-19 pandemic.

Another version of the intervention-as-trial is used when the problem, its causes and the desired end are known, but the means of achieving the end are not. In most cases in this category, the means are desired changes in the behaviour of relevant groups. The relevant population, or a sample of it, is divided into two statistically equivalent groups, and a particular incentive for behaviour change is tested. For example, one village receives a water treatment system and the other does not. Differences that emerge after the defined trial period are then attributed to the presence or absence of the incentive. The Abdul Latif Jameel Poverty Action Lab at MIT provides good examples of this approach.
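
A stylised simulation (all numbers invented, sketched with Python’s standard library) shows the arithmetic behind such a trial: randomisation makes the two groups statistically equivalent, so the difference in mean outcomes is read as the effect of the incentive.

    # A minimal simulation of an intervention-as-trial with hypothetical numbers.
    import random
    import statistics

    random.seed(1)
    villages = list(range(40))
    random.shuffle(villages)                       # randomisation step
    treatment, control = villages[:20], villages[20:]

    def outcome(treated):
        # Stand-in outcome, e.g. illness episodes per year; the assumed effect
        # of the water treatment system (-2.5) is baked in for illustration.
        return random.gauss(10, 2) - (2.5 if treated else 0.0)

    treated_mean = statistics.mean(outcome(True) for _ in treatment)
    control_mean = statistics.mean(outcome(False) for _ in control)
    print(f"estimated effect: {control_mean - treated_mean:.2f} fewer episodes")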

A key difference between large-scale modernist government interventions designed to make the world a better place and these newer interventions, which are run as experiments, is the way in which they conceive of the future. Based on the narrative of progress, high modernity interventions assumed that the future was partly indeterminate and therefore malleable. They aimed to make the future not only better but also more predictable by shaping it. In contemporary interventions, faith in the ability to shape the future has been replaced by a heightened and nervous awareness of the risks involved. It is largely for this reason that some interventions are designed as experiments to generate the data needed to correct the next phase of intervention, which is itself designed as an experiment (Ezrahi 1990). And yet some of these experimental interventions raise fears that they might grow to be out of control. Voices calling for political regulation based on ethical principles are becoming louder and more influential. They question the legitimacy of interventions that may have unpredictable consequences and demand that only those interventions should be pursued whose consequences can be reversed in the future (Jonas 1979; Jasanoff 1994; 2020). Digital metrics, which is always part of the problem and part of the solution, is needed to assess the relevant probabilities for both positions.

One important form of digital metrics remains largely unaffected by the spread of experimentality, though. It is mostly employed when a problem arises that seems expansive, pervasive, and hard to delineate. The exact articulation of the problem, its expressions, its causes and effects, and ultimately its remedies are unknown. A quintessential example of this was the premonition of climate change, which held a controversial place in the debates of the 1980s, although the scientific controversy was in fact closed during that decade, mostly due to successful metrics. Since then climate change, like poverty alleviation, has been among those issues where the causes, effects, and even the remedies are well known. Although most of these remedies are non-controversial in terms of their causal relevance (like the reduction of energy gained from fossil fuels), they are still hard to translate into workable and globally enforceable interventions, because they demand substantial changes to the dominant lifeform, particularly in the industrialised countries. Working out these interventions requires more metrics.

Searching not for specific cause-and-effect relations or statistical regularities that follow from a hypothesis, but for radical changes to the contemporary dominant lifeform, is an open exploration. It is mainly about discovering surprising and potentially useful patterns in a huge pile of big data, without anticipating what those patterns might be. Hope is invested in previously completely unknown escape routes. This is one of the practical situations in which conventional metrics morphs into digital metrics associated with big data, data mining, and machine learning. In this kind of open exploration, digital metrics seems less like a tool of governance and more like a version of basic science. It is often run by centralised, permanent infrastructures that are set up for a specific problem but with an open exploration focus. They collect, aggregate, and correlate large, heterogeneous sets of digital data and develop new models with open, non-experimental research questions. The National Oceanic and Atmospheric Administration (NOAA n.d.) is one example and the European Centre for Disease Prevention and Control (ECDC n.d.) is another, among many.
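
The difference from the experimental designs above can again be sketched in a few lines (assuming Python with scikit-learn; a toy stand-in for industrial-scale data mining): no hypothesis goes in, and whatever grouping comes out still has to be interpreted.

    # A minimal sketch of open exploration: search unlabelled data for patterns
    # without anticipating what they might be. KMeans stands in here for far
    # larger data-mining and machine learning pipelines.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    data = np.vstack([                     # hypothetical, unlabelled records
        rng.normal(loc=0.0, scale=1.0, size=(100, 3)),
        rng.normal(loc=5.0, scale=1.0, size=(100, 3)),
    ])

    model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
    print("cluster sizes:", np.bincount(model.labels_))
    # The algorithm returns groupings; what they mean is left to interpretation.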

Open explorations are increasingly also run by a rather different type of emerging infrastructure that is decentralised, non-hierarchical, participative, and facilitated by the rapid development of the Internet and the increasing speed and capacity of computing. Web2 platforms facilitate mass participation by ordinary citizens according to the wiki principle, generating a new type of information gathering and new opportunities for quantification. A successful example of this type is the “Extreme Citizen Science Blog” (n.d.), housed by the Department of Geography at University College London (UCL). While these new forms of lay expertise in processes of digital metrics are partly an expression of an increasing scepticism towards the type of institutionalised numeric expertise implied in political decision-making, they simultaneously reinforce the power of digital metrics and contribute to its spread across all sectors of life.

7 Metrics and (De)Centralisation

Digital metrics is often celebrated for its improved calculations of probability for all sorts of developments—from the unfolding of a pandemic, through environmental and climate changes, to economic and demographic transformations and all kinds of social phenomena, including positive effects of digital technologies and robotics on society. Forecasting has always been about extrapolating patterns from the past to the future. Without this endeavour, our everyday routines would collapse, and all efforts to be prepared for troubles like earthquakes, floods, financial crises, wars, and epidemics would be pointless. Over the past decades, radical changes in data processing speeds and data storage capacities have made a huge difference, and any celebration of these developments has come to be accompanied by disapproval.

The endless and often random gathering of big data has quickly found multiple deployments in science, business, and governance. Much of this is related to new forms of surveillance facilitated by digital metrics. Most strikingly, though, much of this is powerfully driven by market mechanisms and the financialisation of capitalism. Customer profiling, crime prevention, migration control, infectious disease control, platform economies, automated stock exchange trading, market prognoses, and financial tools—all work with big digital data, machine learning, and artificial intelligence, to identify patterns and prognosticate developments that trigger automated or human interventions. The very distinctions between basic science, applied science, governance, and market mechanisms have lost much of their clarity. This new state of affairs has huge implications not only for relations between the state, business, and civil society, but also for the unequal relations between rich and poor countries around the world.

A politically important and remarkably centralised programme, defined and run by the United Nations, is called the Sustainable Development Goals (SDGs) and has the stated aim of overcoming inequalities between rich and poor countries. Unlike its predecessor, the Millennium Development Goals (MDGs), the new programme and its seventeen SDGs are fully metricised. Not only do they set targets for seventeen key areas of intervention, but they also define the precise measurements that will be used to assess the impact of all the myriad meliorative interventions. This means that the primary objective of the UN programme is first to put in place the necessary infrastructure to carry out the measurements according to the defined standards. It is no exaggeration to say that the SDGs are, in fact, a vast programme of metricisation. Its primary goal is to make things comparable across space and time.

A key problem of this aim is the simultaneity of centralising and decentralising effects of digital metrics. On the one hand, digital infrastructures are characterised by the fact that they do not depend on a single centre, but are deliberately run through decentralised infrastructures (Hecht and Edwards 2007; Edwards 2010). As mentioned earlier, this also means that while the core elements of the infrastructure may be located in the tech-centres around the world, they can be applied for local purposes in any particular part of the world. Similarly, while social media have their technological, financial, and managerial centres, they are also used for more local purposes around the globe, far from these centres (Lamoureaux et al. 2021).

On the other hand, key elements of defining equivalences, categories, and codifications are located in the centres of calculation, and powerfully shape and constrain the possibilities that remain in local contexts. The obvious examples of this kind of centralisation—such as Amazon, Alphabet (Google), Apple, Microsoft, Meta and the platforms for which they offer the background service—are notable for their enormous size, global scale, degree of monopolisation, impact on local structures, and tax evasion strategies that dwarf the powers of nation-state regulation (Srnicek 2017). The opportunities and pathways for capitalism to move beyond extraversion and promote profitable businesses that contribute to increased welfare, not only in the centres but also on the African continent, will largely depend on how the mechanisms of digital centralisation and decentralisation work (Breckenridge 2018).

As with the SDGs, any global survey must handle the challenge of developing categories that can travel across various borders—infrastructural, legal, political, social, and cultural—and create commensurability across a multiplicity of relevant cases. This creates a dilemma: the survey categories need to be translated into local terms in order to accurately quantify local ideas and behaviours. Yet to allow comparisons across borders, the categories must still refer to the same thing wherever they are used, even if the phenomenon being quantified manifests itself differently in different places. In order to understand how such categories are formed, it is essential to examine the process of their creation, in other words, the practices, templates, actors, and networks that collectively constitute the expertise to draw up such categories.

The authors of the chapters in this volume do not assume that they already know how the dynamics of centralisation and decentralisation will or should develop in the future with regard to local forms of self-determination. Rather, they understand this as an open empirical question and therefore invest their energy in gaining more detailed insights through their praxiographies.

Summarising one aspect of her pioneering study of multiple linguistic and arithmetic practices among the Yoruba people of Nigeria (Verran 2001), Helen Verran demonstrates that even mathematics and its numbers can operate in different forms and are therefore not inherently standardised. At a similarly fundamental level, Helen Robertson argues that the mathematical models of machine learning can achieve high levels of accuracy in classifying entities, but they still do not possess—in other words, understand—the relevant concept required for classification. This insight implies that centralisation and standardisation can hardly be separated from human understanding of meaningful concepts that make sense in particular local contexts, and thus remain fluid.

On a more empirical level, René Umlauf provides a disturbing insight into the practice of training a machine learning model to classify. The work is carried out by computer science students at a university in Uganda, whose role is reduced to the simple task of training the machine to repeat standardised attributions. At a similar empirical level, Emma Park and Kevin P. Donovan offer a more optimistic insight into digital forms of centralisation that are more in line with Helen Robertson’s findings. They show that local variation is still possible and more powerful than is often feared. We learn that the digital apparatus centrally designed to regulate lending through standardisation does not necessarily weaken local forms of social relations and mutual support. On the contrary, users are finding creative new ways to combine digital lending with their existing social obligations. Véra Ehrenstein elaborates how forest metrics—beyond its centrally standardised methods, technologies, and funding arrangements—still rely heavily on decentralised local practices. Jonathan Klaaren’s chapter explains how the right of access to information as a form of local empowerment depends on digital forms of standardisation and centralisation. He argues that centralisation and decentralisation do not simply contradict each other but unfold dialectically in the contemporary sociopolitical order of post-apartheid South Africa.

Acknowledgements

I am indebted to Faeeza Ballim and Bronwyn Kotzen for their critical and encouraging comments on earlier versions of this text. Two anonymous reviewers helped to make the argument clearer and more accessible. I am particularly grateful to Emmanuel Didier, who took the trouble to read the text carefully and to offer substantial critical advice. The remaining weaknesses in the text are, of course, my own.

Bibliography

  • Abdul Latif Jameel Poverty Action Lab, MIT. n.d. http://www.povertyactionlab.org. Accessed July 08, 2023.

  • Arendt, Hannah. (1958) 1998. The Human Condition. Chicago: University of Chicago Press.

  • Bail, Christopher A. 2021. Breaking the Social Media Prism: How to Make our Platforms Less Polarizing. Princeton: Princeton University Press.

  • Berg, Marc. 1997. Rationalizing Medical Work: Decision-support Techniques and Medical Practices. Cambridge, MA: MIT Press.

  • Bloor, David. (1976) 1991. Knowledge and Social Imagery. Chicago: University of Chicago Press.

  • Boltanski, Luc. 2011. On Critique: A Sociology of Emancipation. Cambridge: Polity Press.

  • Bowker, Geoffrey and Susan Leigh Star. 1999. Sorting Things Out: Classification and its Consequences. Cambridge, MA: MIT Press.

  • Breckenridge, Keith. 2014. Biometric State: The Global Politics of Identification and Surveillance in South Africa, 1850 to the Present. Cambridge: Cambridge University Press.

  • Breckenridge, Keith. 2018. “The Global Ambitions of the Biometric Anti-bank: Net1, Lockin and the Technologies of African Financialisation.” International Review of Applied Economics 33, no. 1: 93–118.

  • Bruno, Isabelle and Emmanuel Didier. 2013. Benchmarking: l’État sous pression statistique. Paris: Éditions La Découverte.

  • Bruno, Isabelle, Emmanuel Didier, and Julien Prévieux. 2014. Statactivisme. Comment lutter avec des nombres. Paris: Zones.

  • Brunsson, Nils and Bengt Jacobsson, eds. 2000. A World of Standards. Oxford: Oxford University Press.

  • Callon, Michel, Yuval Millo, and Fabian Muniesa. 2007. Market Devices. Sussex: Wiley Blackwell.

  • Carruthers, Bruce G. and Wendy Nelson Espeland. 1991. “Accounting for Rationality: Double-Entry Bookkeeping and the Rhetoric of Economic Rationality.” American Journal of Sociology 97, no. 1: 31–69.

  • Daston, Lorraine and Peter Galison. 2007. Objectivity. New York: Zone Books.

  • Desrosières, Alain. 1998. The Politics of Large Numbers. A History of Statistical Reasoning. Cambridge, MA: Harvard University Press.

  • Didier, Emmanuel. 2020. America by the Numbers: Quantification, Democracy, and the Birth of National Statistics. Cambridge, MA: MIT Press.

  • Didier, Emmanuel. 2021. Quantitative Marbling. Anton Wilhelm Amo Lectures 7, edited by Matthias Kaufmann, Richard Rottenburg and Reinhold Sackmann. Halle (Saale): Martin-Luther University.

  • Edwards, Paul N. 2010. A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming. Cambridge, MA: MIT Press.

  • Espeland, Wendy Nelson and Mitchell L. Stevens. 1998. “Commensuration as Social Process.” Annual Review of Sociology 24: 313–43.

  • Espeland, Wendy Nelson and Berit Irene Vannebo. 2007. “Accountability, Quantification and Law.” Annual Review of Law and Social Science 3, no. 1: 21–43.

  • Espeland, Wendy Nelson and Michael Sauder. 2007. “Rankings and Reactivity: How Public Measures Recreate Social Worlds.” American Journal of Sociology 113, no. 1: 1–40.

  • Espeland, Wendy Nelson and Michael Sauder. 2016. Engines of Anxiety: Academic Rankings, Reputation, and Accountability. New York: Russell Sage Foundation.

  • European Centre for Disease Prevention and Control (ECDC). n.d. https://www.ecdc.europa.eu/en. Accessed July 08, 2023.

  • Extreme Citizen Science Blog. n.d. https://uclexcites.blog. Accessed July 08, 2023.

  • Ezrahi, Yaron. 1990. The Descent of Icarus: Science and the Transformation of Contemporary Democracy. Cambridge, MA: Harvard University Press.

  • Foucault, Michel. (1966) 1973. The Order of Things: An Archaeology of the Human Sciences. New York: Vintage Books.

  • Foucault, Michel. (2004) 2006. Sicherheit, Territorium, Bevölkerung: Vorlesung am Collège de France, 1977–1978. Geschichte der Gouvernementalität I. Frankfurt am Main: Suhrkamp.

  • Foucault, Michel. (2004) 2006a. Die Geburt der Biopolitik: Vorlesung am Collège de France, 1978–1979. Geschichte der Gouvernementalität II. Frankfurt am Main: Suhrkamp.
  • Future of Life Institute. 2023. Pause Giant AI Experiments: An Open Letter. https://futureoflife.org/open-letter/pause-giant-ai-experiments/.

  • Garfinkel, Harold. (1967) 2012. Studies in Ethnomethodology. Englewood Cliffs: Prentice-Hall.

  • Geertz, Clifford. (1973) 2017. The Interpretation of Cultures: Selected Essays. New York: Basic Books.

  • Guter-Sandu, Andrei and Andrea Mennicken. 2022. “Quantification = Economization? Numbers, Ratings and Rankings in the Prison Service of England and Wales.” In The New Politics of Numbers: Utopia, Evidence and Democracy, edited by Andrea Mennicken and Robert Salais, 307–36. Cham: Springer International Publishing.

  • Hacking, Ian. 1975. The Emergence of Probability. A Philosophical Study of Early Ideas About Probability, Induction and Statistical Inference. Cambridge: Cambridge University Press.

  • Hacking, Ian. 1990. The Taming of Chance. Cambridge: Cambridge University Press.

  • Hecht, Gabrielle and Paul N. Edwards. 2007. The Technopolitics of Cold War: Toward a Transregional Perspective. Washington, D.C.: American Historical Association.

  • Heintz, Bettina. 2010. “Numerische Differenz. Überlegungen zu einer Soziologie des (quantitativen) Vergleichs.” Zeitschrift für Soziologie 39, no. 3: 162–81.

  • Hetherington, Kregg. 2011. Guerrilla Auditors: The Politics of Transparency in Neoliberal Paraguay. Durham, NC: Duke University Press.

  • Hopwood, Anthony and Peter Miller. 1994. Accounting as Social and Institutional Practice. Cambridge: Cambridge University Press.

  • Husserl, Edmund. (1934–37) 1970. The Crisis of European Sciences and Transcendental Phenomenology. An Introduction to Phenomenological Philosophy. Evanston: Northwestern University Press.

  • Jasanoff, Sheila. 1994. The Fifth Branch: Science Advisers as Policymakers. Cambridge, MA: Harvard University Press.

  • Jasanoff, Sheila. 2020. “Ours Is the Earth: Science and Human History in the Anthropocene.” Journal of the Philosophy of History 14, no. 3: 337–58.

  • Jerven, Morten, ed. 2015. Measuring African Development: Past and Present. New York: Routledge.

  • Joas, Hans. 1996. The Creativity of Action. Chicago: The University of Chicago Press.

  • Jonas, Hans. 1979. Das Prinzip Verantwortung. Versuch einer Ethik für die technologische Zivilisation. Frankfurt am Main: Suhrkamp.

  • Krüger, Lorenz, Lorraine J. Daston, and Michael Heidelberger, eds. 1987. The Probabilistic Revolution. Cambridge, MA: MIT Press.

  • Lamoureaux, Siri, Enrico Ille, Amal Hassan Fadlalla, and Timm Sureau. 2021. “What Makes a Revolution ‘Real’? A Discussion on Social Media and Al-Thawra in Sudan.” In Digital Imaginaries: African Positions Beyond the Binary, edited by Richard Rottenburg, Oulimata Guye, Julien McHardy and Phillip Ziegler, 124–45. Bielefeld, Berlin: Kerber.

  • MacKenzie, Donald A., Fabian Muniesa, and Lucia Siu. 2007. Do Economists Make Markets? On the Performativity of Economics. Princeton: Princeton University Press.

  • Mayer-Schönberger, Viktor and Kenneth Cukier. 2013. Big Data: A Revolution that will Transform How we Live, Work, and Think. Boston: Houghton Mifflin Harcourt.

  • Mennicken, Andrea and Robert Salais, eds. 2022. The New Politics of Numbers: Utopia, Evidence and Democracy. London: Palgrave Macmillan.

  • Mennicken, Andrea and Wendy N. Espeland. 2019. “What’s New with Numbers? Sociological Approaches to the Study of Quantification.” Annual Review of Sociology 45, no. 1: 223–45.

  • Merry, Sally Engle. 2006. “Transnational Human Rights and Local Activism: Mapping the Middle.” American Anthropologist 108, no. 1: 38–51.

  • Merry, Sally Engle. 2011. “Measuring the World: Indicators, Human Rights, and Global Governance.” Current Anthropology 52, no. S3: 83–95.

  • Mervis, Carolyn B. and Eleanor Rosch. 1981. “Categorization of Natural Objects.” Annual Review of Psychology 32, no. 1: 89–115.

  • Miller, Peter, Trevor Hopper, and Richard Laughlin. 1991. “The New Accounting History: An Introduction.” Accounting, Organizations and Society 16, no. 5–6: 395–403.

  • Miller, Peter. 2001. “Governing by Numbers: Why Calculative Practices Matter.” Social Research 68, no. 2: 379–96.

  • Morgan, Mary S. 2012. The World in the Model: How Economists Work and Think. Cambridge: Cambridge University Press.

  • Morgan, Mary S. 2022. “Narrative: A General-Purpose Technology for Science.” In Narrative Science: Reasoning, Representing and Knowing Since 1800, edited by Mary S. Morgan, Kim M. Hajek, and Dominic J. Berry, 3–30. Cambridge: Cambridge University Press.

  • Mugler, Johanna. 2018. Measuring Justice: Quantitative Accountability and the National Prosecuting Authority in South Africa. Cambridge: Cambridge University Press.

  • National Oceanic and Atmospheric Administration (NOAA). n.d. http://www.noaa.gov/. Accessed July 08, 2023.

  • Nietzsche, Friedrich. (1887) 2006. On the Genealogy of Morality. Cambridge: Cambridge University Press.

  • Nyabola, Nanjala. 2018. Digital Democracy, Analogue Politics: How the Internet Era is Transforming Kenya. London: Zed Books.

  • Osborne, Thomas and Nikolas Rose. 2003. “Do the Social Sciences Create Phenomena? The Example of Public Opinion Research.” The British Journal of Sociology 50, no. 3: 367–96.

  • Piketty, Thomas. 2014. Capital in the Twenty-first Century. Cambridge, MA: The Belknap Press of Harvard University Press.

  • Poovey, Mary. 1998. A History of the Modern Fact: Problems of Knowledge in the Sciences of Wealth and Society. Chicago: University of Chicago Press.

  • Porter, Theodore. 1986. The Rise of Statistical Thinking 1820–1900. Princeton: Princeton University Press.

  • Porter, Theodore. 1995. Trust in Numbers. The Pursuit of Objectivity in Science and Public Life. Princeton: Princeton University Press.

  • Porter, Theodore. 2012. “Thin Description: Surface and Depth in Science and Science Studies.” Osiris 27, no. 1: 209–26.

  • Power, Michael. 1997. The Audit Society: Rituals of Verification. Oxford: Oxford University Press.

  • Rottenburg, Richard. 2009. “Social and Public Experiments and New Figurations of Science and Politics in Postcolonial Africa.” Postcolonial Studies 12, no. 4: 423–40.

  • Shapin, Steven and Simon Schaffer. 1985. Leviathan and the Air-pump. Hobbes, Boyle and the Experimental Life. Princeton: Princeton University Press.

  • Srnicek, Nick. 2017. Platform Capitalism. Cambridge: Polity Press.

  • Verran, Helen. 2001. Science and an African Logic. Chicago & London: The University of Chicago Press.

  • Weber, Max. (1904) 1972. “Die protestantische Ethik und der Geist des Kapitalismus.” In Gesammelte Aufsätze zur Religionssoziologie. Band I, 17–206. Tübingen: Mohr.

  • Weber, Max. (1904) 1973. “Die ‘Objektivität’ sozialwissenschaftlicher und sozialpolitischer Erkenntnis.” In Gesammelte Aufsätze zur Wissenschaftslehre, 146–214. Tübingen: Mohr.
