Chapter 5 Health Problems of Industrializing Societies

In: A History of Population Health
Author: Johan P. Mackenbach
Open Access

Abstract

Industrialization and urbanization were accompanied by a rise and then decline of many different diseases. This chapter first traces the history of a number of communicable diseases, including three intestinal infections (cholera, dysentery and typhoid), tuberculosis, syphilis, four childhood infections (scarlet fever, measles, whooping cough and diphtheria) and two respiratory infections (pneumonia and influenza). It then traces long-term trends in maternal, infant and perinatal mortality, and in three nutrient deficiencies (pellagra, rickets and goitre), peptic ulcer and appendicitis, and lung diseases caused by occupational and environmental exposures (such as pneumoconiosis, mesothelioma and the non-specific effects of air pollution). The factors involved in the ultimate decline of these diseases were many, with an important role for public health interventions. There were striking differences between European regions in the timing of the decline of health problems of industrializing societies, with Northern and Western Europe again often taking the lead.

In terms of population health, developments in Europe during the 18th and 19th centuries were not only positive. As a side effect of industrialization and urbanization, there was a rise in many diseases, which were only gradually brought under control. This chapter reviews the secular trends in a number of health conditions which first rose and then started to decline again in this period. These include a number of communicable diseases, as well as maternal and infant mortality, and some other, less easily classifiable diseases.

Communicable Diseases

Cholera, Dysentery, Typhoid

Cholera is one of the most extensively studied epidemic diseases of the 19th century. This is not only because it was such a deadly and frightening disease, but also because the experience of cholera played an important role in the genesis of public health as we know it today. It was particularly important for the acceptance of public health’s most well-known method of prevention: ‘sanitation’.1

Cholera was such a frightening disease because it caused massive vomiting and diarrhoea and could be rapidly fatal, sometimes within 12 hours after symptom onset (Plate 10). Patients lost enormous amounts of body fluid, and because dehydration changed the colour of the skin it became known as the ‘blue disease’. It is caused by infection with the bacterium Vibrio cholerae, identified as the cause of cholera in 1883 by the German physician Robert Koch (1843–1910). It spreads by the faecal-oral route, and can nowadays be prevented by a vaccine and treated with oral rehydration therapy and antibiotics, but all this knowledge lay in the distant future when cholera first struck Europe in the 1830s.

Plate 10

Cholera victims being carried away in Palermo, 1835

Like many other cities in Europe, Palermo (on Sicily) was hit by the second cholera pandemic. Cholera sometimes struck so quickly that people died on the streets. This plate shows men in uniform picking up a corpse in the street and putting it on a wagon. Print based on a drawing by Gabriele Castagnola. Wellcome Collection (CC BY 4.0)

The origins of cholera lie in the Indian subcontinent where it has been endemic for millennia. However, in the 19th century, probably as a result of an increase in long-distance travel, the disease spread to other parts of the world in six large waves. The first of these pandemics (1817–24) did not reach Europe, with the exception of the region of Astrakhan at the South-eastern border of the Russian Empire. The others did reach Europe and caused large numbers of deaths, reminding people of the plague and shaking Europeans’ confidence in progress. Epidemics spread along waterways, and later also, and more rapidly, along railways.2

The second pandemic (1829–37) reached Europe from India overland through Russia, from where it spread westwards, partly through troop movements in the ongoing Russo-Polish War. It reached Western Europe in 1831 and 1832, and caused hundreds of thousands of deaths altogether (see Suppl. Table 7). The third pandemic (1846–60) followed a similar route and reached Europe through Russia again. This was the deadliest of all European cholera epidemics, and caused more than one million deaths in Russia alone, perhaps because it coincided with a turbulent period in European political history and occurred immediately after the famines of the 1840s.3

The fourth pandemic (1863–75) followed a different route and arrived in Europe from Egypt. Its spread was again partly propagated by troop movements, in this case during the Austro-Prussian war of 1866. This was the last of the pandemics to cause large numbers of deaths in North-western Europe, because it stimulated many cities and countries to implement sanitation measures, in the form of piped drinking water and sewage systems. During the fifth pandemic (1881–96) the only major North-western European city afflicted by cholera was Hamburg, where more than 8,000 people died because the city had neglected to filter its piped drinking water effectively. However, this pandemic still caused large numbers of deaths in Southern and Eastern Europe.4

The gradual retreat of cholera continued during the sixth pandemic (1899–1923), which mainly caused large-scale mortality in Russia and South-eastern Europe, but also – less well-known, because the Italian authorities tried to hide it – struck Naples and parts of Southern Italy in 1910–11. Whereas the rest of Europe was now reasonably well protected against the disease, the sixth pandemic still caused large numbers of deaths in South-eastern Europe during the Balkan Wars, and more than half a million deaths in Russia during the Civil War.5

What caused the decline (or non-return) of cholera? As the geographical variations illustrate, the implementation of effective sanitation measures certainly played an important role, together with improvements in personal hygiene. These changes were based on a better understanding of the causes of the disease and on gradual acceptance of intervention by the state, even if this interfered with personal freedom or commercial interests.6

Success was not immediate, however, as shown by the fact that cholera had to return at least three times before effective countermeasures were taken. When cholera first spread to Europe in the 1830s, it led to the imposition of quarantines and cordons sanitaires, which had been effective in containing plague epidemics. Yet, when epidemiological observations on the effects of these measures in Russia proved inconclusive, faith in contagion and in the effectiveness of cordons sanitaires was lost.

Until the 1880s, when Robert Koch’s discovery finally convinced (almost) everyone that cholera was contagious, cholera was a scientific battleground between those who believed that it somehow spread between humans (‘contagionists’), and those who believed that cholera was not contagious but somehow emanated from the soil (‘anti-contagionists’). The latter view could long be held not only because it was difficult to unambiguously prove that cholera was contagious, but also because anti-contagionism provided an argument for economically liberal governments to oppose restrictions on free trade.7

However, it did gradually become clear that cholera spread through contaminated drinking water. John Snow’s (1813–1858) studies of cholera in London, leading to the famous removal of the handle of the Broad Street pump, were conducted during the third pandemic and published in 1855. Although his views were not immediately and widely accepted, studies by William Farr (1807–1883) of differences in cholera risk between the populations served by different companies supplying London’s drinking water during the fourth pandemic provided further evidence that contaminated drinking water was the culprit.8

Convincing scientific evidence was certainly not the only condition which needed to be fulfilled for the implementation of large-scale sanitation measures that required enormous public investments. Another condition was a degree of political consensus that this was necessary, or at least inevitable. As this was a period in which political conservatism and economic liberalism reigned supreme, this was far from self-evident and required arguments that fitted within the dominant way of thinking.

This can again be illustrated with the well-researched history of cholera prevention in Britain, where Edwin Chadwick (1800–1890) had published his Report on The Sanitary Condition of the Labouring Population of Great Britain in 1842, just before the third pandemic struck. This report showed that ill-health often caused poverty among the labouring classes, and was an important contributor to the high costs of public poverty relief. Chadwick therefore recommended providing each house with a constant water supply and with water-closets that would discharge into sewers. These would carry faecal waste to rural areas where it could be spread on the land as manure, preventing rivers from becoming polluted. This ‘sanitary idea’ was soon adopted, and became popular throughout Europe.9

Strikingly, the second, third and fourth pandemics all occurred in periods of political upheaval in Europe, and in association with revolution and wars. This is probably not a coincidence: unrest produced cholera, and cholera produced unrest. As mentioned above, the spread of cholera was often promoted by mass movements of people, during war or in response to famines, although most spread was due to trade. Cholera could also produce unrest, as in the case of ‘cholera riots’ against the elites in Russia or civil protests against restrictions on movement and trade.10

In addition to cholera, many other intestinal infections played a role in the high death rates of industrializing societies. These other diseases were less eye-catching, but their accumulated death toll has been much larger than that of cholera. They must have become common as soon as humans adopted a sedentary lifestyle, and had assumed epidemic proportions in the 17th and 18th centuries. Within the larger family of diarrheal diseases, a few stand out as having been particularly important.

Dysentery, a disease mainly caused by Shigella bacteria, is characterized by a typical bloody diarrhoea and was therefore already recognized in antiquity. It caused major epidemics during military campaigns and famines, but as an endemic disease also caused many deaths among infants and children in ‘normal’ years.11

Typhoid fever, a disease caused by Salmonella typhi bacteria, had less distinctive symptoms, and was long confused with typhus (see Chapter 4). It was only in the 1830s that the difference between the two diseases was recognized, and it is therefore difficult to trace the history of this disease through the ages. Together with what we now call paratyphoid fever, due to infection by other members of the Salmonella family, it used to cause huge numbers of cases of disease and deaths. Dysentery, typhoid and the other intestinal infections are all spread by contaminated food and water, which was gradually discovered in the second half of the 19th century, often before the responsible micro-organisms were identified.12

In North-western Europe, mortality from intestinal infections started to decline around the middle of the 19th century, as a result of gradual improvements in sanitation (see Suppl. Figure 9). Very steep declines continued in the first half of the 20th century. As in the case of other diseases, declines in mortality in other European regions followed later. In the beginning of the 20th century, mortality rates from typhoid and paratyphoid fever were still very high in Spain, Italy and other Southern European countries, and undoubtedly also in South-eastern and Eastern Europe, although for these countries the data series start considerably later. In many countries, mortality from typhoid and paratyphoid rose during both World Wars, as it did in Spain during the Civil War.13

Tuberculosis

The history of tuberculosis has been studied extensively – and for good reasons, because it was probably the most important cause of death in Northern and Western Europe during much of the 18th and 19th centuries. In cities like London, Paris, The Hague and Stockholm, respiratory tuberculosis, the most important form of this disease, accounted for between 10 and 25% of all deaths. Somewhat lower but still astonishingly high figures applied to countries as a whole when, in the course of the 19th century, national cause-of-death statistics became available. Because mortality was highest among young adults, the demographic, economic and social consequences were huge.14

Since then, tuberculosis mortality has declined enormously. The study of the causes of this decline has become something like a scientific battlefield between those who have argued that the decline of tuberculosis mortality is mainly due to improvements in living standards, and those who have emphasized the contribution of public health and other interventions. Since the 1970s, when McKeown published his iconoclastic studies of the decline of mortality in England & Wales, the first point of view had the upper hand for some time, but the accumulated weight of the critiques of McKeown’s analyses has gradually shifted the balance of opinion towards the second point of view.15

Tuberculosis is a disease that is caused by infection with Mycobacterium tuberculosis, which was discovered in 1882 by Robert Koch, who also discovered the causative agent of cholera. It usually enters the body through the lungs, where it may cause acute infection, but the bacterium may also lie dormant for a long time and still cause disease after many years, when the immune system fails as a result of stress, undernourishment, or – as has occurred more recently – HIV infection. While tuberculosis of the lungs and other respiratory organs is the most common form of the disease, many other organs may become affected, and without adequate treatment active tuberculosis has a high case fatality.

In the 19th century, another type of tuberculosis, caused by infection with Mycobacterium bovis, was also common, accounting for 20–30% of all non-pulmonary tuberculosis. This milder form of tuberculosis was transmitted in cow’s milk and mainly occurred among infants and young children. In the 20th century, it was eradicated by a combination of livestock sanitation (i.e., testing cattle for tuberculosis and destroying infected animals) and milk pasteurization.16

The term ‘tuberculosis’ refers to ‘tubercles’, i.e., the characteristic lumps of tissue which are found in the body of tuberculosis patients, and which contain tuberculosis bacteria. Before this term came into use, the disease was known under names like ‘consumption’, ‘phthisis’, and similar terms in national languages. These refer to a symptom characteristic of later stages of the disease, i.e., a state of exhaustion interrupted by temporary flare-ups, in which the patient appears to be ‘consumed’ by his or her illness.17

Before the discovery of the causative agent, it was widely believed, at least in Northern and Western Europe, that the main causes of tuberculosis were ‘constitutional’, i.e., that the disease found its origin in genetic and other characteristics of the diseased individual. It is not difficult to understand how this idea arose. Due to the slow progression of the disease it was not immediately obvious from where or whom individual cases arose, and tuberculosis was so common that often whole families fell ill. Yet, in Southern Europe tuberculosis was less common, and people generally believed that it was contagious even before the bacteriological revolution.18

Tuberculosis is an age-old human disease: traces of tuberculosis infection have been recovered from Egyptian and Peruvian mummies more than 1000 years old. The emergence of tuberculosis as an important infectious disease of humans is associated with the shift from hunting-gathering to the sedentary life of agriculturists, which greatly increased opportunities for transmission.19

However, as a result of industrialization and urbanization tuberculosis became much more prevalent, and much more important as a cause of death, in the 18th and 19th centuries. Overcrowded housing, exposure to dust in mining and other occupations, and a rise of excessive alcohol consumption – to name just a few side-effects of the Industrial Revolution – increased the likelihood of tuberculosis transmission as well as the likelihood of a fatal outcome of the disease. The resulting rise of mortality can clearly be seen in Swedish mortality data, which show an increase in mortality from respiratory tuberculosis between ca. 1750 and ca. 1850. In many other countries, this rise probably occurred before the start of national cause-of-death registration, but a rise can be witnessed in a few other European countries as well, such as Norway and Portugal where mortality rose until ca. 1900 and ca. 1930, respectively (Figure 13).20

Figure 13

Trends in tuberculosis mortality in Europe, 1750–2015

Notes: Both sexes combined. Before 1900: decadal or quinquennial data from various sources, calibrated using overlaps with data from Alderson for the first years of the 20th century. Between 1900 and 1960: quinquennial data. Source of data: before 1900, various national publications; after 1900, see Suppl. Table 1

In most countries, all we can see in the available data is a decline of tuberculosis mortality, and it is impossible to give a precise date to its start. McKeown thought that the decline in mortality from respiratory tuberculosis in England & Wales started around 1850, i.e., before sanitation and other public health interventions could be expected to have had an impact. Yet, as he himself noted, the timing is uncertain because of problems in cause-of-death classification and irregular fluctuations in the mortality rate. Others have argued that mortality decline did not start before the late 1860s, i.e., in a time when public health interventions may have started to have an effect.21

In addition to England & Wales, Sweden and the Netherlands also had a relatively early decline, i.e., a decline starting before or around 1875, but, as mentioned above, in some other European countries the decline started around the turn of the century or even later. As is clear from Figure 13, declines of mortality were generally precipitous, and although the rate of decline naturally slowed down in absolute terms as the mortality rate started to approach zero, in relative (or percentage) terms the rate of decline actually accelerated in the 1940s and 1950s in all European countries.22

Two other observations can be made from Figure 13 – observations which tuberculosis shares with several other causes of death. First, during most of the 20th century, countries in Southern and Central-eastern Europe lagged behind countries in Northern and Western Europe in their tuberculosis mortality decline. Second, countries in Central-eastern and Eastern Europe – for the latter, only post-World War II data are available – experienced a renewed (but small) rise of tuberculosis mortality in the 1990s and 2000s.

As mentioned above, there are several competing explanations for this long-term decline of tuberculosis mortality. A first possibility to consider is a spontaneous, favourable change in the biological relationship between the ‘agent’ (Mycobacterium tuberculosis) and its ‘host’ (European citizens). This is an explanation that always needs to be considered when one deals with the time-course of infectious diseases, because these often become less severe over time. This can occur through selection of less virulent bacteria which, because they do not kill or incapacitate their hosts, have a greater likelihood to spread to other people. On a longer time-scale, it can also occur through selective survival of genetically more resistant hosts. However, although this may have played a role in the early declines of tuberculosis mortality, it is a less likely explanation for the continued decline in later stages, because changes in virulence of Mycobacterium tuberculosis have never been demonstrated in laboratory studies, and because effects of the disease on reproduction of tuberculosis patients were limited.23

Another autonomously operating, biological factor may have been more important. As soon as the incidence of tuberculosis starts to decline, for example as a result of isolation of patients or less overcrowding, the number of patients who can spread the disease to others also becomes smaller. This sets in motion a self-propelling mechanism which, as has been shown in mathematical modelling studies, may well have been partly responsible for the continued rapid decline of the disease during the 20th century.24
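The logic of this mechanism can be illustrated with a minimal calculation (a schematic sketch with arbitrary values, not a reproduction of the modelling studies referred to above): once each infectious case gives rise, on average, to fewer than one new case, incidence shrinks by a constant proportion in every ‘generation’ of transmission, so that the decline continues even without any further intervention.

```python
# Schematic sketch (arbitrary values, not a calibrated tuberculosis model):
# once the average number of new cases generated per infectious case
# (the effective reproduction number, R_eff) falls below 1, incidence
# keeps shrinking from one transmission 'generation' to the next.

def project_incidence(initial_cases, r_eff, generations):
    """Project incidence per transmission 'generation' under a constant R_eff."""
    cases = [float(initial_cases)]
    for _ in range(generations):
        cases.append(cases[-1] * r_eff)  # each case produces r_eff new cases
    return cases

# Suppose isolation of patients and less overcrowding push R_eff from just
# above 1 to 0.8 (illustrative numbers only).
for r_eff in (1.05, 0.8):
    trajectory = project_incidence(initial_cases=1000, r_eff=r_eff, generations=10)
    print(f"R_eff = {r_eff}: " + ", ".join(f"{c:.0f}" for c in trajectory))
```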

A second factor to consider is a general improvement in living standards, which decreased the risks of transmission (e.g., through more spacious housing with less overcrowding) or the risk of becoming ill when infected (e.g., through better nutrition). Despite all the scientific criticisms of McKeown’s work, in which this explanation was championed, there can be little doubt that, during the whole trajectory of mortality decline, nutrition and other factors linked to general living standards must indeed have played a role. For example, tuberculosis mortality in many European countries rose during World Wars I and II, even in those countries that stayed neutral, and studies have shown that the most likely explanation was food scarcity.25

Furthermore, while McKeown did not have data on changes in food consumption in England & Wales during the 19th century, and thus had to speculate about the role of this factor, later studies have shown that food availability in this country did rise between 1750 and 1800, then stagnated between 1800 and 1850, and rose again between 1850 and 1900. These changes roughly coincide with periods of rising, stagnating and again rising life expectancy at birth, and provide some support for the idea that improvements in nutrition may have helped to bring down tuberculosis mortality.26

The third factor to consider – largely rejected by McKeown but increasingly accepted as an important contributory factor in the decline of tuberculosis mortality – is public health and medical interventions. Improvements in housing were not only a side-effect of increased living standards, but were also the result of stricter housing regulations and government investments in public housing projects, particularly after the turn of the 20th century. A similar point can be made for working conditions, which partly improved as a side-effect of changes in production and mining methods, but were also the result of labour union demands and labour protection laws.

Medical knowledge also helped to limit the spread of the disease, e.g., by educating patients not to spit, and by treating infective patients in hospitals or sanatoria instead of at home. Vaccination, first used in 1921, and outreach facilities for prevention and treatment of tuberculosis, implemented in many European countries in the 1920s, were important as well.27

Even before the 1940s, treatment probably also contributed to mortality decline. Although the first highly effective medical treatment, antibiotics, only became available in the 1940s, previous generations of doctors had gradually worked out how to treat tuberculosis patients. In the course of the 19th century, the newly invented stethoscope helped to diagnose the condition more accurately. Some treatments, such as a combination of rest and a nutritious diet, or collapsing the affected lung by surgically creating an artificial pneumothorax, may already have helped to slow down the progression of the disease.

The first effective drugs against tuberculosis, such as streptomycin, arrived late in the course of tuberculosis mortality decline in England & Wales, and therefore made a relatively small contribution in this country, as well as in other North-western European countries where the decline of tuberculosis mortality was already well underway. However, as Figure 13 shows, in those countries where tuberculosis mortality was still high, the precipitous declines in the 1940s and 1950s accounted for a much larger part of the over-all mortality decline.28

Although it has not been possible to disentangle the separate contribution of each of these three factors, a reasonable conclusion is that the decline of tuberculosis mortality reflects the combined and mutually reinforcing effect of all three. The relative importance of each factor has likely shifted over time, and has probably also been different between European countries, because they differed in their timing of tuberculosis mortality decline. Seen from a European perspective, it strikes one as a little odd that so much energy has been spent on finding the main factor driving tuberculosis mortality decline in England & Wales – a country unlikely to be representative of the wider European experience.

As can be seen in Figure 13, recent trends in tuberculosis have not been altogether favourable. During the 1990s, mortality from respiratory tuberculosis increased again in the former Soviet Union, and also in some South-eastern European countries (Bulgaria and Romania) and in Portugal. Because of the decoupling of mortality and incidence, trends in mortality no longer give an accurate picture of trends in incidence, and incidence has risen almost everywhere, including in Northern and Western Europe.29

This renewed rise of tuberculosis has a complex explanation. One immediate cause is the rise of drug-resistant, then multidrug-resistant, and then extensively drug-resistant strains of Mycobacterium tuberculosis. Another part of the story is that, after the successful push-back on tuberculosis in the 1940s and 1950s, tuberculosis control programs were neglected in many European countries.

However, wider societal factors also played a role. Immigration from countries outside Europe where tuberculosis was still endemic raised its incidence, as did the epidemics of injecting drug use and – partly related – AIDS. In the background, an increasing prevalence of homelessness, and increasing socioeconomic inequality generally, made parts of the population more vulnerable. Increasing rates of incarceration, and inadequate control of tuberculosis in prisons, also contributed.30

In response to this (world-wide) surge of tuberculosis, strong international action has been mounted, and more recently tuberculosis incidence and mortality in Europe have declined again.31

Syphilis

The mortality trends for syphilis are somewhat similar to those for respiratory tuberculosis in the same period, with steep declines during most of the 20th century (see Suppl. Figure 10). In contrast to tuberculosis there has been no epidemic rise of mortality from syphilis in the 1990s, but in this case mortality trends are somewhat misleading. Syphilis mortality has become almost completely decoupled from syphilis incidence, even more so than tuberculosis mortality has become decoupled from tuberculosis incidence, and hidden beneath the flat trends in syphilis mortality in the second half of the 20th century are rises in syphilis incidence of epidemic proportions.

The available evidence on long-term trends in syphilis incidence in Europe suggests that trends since the beginning of the 20th century, when syphilis incidence and prevalence were still very high, can be characterized as an over-all decline interrupted by four periods of steep temporary increases. Incidence and prevalence of syphilis in the general population rose during World War I, then declined, rose again during and shortly after World War II, declined to its lowest level ever in the 1950s, then rose again during and after the ‘sexual revolution’ of the 1960s, declined in response to the AIDS epidemic of the 1980s, but rose again in the 1990s after the AIDS scare had disappeared.32

Syphilis is a treacherous disease. After infection, it may lead to ‘early infectious syphilis’ with local ulceration a few weeks after infection, followed, after a few symptomless weeks, by a variety of non-specific signs of illness (skin rash, headaches, sore throat etc.). These symptoms usually disappear spontaneously, leading to a latent period of up to many years, but the disease may then return in the form of ‘late syphilis’ with serious cardiovascular and neurological complications, eventually leading to paralysis, dementia and even death. Syphilis is caused by infection with Treponema pallidum, which was discovered in 1905. In contrast to many other infectious diseases, however, the contagious nature of syphilis was obvious to medical professionals and lay people long before the bacteriological revolution.33

The origins of syphilis have long been debated, but it is most likely that it was brought from the Americas to Europe by Columbus and other seafarers in the last decade of the 15th century. It spread rapidly, partly as a result of Europe-wide military campaigns, and originally had a violent and malignant character. However, after a few decades it developed into the milder disease as we still know it today (illustrating a biological mechanism mentioned above in our discussion of the decline of tuberculosis mortality).34

Because syphilis is spread by sexual intercourse, attitudes to syphilis and syphilis control have always been strongly influenced by norms on sexual behaviour. Moralizing approaches have always competed with more pragmatic approaches to the control of syphilis and other sexually transmitted diseases (STDs).35

Historically, four phases in the approach to syphilis can be distinguished. In a first period (ca. 1490–ca. 1520), the sudden eruption of a very serious disease of unknown origins was commonly regarded as God’s punishment for men’s sins. In a second period (ca. 1520–ca. 1750), when it had become clear that syphilis spread through sexual intercourse, ‘double standards’ applied. Syphilis in lower class people was regarded as a punishment for their sins, whereas syphilis in higher class patients was seen as the acceptable risk of a frivolous life-style. In a third period (ca. 1750–ca. 1900), rejection and shame dominated the attitudes towards syphilis under the influence of the morality of the rising bourgeoisie, which regarded syphilis as a threat to family-life. Finally, in a fourth period (ca. 1900 to the present), a more pragmatic attitude came to prevail, with increasing involvement of the state, due to a recognition that syphilis was a threat to society as a whole, reducing the strength of the military, limiting successful procreation, etc.36

Syphilis control became an issue in the 19th century, mainly in the form of regulation of prostitution, but also of education of the public at large (including the military) about the dreadful disease that promiscuity could lead to. These efforts may have contributed to some decline of syphilis in the second half of the 19th century. Prostitution was very widespread, perhaps due to the disruption of family life caused by industrialization and urbanization which had brought many single men into Europe’s larger cities. It was more or less accepted that these men’s ‘sexual urges’ had to find an outlet in prostitution, so that bourgeois girls and women could remain chaste, but in order to limit prostitution’s medical and social risks it had to be strictly regulated.

This regulation implied that prostitutes were only allowed to work in brothels, which were placed under municipal and medical supervision. They had to be tested regularly for signs of venereal disease, and when they were found to be infected, they were prohibited from working until they were free of symptoms again. Treatment options were limited in the 19th century, and mainly consisted of various applications of mercury, which have never been shown to be effective but did have very unpleasant side-effects. The regulatory approach to prostitution started in the early 1800s in Paris, under the influence of the socio-medical studies of Alexandre Parent-Duchâtelet, and was later adopted in many other European countries.37

A change in approach occurred around 1900, involving a turn away from prostitution as the main or only source of syphilis infection, and towards more pragmatic strategies of syphilis control, in which what mattered most was effectiveness rather than morality. This turn coincided with the discovery of salvarsan (1909), which proved to be much more effective against syphilis than mercury, and created opportunities for a more medically oriented approach. In many European countries, venereal disease clinics and dispensaries were set up, which offered treatment free of charge and also offered testing and treatment of contacts of syphilis patients.

Other social-hygienic measures included targeted prevention and treatment facilities for military and naval personnel (Plate 11), deployment of specially trained nurses and social workers for active contact tracing, and screening and treatment of pregnant women to protect their babies from congenital syphilis. Although it remains unclear whether salvarsan as such reduced mortality, the combination of advances in treatment with outreach services almost certainly contributed to the general decline of syphilis incidence and mortality in the first decades of the 20th century.38

Plate 11

“Do not trust appearances.” Spanish poster warning against syphilis, 1938

Soldiers have always been at risk for sexually transmitted diseases, which were not only a threat to their personal health but also to army strength. Translation of the Spanish text: “Do not trust appearances … Sometimes they cheat.” Poster published by the Health headquarters of the Republican army during the Spanish Civil War. Lithograph by Blas. Wellcome Collection (CC BY 4.0)

In the 1940s, partly as a result of stepped-up research efforts because of the war, penicillin became available and proved to be much more effective than salvarsan. Several studies have shown that the introduction of penicillin coincided with an acceleration of syphilis mortality decline. The wide spread in syphilis mortality curves in the post-war period suggests, however, that delays in its adoption were common.39

Starting in the late 1940s, the incidence of ‘early infectious syphilis’ declined to a historical low in many European countries, briefly plateaued in the late 1950s, and then turned upwards again. Fortunately, and probably thanks to penicillin, this was not followed by a rise in ‘late syphilis’ and/or syphilis mortality, but it was worrying enough. Contemporary analyses pointed to a complex of causes that have since been summarized under the heading of the ‘sexual revolution’. The rise of syphilis and other STDs resulted from an increase in pre-marital sex, an increase in homosexual intercourse, and an increase in promiscuity generally. This ‘sexual enthusiasm’ was promoted by cultural change and the advent of safe oral contraceptives in the early 1960s. The increasing share of men-having-sex-with-men (MSM) among syphilis patients was particularly striking in this period.40

In Northern and Western European countries, the rise of syphilis abruptly reversed in the second half of the 1980s, probably as a result of the AIDS scare and campaigns to promote ‘safe sex’. However, when AIDS became a treatable disease in the 1990s, ‘safe sex’ practices declined and syphilis rose again.41

Scarlet Fever, Measles, Whooping Cough, Diphtheria

The four diseases that have been clustered in this section were important causes of childhood mortality in the 19th century. Scarlet fever, measles, whooping cough and diphtheria were diseases of crowding: they spread by airborne droplets and intimate contact. Their incidence was therefore higher where population density was higher and where families lived together in more cramped housing conditions.42

The frequency with which these diseases occurred increased when European countries started to industrialize and urbanize. Mortality from these diseases peaked in the 19th century before starting to decline again. For example, in the Netherlands mortality from measles, whooping cough and diphtheria was still very high in the last decades of the 19th century, but started to decline in the 1890s (see Suppl. Figure 11). Currently, the likelihood that a child will die from one of these diseases is almost nil, but behind us lies a time in which 5–10% of all children died from one of these diseases before reaching their 15th birthday.43

Mortality from these diseases fluctuated from year to year in regular cycles. These oscillations were due to the fact that after an epidemic had occurred, and all surviving children had been naturally immunized, it took a few years before a new group of susceptible children had been born and had become large enough to allow a new epidemic to occur.44
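The mechanism just described can be sketched with a toy calculation (purely illustrative; all parameter values below are arbitrary assumptions, not estimates from historical data): an epidemic depletes the pool of susceptible children, transmission then falters, and new births slowly rebuild the pool until it is large enough for the next outbreak.

```python
# Toy illustration of inter-epidemic cycles (all values are arbitrary):
# births slowly replenish the pool of susceptible children, so that
# outbreaks recur whenever the pool has grown large enough again.

def simulate(years=40, population=100_000, births_per_year=3_000,
             contacts_per_case=8.0, susceptible=20_000.0, cases=10.0):
    history = []
    for year in range(years):
        # New infections rise with the number of current cases and with the
        # share of the population that is still susceptible.
        new_cases = min(susceptible,
                        contacts_per_case * cases * susceptible / population)
        susceptible = susceptible - new_cases + births_per_year  # survivors become immune
        cases = new_cases
        history.append((year, round(new_cases)))
    return history

for year, n in simulate():
    print(f"{year:2d} {'#' * (n // 500)}")  # crude text plot of recurrent epidemic waves
```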

Mortality from these four conditions reached negligible levels in most European countries in the 1960s. These declines were due to a combination of factors: a ‘spontaneous’ decline in virulence of the infectious agent; ‘side-effects’ of modernization which reduced exposure, or increased resistance, to infection; and the introduction of effective methods of prevention and treatment.

One important ‘side-effect’ of modernization which played a role in the decline of mortality from childhood infections was fertility control. When families became smaller, the likelihood of transmission of infection within the family became smaller, which reduced the incidence of infectious diseases. It probably also reduced case fatality, because a lower force of infection led to a higher age at infection, an age at which children were less vulnerable.45
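A standard approximation from infectious-disease epidemiology makes this link explicit (it is not spelled out in the text, but is consistent with the argument made here): if the force of infection λ, i.e., the rate at which susceptible children acquire the infection, is roughly constant over age, the mean age at infection is approximately 1/λ. Halving the force of infection therefore roughly doubles the average age at which children catch the disease, shifting cases towards ages at which case fatality is lower.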

Other important factors which – as part of the socioeconomic modernization of European societies – reduced mortality from childhood infections were better nutrition (undernutrition increased case fatality), better education (educated mothers provided better care for their children), and better housing (more space and better ventilation reduced risks of infection). Some of these changes were definitely intentional, with better well-being or even better health in mind; others would simply not have occurred if the effects on well-being or health had been negative.46

Finally, medical advances contributed to mortality decline. At the end of the 19th century, medical science gained momentum. The discovery of the microbiological origin of these diseases not only allowed more targeted advice on how to prevent infection, but was also a springboard for the development of prophylactics such as vaccinations, and therapies such as immune sera and antibiotics.

The relative importance of each of these three groups of factors differs between the four diseases, and is also likely to differ between countries, because in a country where decline started later the scope for a contribution from medical advances was larger. We will now briefly discuss the history of the four childhood infectious diseases one-by-one, to illustrate when, where and why their decline occurred.

Scarlet fever is a disease with symptoms of a sore throat, fever, headache and a typical skin rash. It may cause fatal complications such as anaemia, meningitis, and sepsis, particularly in young children. It is due to infection by haemolytic (“blood-cell destroying”) streptococci which spread by intimate contact, and which may also have spread by contaminated milk.

This is one of the few diseases for which there is reliable evidence that its severity has spontaneously changed over time. In Western Europe, there was a rise of a virulent form of scarlet fever from the 1820s onwards, making scarlet fever the leading cause of death among the childhood infectious diseases in mid-century. Thereafter, it declined in severity, first in England and other Western European countries, and much later in Central-eastern and Eastern Europe.47

While a ‘spontaneous’ decline in virulence was probably the most important factor in the decline of scarlet fever mortality in the 19th century, other factors have made additional contributions in the 20th century. These include smaller families, more spacious housing and – from the 1930s and 1940s onwards – the introduction of sulphonamides and antibiotics. The latter resulted in an acceleration of the decline in scarlet fever mortality, as well as in the mortality from other streptococcal infections, such as erysipelas and puerperal fever, and the mortality from the sequelae of streptococcal infections (acute rheumatic fever and acute nephritis).48

Measles is a disease with symptoms of fever, cough, and (again) a typical skin rash. It may cause potentially fatal complications such as pneumonia, encephalitis (infection of the brain), and severe diarrhoea, and case fatality may be as high as 5–10% among malnourished children. It is one of the most highly communicable diseases, and is caused by infection with the measles virus (a morbillivirus), which is transmitted through the air and by direct contact. Measles needs a population of a substantial size to secure a continuous chain of susceptibles. It is thought to have arisen around 2500 BCE, when a similar virus in domesticated animals, perhaps rinderpest, jumped the species barrier after humans had started to live in cities.49

Measles occurred in large epidemics in the 18th and 19th centuries, and could strike very dramatically. It caused huge numbers of deaths when it arrived on virgin soil, for example (and famously) on the Faeroe Islands in 1846, which had been free of measles for 65 years. Mortality from measles remained high throughout the 19th century, partly as a negative side-effect of increased schooling which provided new reservoirs of infection.50

Mortality from measles started to decline in the first decades of the 20th century, not because incidence declined but because case fatality declined. Because case fatality was influenced by crowding, undernutrition, and previous respiratory infections, it is likely that the general improvement in living conditions helped to reduce case fatality. From the 1930s and 1940s onwards, sulphonamides and antibiotics helped to reduce deaths among measles patients as a result of secondary bacterial infections. The incidence of measles only started to decline when mass vaccination was gradually introduced in European countries from the 1960s onwards.51

Whooping cough, also known as pertussis, is a disease characterized by terrifying paroxysms of coughing. These culminate in a prolonged inspiration with a typical sound that gave the disease its name. Its potentially fatal complications include collapsed lungs, lack of oxygen and secondary bacterial infections, and it used to have a case fatality of around 10%. It is caused by infection with Bordetella pertussis, which has an airborne transmission.

Like the other childhood infections discussed in this section, its frequency as a cause of death rose in the 18th and 19th centuries. It reached a peak in the last decades of the 19th century, and then declined, at first because case fatality declined (from 10% in the 1880s to 1% during World War II, and then to only 0.1% in recent years), and more recently because incidence declined as well.52

As mentioned above, the decline in case fatality in the first half of the 20th century may have been partly due to smaller families. This shifted the age group experiencing the highest incidence of pertussis from very young to older children, among whom case fatality was lower. It may also have been partly due to better housing and diet and better nursing care. An effective vaccine was introduced in the 1930s and was in widespread use from the late 1940s onwards, leading to a decline in incidence and an acceleration of mortality decline.53

Unfortunately, this success story was followed by several temporary, but severe, setbacks. In Britain, a vaccine scare led to a decline in vaccine uptake in the mid-1970s, leading to a rise in whooping cough incidence around 1980. In many European countries, there was a resurgence of whooping cough in the late 1990s due to decreasing vaccine efficacy, possibly because of an evolutionary change in the micro-organism and/or the shift to an acellular vaccine which had fewer side-effects but proved less effective.54

Diphtheria is a disease with symptoms of sore throat, coughing, fever, and a swollen neck. It derives its name from a characteristic membrane that forms on the tonsils and in the throat, and that obstructs the airways. It is caused by infection with Corynebacterium diphtheriae which spreads through airborne droplets, intimate contact and contaminated milk. When the bacterium itself is infected by a phage virus, it excretes a powerful toxin that may damage the heart and the nervous system. Case fatality can then be in the order of 30–50%. Diphtheria already occurred in antiquity, but became more common in Europe in the 17th and 18th centuries, before mortality peaked in the second half of the 19th century.55

This is the only one of the four diseases for which an effective treatment was already developed in the 19th century. ‘Antitoxin’ – that is, passive immunization on the basis of serum from horses which had been infected with human diphtheria – became available in the 1890s. Together with tracheostomy – that is, creating an opening in the neck in order to place a tube into the child’s windpipe – antitoxin reduced case fatality and mortality from diphtheria in the first half of the 20th century. Other factors mentioned above (smaller families, more spacious housing, etc.) likely also played a role.56

In the 1920s active immunization with a vaccine was developed and shown to be effective, after which it spread rapidly in the US and Canada in the 1930s. Vaccination in Western Europe started to spread in the 1940s, even during World War II. Mass vaccination covering the entire population of children generally started in the 1950s, with the exception of Portugal where it started in 1966. Vaccination reduced diphtheria incidence, and accelerated the decline of diphtheria mortality.57

While Figure 14 clearly shows the over-all decline of diphtheria mortality in European countries (and the delayed decline in Portugal), it also shows a massive epidemic during World War II, and a setback in Eastern Europe in the 1990s. The diphtheria epidemic during the war struck many European countries, both warring and neutral, and both German-occupied and non-occupied. In the Netherlands this was the most serious epidemic of diphtheria ever. It may have been due to a particularly virulent strain of diphtheria that was already circulating in Western and Central-eastern Europe in the late 1930s, and/or to a decrease in natural immunity as a consequence of war-time conditions.58

Figure 14

Trends in diphtheria mortality in Europe, 1900–2015

Notes: Between 1900 and 1960: quinquennial data. Source of data: see Suppl. Table 1

In the Soviet Union, mass vaccination against diphtheria (and other childhood infections) was implemented in the 1950s, and this effectively controlled the disease. However, the incidence of diphtheria slowly rose again in the 1980s. A serious epidemic broke out around 1990, coinciding with the collapse of communism and the transition to a new political and economic order. The weakening of state structures led to declines in vaccine coverage and an epidemic of diphtheria with thousands of deaths, many of them adults.59

Russia did not report diphtheria deaths to the World Health Organization in this period, but Figure 14 does show the rise of diphtheria mortality in Latvia, where a serious epidemic occurred as well. With international support, aggressive countermeasures were taken in 1995, which within a few years brought diphtheria mortality down again in the countries of the former Soviet Union.60

Pneumonia, Influenza

Like the diseases treated in the previous section, pneumonia and influenza form part of a larger group of acute respiratory infections which were important causes of mortality in Europe before the middle of the 20th century. Trends in mortality from pneumonia and influenza are illustrated in Figure 15, together with trends in a few other respiratory diseases. The data come from the Netherlands, but similar trends have been found elsewhere. Please note that, in order to fit these diseases with very different mortality rates in the same graph, the vertical axis has a logarithmic scale.61

Figure 15

Trends in respiratory disease mortality in the Netherlands, 1901–1992

Notes: Logarithmic Y-axis. COPD = Chronic Obstructive Pulmonary Disease. Source of data: Judith H. Wolleswinkel-van den Bosch, The Epidemiological Transition in the Netherlands. Erasmus University, 1998

Mortality from all the acute respiratory conditions (pneumonia, influenza, otitis, laryngitis) has declined, in contrast to mortality from chronic respiratory conditions (chronic bronchitis, emphysema and asthma, taken together under their modern heading ‘Chronic Obstructive Pulmonary Disease’). In the Netherlands, as well as in many other European countries, mortality from COPD started to rise in the early 1950s, due to the smoking epidemic.62

The over-all decline of mortality from acute respiratory conditions has been far from smooth. This applies particularly to mortality from influenza, which has oscillated in cycles of between two and six years, reflecting periodic changes in virulence and immunity. Mortality from influenza had an exceptionally high peak in 1918 during the well-known pandemic of ‘Spanish flu’. But trends for the other diseases have also been quite volatile. For example, mortality from pneumonia and several other respiratory infections peaked in 1944, due to war-time conditions in the Netherlands, and mortality from laryngitis and otitis temporarily rose in the 1930s, due to a rise in streptococcal virulence.63

We will now discuss the trends for pneumonia and influenza in more detail, focussing on the explanation of mortality decline and temporary setbacks, and incorporating the experience of a wider range of European countries.

Pneumonia, particularly in its classic form of ‘lobar pneumonia’ in which a complete lobe of the lung was affected, was a very serious disease, with a case fatality of up to 30%. It was most commonly caused by infection with Streptococcus pneumoniae (also called the ‘pneumococcus’), discovered in 1880 by French biologist Louis Pasteur (1822–1895). However, it may also be caused by a range of other bacteria and by viruses.64

Antisera were introduced in the 1920s but had limited effectiveness, and so the sulphonamides, discovered in the 1930s, and penicillin, discovered in the early 1940s, were the first life-saving treatments for pneumonia. Figure 15 shows that mortality from pneumonia in the Netherlands declined abruptly in the late 1940s, and then continued to decline at a faster speed than in the 1920s and 1930s, until the early 1960s. A similarly abrupt decline, and/or acceleration of pre-existing decline, has been noted in many countries.65

However, the quantitative impact, and the contribution of these new medical treatments to the over-all decline of pneumonia mortality, differed markedly between European countries. This depended on whether or not pneumonia mortality had already reached a low level before they were introduced. Around 1930, pneumonia mortality was already quite low in Northern and Western Europe, whereas it was still high in the rest of Europe, so that the introduction of effective medical treatment could make a much larger contribution there (Figure 16).

Figure 16

Trends in pneumonia mortality in Europe, 1900–2015

Notes: Quinquennial data before 1960. Source of data: see Suppl. Table 1

For example, in countries like Sweden, Norway, England, the Netherlands, and Switzerland pneumonia mortality declined from 100–150 per 100,000 in 1930 to 25–50 per 100,000 in 1960 – certainly a most impressive decline. However, in countries like Italy, Spain, Greece, Czechoslovakia, and Hungary, pneumonia mortality declined from 250–350 per 100,000 in 1930 to 50–100 per 100,000 in 1960. Their decline was larger in both relative and absolute terms, and formed a much larger part of the over-all decline in pneumonia mortality during the 20th century.66
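Taking the midpoints of these ranges, purely for illustration: a fall from roughly 125 to about 38 per 100,000 is an absolute decline of some 88 per 100,000 and a relative decline of about 70%, whereas a fall from roughly 300 to about 75 per 100,000 is an absolute decline of some 225 per 100,000 and a relative decline of about 75%.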

Although mortality from pneumonia has declined, it is still a common cause of death, particularly among the elderly. A deceleration of pneumonia mortality decline occurred in the 1960s in the Netherlands (Figure 15), and around the same time in many other countries (Figure 16). This may partly be a manifestation of the decreased efficacy of antibiotics after the emergence of penicillin resistance. It probably also reflects increasing survival of people with chronic conditions, among whom pneumonia is one of the few remaining gateways to death. This also implies that the validity of pneumonia as an underlying cause of death has become somewhat doubtful.67

Recently, several European countries have introduced vaccination against pneumococcal pneumonia, but whether this is effective against pneumonia mortality is uncertain.68

Influenza is a disease of humans, but also of pigs, horses and birds, and is extremely contagious. During pandemics, it may infect more than half of the world’s population. This has given the disease its Italian name: the Italians blamed its massive occurrence on the ‘influence’ of the stars. Case fatality is usually low (below 1%) and death mainly occurs among the very young, the very old, and the immune-compromised. Yet, because of the enormous numbers of infected persons, influenza mortality can be very high.

Influenza is caused by a virus which was discovered in 1933 and which changes all the time, making permanent immunity impossible. These changes may occur through mutation of the virus in humans, through mutation of a virus in animals that crosses the species barrier, or through recombination between a human and an animal virus. Often these changes are small, but sometimes they are more radical, causing a pandemic, i.e., an epidemic affecting a large part of the world.69

Europe had several large-scale epidemics of influenza in the 16th and 17th centuries, and there were at least three documented pandemics in the 18th century (1729–30, 1732–33, 1781–82). The last of these three arrived in Europe from Russia, and spread by sea routes. Three-quarters of the European population fell ill, and despite the fact that case fatality was low, it caused hundreds of thousands of deaths. There were also at least three pandemics in the 19th century (1830–31, 1833, 1889–90). The last was called the ‘Russian flu’, because it arrived again from the East. It spread by steamship over sea, and by train over land, and caused between 250,000 and 300,000 deaths in Europe.70

However, the greatest influenza pandemic of all – and the one that eclipses all others in popular memory – was the ‘Spanish flu’ which killed around 1% of the European population between spring 1918 and spring 1919. There were around 2.5 million deaths in Europe, and at least 50 million deaths world-wide. In contrast to most other influenza epidemics it killed many young adults, perhaps because they had not experienced the 1889–90 influenza and therefore had no natural immunity.
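These figures illustrate the arithmetic noted above, namely that a modest case fatality combined with a very high attack rate can produce an enormous death toll: if roughly half of the European population was infected (an illustrative assumption, not a figure given here), then deaths amounting to about 1% of the population imply a case fatality in the order of 2%.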

There has been much speculation about a relationship between this pandemic and World War I. Was it a coincidence that it started in the last year of the ‘Great War’? Even the remotest countries on earth were struck by this pandemic, so it may have occurred anyway. But the war also increased transmission of the virus (due to massive troop movements, often in crowded ships), reduced human resistance (due to other diseases and other war-related suffering), and reduced collective capacity for countermeasures (due to countries’ focus on the war, and the collapse of some governments at the end of the war).71

Mortality from influenza was considerably higher in Europe than in the US. This has been attributed to the fact that the public health response in the US was more vigorous, with school closures and other measures that lowered the mortality peak. The death toll also differed markedly between European countries (see Suppl. Figure 12). Northern and Western Europe experienced far lower influenza mortality than Southern and South-eastern Europe. One possible explanation is that, as we have seen in previous sections, the latter countries still had a higher disease burden from other respiratory conditions (such as respiratory tuberculosis, pneumonia, …), and that the higher prevalence of co-morbidity increased influenza’s case fatality.72

After the 1918 pandemic, several other pandemics occurred, but although influenza mortality kept oscillating, none of the later pandemics stands out with an exceptional death toll. This is probably not only due to lower virulence of the virus, but also to better medical treatment, including antibiotics for secondary bacterial pneumonia, and better supportive care. Nevertheless, influenza has remained an important cause of death, particularly in cold winters and among the elderly.73

Whether the introduction of mass vaccination of the elderly and other high-risk groups has contributed to a decline in the incidence of and/or mortality from influenza is uncertain. Such vaccination was introduced in several European countries in the 1990s, and may have contributed to further mortality decline in recent years. Yet, studies comparing trends in influenza mortality with trends in vaccine uptake have not found a clear relationship.74

Despite progress in the prevention and treatment of influenza, some experts believe that there is still a serious risk that a new pandemic, with a much more virulent influenza virus, will arise and cause large numbers of deaths. So far, however, nothing of the sort has happened, despite world-wide scares of ‘bird flu’ in 2004 (which only caused a few hundred deaths world-wide) and ‘swine flu’ in 2009 (which perhaps caused 30,000 deaths in Europe, including excess deaths from other causes).75

Maternal, Infant and Perinatal Mortality

Maternal Mortality

Maternal mortality – defined as “the death of a woman while pregnant or within 42 days of termination of pregnancy, from any cause related to or aggravated by the pregnancy or its management” – nowadays is very rare. In 2015, the maternal mortality rate was far below 1 per 10,000 births in almost all European countries, down from more than 100 per 10,000 births three centuries ago.76

As Figure 17 illustrates, this is one of the most breath-taking trends ever seen in population health, with the steepest decline occurring in the 1940s and shared by all European countries. Only a few countries have data on maternal mortality going back to the 18th century or before. However, national data from Sweden and nationally representative data from English family reconstitution studies show that the maternal mortality rate in these two countries was above 100 per 10,000 births in the first half of the 18th century. The English study actually found an even higher rate, of more than 150 per 10,000 births, in the late 17th century. Regional data from France confirm this with estimates of between 100 and 200 maternal deaths per 10,000 births in the 17th and 18th centuries.77

Figure 17

Trends in maternal mortality in Europe, 1700–2015

Notes: Sparse data in many countries before 1900. Source of data: see Suppl. Table 1

At this time medical assistance of child-birth was still rare: most parturient women were assisted by female neighbours, family members and/or lay birth assistants. When complications arose, such as breech presentation, a too-narrow birth canal, post-partum haemorrhage or puerperal fever, little could be done. It is likely, therefore, that a maternal mortality rate of 100 deaths per 10,000 births or more represents the risk of maternal death in the absence of any professional assistance.78

In Sweden and England, maternal mortality already declined substantially between 1700 and 1850, to around half its original level. After 1850 we have data from many more European countries, which show highly variable levels and trends. In contrast to the pre-1850 period, for many countries we now have annual (instead of quinquennial or decadal) mortality data, which show very ‘spiky’ patterns up to the 1930s.

More generally, the maternal mortality trends in Europe are not altogether favourable between 1850 and the 1930s. The main exception is a group of countries consisting of Sweden, Norway, Denmark and the Netherlands. In these countries, maternal mortality declined strongly in the second half of the 19th century, reached comparatively low levels of just above 20 deaths per 10,000 births around the year 1900, but then rose somewhat again in the first decades of the 20th century, before starting a precipitous decline around 1935.

In all other European countries, including England and other parts of the United Kingdom, maternal mortality declined less or not at all in the second half of the 19th century, and several European countries had ‘spikes’ of maternal mortality exceeding a rate of 60 deaths per 10,000 births even in the first decades of the 20th century.

However, in the late 1930s and 1940s maternal mortality declined strongly all around Europe, to reach levels of ca. 10 per 10,000 births in 1950, with slower but continuous declines until the present day. The main exception is Romania, which experienced an epidemic of maternal mortality in the 1960s to 1980s, ending with a steep decline in 1989, coinciding with the fall of the communist regime.

As we will see in the next section, long-term trends of maternal mortality are remarkably different from those of infant mortality, particularly in the late 19th and early 20th century when infant mortality showed a more consistent decline (Figure 18). Trends are more similar between maternal mortality and late foetal mortality (Figure 19), suggesting common determinants between maternal and late foetal mortality, but not between maternal and infant mortality.

Figure 18

Trends in infant mortality in Europe, 1745–2015

Notes: Decadal data for England & Wales before 1840. Source of data: see Suppl. Table 1
Figure 19

Trends in still-births in Europe, 1775–2015

Notes: Sparse data for England & Wales before 1928; quinquennial data for many countries before 1980. Source of data: see Suppl. Table 1

Those who have studied possible explanations for the trends of maternal mortality have come to the conclusion that most of the decline can be attributed to improvements in obstetric practice. As always in these historical analyses, definite proof is impossible to obtain, but the available evidence is rather compelling. While improvements in living standards, general improvements in women’s health, and declines in fertility have also played a role, the main factor has probably been more effective medical assistance. Yet, this has not been a smooth trajectory.79

It is during the 18th century that the old style of childbirth started to change, both with regard to who attended the birth and to what was known and done. Medical doctors and surgeons became interested in pregnancy and childbirth, wrote treatises on how to deal with complications, and started to assist women from the higher social classes as their ‘man-midwife’ or ‘accoucheur’. In several European countries, such as Sweden and France, governments started to promote a formal education of lay birth assistants, based on an increased understanding of the anatomy of the uterus and the mechanics of delivery, as a result of which the number of trained ‘midwives’ gradually increased.

While it is likely that improvements in obstetric care contributed to the decline of maternal mortality during the 18th and first half of the 19th century, the decline stalled around 1850, with the exception of the Nordic countries and the Netherlands. This was due to a combination of factors. The increased involvement of medical doctors in child-birth, particularly within so-called lying-in hospitals, actually increased the risk of puerperal fever; and not all countries regulated community midwifery in such a way that all births outside hospital were attended by well-trained midwives.

The main difference between the Nordic countries and the Netherlands on the one hand, and most other European countries on the other hand was in the second factor. In the second half of the 19th century, most deliveries in the Nordic countries and the Netherlands still occurred at home, and were attended by qualified midwives who were able to apply new insights in their obstetric practice, such as the importance of antisepsis for the prevention of puerperal fever.80

In the 19th century, puerperal fever was the most important single cause of maternal mortality, accounting for up to half of all maternal deaths. We now know that it is mostly caused by infection with Streptococcus pyogenes, a bacterium with highly variable behaviour, generally present in people’s noses and throats, and involved in a range of diseases, including pharyngitis, otitis, erysipelas, scarlet fever, and acute rheumatic fever. During child-birth it can enter the woman’s body through her wounds, and the resulting puerperal sepsis was often fatal before the advent of sulphonamides and antibiotics in the 1930s and 1940s.

Puerperal fever has probably occurred throughout human history, but became epidemic with the increased involvement of medical practitioners in child-birth. This was because they could unwittingly introduce the bacterium during their interventions, for example after having done a vaginal examination on another woman in a lying-in hospital, after having conducted an autopsy on a woman who had died from puerperal fever, or simply because they carried the bacterium in their own nose and throat. It was a well-known fact that some doctors were tragically followed by epidemics of puerperal fever among the women they attended. The spikes in maternal mortality seen in Figure 17 were probably due to epidemics of puerperal fever.

The discovery of the aetiology of puerperal fever is traditionally associated with the name of Ignaz Semmelweis (1818–1865). Semmelweis was a medical doctor of Hungarian descent who worked in the lying-in part of the Allgemeines Krankenhaus in Vienna. By comparing two parts of this clinic, he discovered the connection between puerperal fever and previously conducted autopsies by the same doctors who assisted in deliveries. He published his findings in 1860, but his views on the aetiology of puerperal fever remained controversial until the end of the 19th century. So were similar views propagated by others, some of whom had discovered the contagious nature of puerperal fever long before Semmelweis.

The idea that puerperal fever had a simple aetiology only became widely accepted after Louis Pasteur (1822–1895) had demonstrated that infection was caused by living organisms, and after Joseph Lister (1827–1912) had demonstrated that ‘antisepsis’ with carbolic acid and other chemicals could prevent wound infection. Application of these new ideas led to a rapid reduction of the frequency of puerperal fever in the last decades of the 19th century, but unfortunately adoption was not universal, even in countries where these new ideas were widely diffused.

Instead of accelerating, the decline of maternal mortality stagnated in England and other parts of the United Kingdom in the second half of the 19th century and first decades of the 20th century (Figure 17). This has been attributed to the fact that, in contrast to midwives in the Nordic countries and the Netherlands, British midwives were not well-trained and did not adequately apply the lessons of the ‘bacteriological’ and ‘antiseptic’ revolutions. This coincided with a probable rise in the virulence of Streptococcus pyogenes in the first decades of the 20th century, which led to a rise in puerperal fever mortality in many European countries. It was the arrival of sulphonamides and penicillin that finally brought the mortality rate drastically down.81

The rapid, and almost simultaneous, decline in maternal mortality in the late 1930s and 1940s cannot, however, be explained by the reduction of puerperal fever mortality alone. Other causes of maternal mortality also started to decline rapidly. In the 19th century and first decades of the 20th century, the most important other causes of maternal mortality were haemorrhage (e.g., due to placenta praevia or other placental problems), pregnancy-induced hypertension (which may in severe forms lead to ‘toxaemia’ or ‘eclampsia’), and the complications of induced abortion. In a detailed analysis for England it was shown that all three declined in the late 1930s and 1940s, but with subtle differences in timing related to the timing of the interventions involved.

For example, while puerperal fever started to decline in the late 1930s, coinciding with the introduction of the sulphonamides, maternal mortality from haemorrhage started to decline in the early 1940s. This coincided with the introduction of ergometrine (a drug that helps the uterine blood vessels to contract), blood transfusion, and the transformation of the health service to better accommodate emergency cases during World War ii. Caesarean sections, which were too dangerous before the advent of blood transfusion and antibiotics, now also could be applied on a larger scale.82

The simultaneous decline of maternal mortality during the late 1930s and the 1940s in so many European countries can only be understood if we assume that the diffusion of these new insights and methods took place very rapidly. The only country that forms an exception to the favourable trends in the post-War period is Romania, where the brutally pro-natalist policies of the Ceausescu regime caused an epidemic of maternal mortality. Starting in 1966, the Romanian government tried to raise the birth rate by forbidding both contraceptives and induced abortion, as a result of which many women resorted to illegal abortion, often with fatal consequences.83

Despite the fact that, seen from a historical perspective, levels of maternal mortality are generally very low in Europe, there are still relevant variations, which point to differences in the quality of antenatal and perinatal care.84

Infant Mortality

Declines in infant mortality – deaths occurring in the first year of life – have been extremely important for the increase in life expectancy in Europe. The contribution of declines in infant mortality to the total increase in life expectancy since the 19th century exceeds that of any other age-group. This is not only because saving the life of an infant adds more years to life than saving the life of an older child or adult, but also because declines of mortality in the first year of life have been larger than those in other age-groups.85

It is difficult to grasp the enormity of these changes – and the enormity of the social and psychological impact infant mortality must have had in the past. Around 1870, before infant mortality started to decline, the infant mortality rate in most European countries ranged between 150 and 300 per 1000 live-born children. In 2015, the likelihood that a child would die before its first birth-day had declined to below 10 per 1000 almost everywhere; only Turkey still had an infant mortality rate slightly above 10 per 1000.

These high death rates were mainly due to the fact that many babies died from diarrhoeal and respiratory diseases. These were caused by a combination of unhygienic living conditions and inadequate infant care, e.g., overcrowded housing, lack of clean water, too early weaning, and bacterial contamination of food and milk. In many European countries, infants’ high risk of infection was also due to the fact that mothers often did not breast-feed their infants, but gave them animal milk, gruels or other artificial foods, or relied on poor ‘wet nurses’ in the country-side. Many factors played a role, ranging from a taboo on sexual intercourse during lactation to a complete lack of understanding of infants’ dietary needs. Also, breast-feeding was often impossible when women had to work away from home, which became more common as a result of low wages in industry and the high cost of living in cities.86

Around 1870, about one-third of infant deaths occurred during the first month of life (‘neonatal mortality’), whereas two-thirds occurred in the remaining eleven months (‘post-neonatal mortality’). Since then, because the decline of post-neonatal mortality has been much stronger than that of neonatal mortality, the distribution has reversed: nowadays, neonatal mortality accounts for two-thirds, and post-neonatal mortality for one-third of infant mortality.

A similar shift has occurred within the first month of life: deaths in the first week (‘early neonatal mortality’) used to account for only one-third of all neonatal deaths, but now far outnumber those in the remaining three weeks (‘late neonatal mortality’). This shift tells us something important: the causes of death in the first week of life are different from those in the rest of the first year of life, and have been much more difficult to tackle.

Death in the first week of life occurs among babies whose birth is compromised (e.g., because they are born prematurely, or are damaged during childbirth), and among babies who are already compromised during pregnancy (e.g., because they do not grow well, or have a congenital anomaly). These are health problems that need a different approach from what worked against the digestive and respiratory diseases which dominated infant mortality before 1870.87

The historically slower decline of first-week mortality is mirrored by a similarly slow decline of the still-birth rate, or the ‘late foetal mortality rate’. This is the death rate among unborn babies who are considered viable, i.e., who are developed well enough to be able to survive independently of their mother. As a result of its slow decline, the still-birth rate has become relatively more important as an object of concern, just like the early neonatal mortality rate. In the late 1940s this resulted in the promotion of the ‘perinatal mortality rate’ (which is the sum of late foetal and early neonatal mortality) as an additional indicator for monitoring the health of infants.88
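To keep these overlapping indicators apart, the sketch below shows how they are conventionally computed. It is a minimal illustration with invented counts, assuming the usual denominators (live births for infant and neonatal mortality, all births for still-birth and perinatal mortality); it is not based on data from this chapter.

```python
# Illustrative calculation of the mortality indicators defined above.
# All counts are invented for the sake of the example.

live_births = 10_000          # live-born children in a given year
still_births = 250            # late foetal deaths (considered viable)
deaths_first_week = 150       # early neonatal deaths
deaths_weeks_2_to_4 = 50      # late neonatal deaths
deaths_months_2_to_12 = 300   # post-neonatal deaths

neonatal_deaths = deaths_first_week + deaths_weeks_2_to_4
infant_deaths = neonatal_deaths + deaths_months_2_to_12

# Infant and neonatal mortality are conventionally expressed per 1,000 live births.
infant_mortality_rate = 1000 * infant_deaths / live_births
neonatal_mortality_rate = 1000 * neonatal_deaths / live_births

# Still-birth and perinatal mortality use all births (live- and still-born) as denominator.
all_births = live_births + still_births
still_birth_rate = 1000 * still_births / all_births
perinatal_mortality_rate = 1000 * (still_births + deaths_first_week) / all_births

print(f"infant mortality:    {infant_mortality_rate:.1f} per 1,000 live births")
print(f"neonatal mortality:  {neonatal_mortality_rate:.1f} per 1,000 live births")
print(f"still-birth rate:    {still_birth_rate:.1f} per 1,000 births")
print(f"perinatal mortality: {perinatal_mortality_rate:.1f} per 1,000 births")
```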

We will review the history of both infant and perinatal mortality in Europe, using – as elsewhere in this book – a compilation of data from national statistics. More than in the case of other statistics, these need to be interpreted with caution, because criteria for the registration of infant and perinatal mortality have varied, both over time and between countries, and because under-registration has been and still is common.89

Figure 18 illustrates the long-term trends of infant mortality in Europe. Like the decline of the all-ages mortality rate that we saw in Figure 2, the decline of infant mortality has not followed a smooth path. Although there has been a dramatic net decline, this has been interrupted by huge spikes, such as in Finland in 1868 when more than 400 out of 1000 infants died during the last great famine, and in Russia in 1942, when almost 300 out of 1000 infants died during the worst years of World War ii.

The Nordic countries already started national registration of infant mortality in the 18th century, reflecting an early interest in the fate of their new-borns among politicians and medical professionals. Sweden’s infant mortality rate has been highlighted in Figure 18, and had already started to decline by the beginning of the 19th century. Since then, it has been declining almost continuously. The same applies to Norway and Denmark. These early declines have been attributed to the promotion of breast-feeding, partly through community midwives, high levels of literacy, early uptake of smallpox vaccination, and other measures reducing exposure to infection. Due to their early and persistently rapid decline, Sweden’s and Norway’s infant mortality rates have been among the lowest in Europe throughout the 19th and 20th centuries.90

Other countries in which infant mortality rates can be followed since the 18th century are France and England. France experienced a decline in the last decades of the 18th century but, in contrast to Sweden, this was followed by stagnation of infant mortality during most of the 19th century. The early decline has been attributed to “la première médicalisation de la petite enfance à l’époque des lumières” [the first medicalisation of infancy in the age of Enlightenment]. In this period, physicians started to promote breast-feeding, safer methods of delivery assistance, and regular bathing of infants, which may already have helped to bring infant mortality down from ca. 300 to ca. 200 deaths per 1000 live-born.91

Infant mortality trends in England (highlighted in Figure 18) can also be followed over a very long period, partly thanks to family reconstitution studies. England had relatively low infant mortality in the 18th and early 19th centuries, perhaps because artificial feeding was less wide-spread than elsewhere. Yet, the secular decline of infant mortality started later than in other European countries. English trends were unfavourable in the second half of the 19th century, probably due to rapid urbanization which exposed increasing numbers of infants to the sanitary problems of England’s larger cities. The decline that started in the late 1890s has been studied extensively, and has been attributed to a combination of declining fertility, improvements in women’s education, sanitary reform, and improved milk supply and food hygiene.92

Similar factors must have played a role in the first phase of infant mortality decline elsewhere, but with national variations in timing which have, unfortunately, never been satisfactorily explained. National studies have revealed many of the factors involved in national infant mortality decline, but explaining why some countries (other than the Nordic countries) had earlier infant mortality decline than others has remained guesswork. Despite considerable research efforts, the same applies to the explanation of national variations in the timing of another important component of the demographic transition, fertility decline.93

In the second half of the 19th century, infant mortality was high and rising in the Netherlands, and even higher in Germany and Austria, probably because of unhygienic conditions and wide-spread artificial infant feeding in all three countries. However, infant mortality started to decline precipitously in the 1870s, due to a combination of changes in infant care, partly driven by cultural change (breast-feeding, modern hygienic practices) and the start of large-scale public sanitation.94

In France and Switzerland, infant mortality also started to decline in the 1870s. Spain and Italy followed in the 1880s, England and Scotland towards the end of the 1890s, and Hungary, Bulgaria, Romania, Yugoslavia, Russia, and Portugal after the year 1900.95

Underlying the specific causes of infant mortality decline, such as the promotion of breast-feeding and sanitary reform, was often a broader national movement to reduce the appallingly high levels of infant mortality. This is clear in the case of the Nordic countries, where both early registration of mortality and training of midwives were inspired by a political desire to strengthen the population. It also applies in varying degrees and at different points in time to other countries. For example, in France during the last decades of the 19th century a concern with both social justice and the national interest led the government to enact a series of laws to protect infants (e.g., a law regulating wet-nursing), to organize the distribution of pasteurized milk, and – somewhat later – to set up a system of home visiting.96

This public concern with the health of infants continued and strengthened during the first decades of the 20th century, and led to an expanding system of child health and social services, with child health clinics, food supplements, child benefits, antenatal clinics, etc. in all European countries. Meanwhile, improvements in obstetric care, a shift from home to hospital deliveries, the introduction of antibiotics, and later the creation of neonatal intensive care units and other advanced medical treatments made the medical component of these systems more and more effective, with all factors together helping to continue the decline of infant mortality.97

After World War ii, infant mortality continued to decline, and ultimately all national rates converged around values of between 2 and 10 per 1000. This also applies to the communist and post-communist countries of Europe, although in the Balkans it took more time than elsewhere. In the 1980s, infant mortality rates above 30 per 1000 were still registered in North Macedonia, Serbia, Kosovo and Albania, but in these (now independent) countries infant mortality rates have since declined to lower levels. Registration issues may, however, apply, both before and after 1991. For example, under-registration of infant mortality has been documented for the Soviet Union before 1991, and for many countries of the former Soviet Union as well as several countries in Central-eastern Europe after 1991.98

Still-births

The long-term trend in the still-birth rate does not at all resemble the trend in the infant mortality rate. In some of the Nordic countries, the still-birth rate can be followed since the 18th or early 19th century – again illustrating the early public engagement with the health of new-borns – and seems to have fluctuated in large waves, without indications for a consistent decline until the 1940s (Figure 19).99

Whereas the decline of the infant mortality rate has taken more than a century, the decline of the still-birth rate is of more recent date and appears to be concentrated in a short period of time starting around 1940. The decline coincides with the sudden decline of the maternal mortality rate that occurred in the same period.

This also applies to the perinatal mortality rate which, in some countries, can be traced since the second or third decade of the 20th century. In Sweden, Denmark, England & Wales and the Netherlands, perinatal mortality started to decline in the late 1930s or early 1940s, and has further declined without major interruptions until the present day. Similar declines have been registered in the post-war period for other European countries. Before World War ii, the perinatal mortality rate was around 50 per 1000: 50 still-born and first-week deaths per 1000 still- and live-born taken together. Currently, the perinatal mortality rate is below 10 per 1000 almost everywhere.100

The co-occurrence of the decline in the still-birth rate with the rapid decline in maternal mortality in the middle of the 20th century suggests that both had the same causes, i.e., improvements in obstetric care. That is indeed what different authors on this topic have concluded. Antibiotics not only saved mothers from puerperal infection, but together with the advent of blood transfusion also reduced the infection risks of operative procedures. As caesarean sections became less risky for the mother, they could now also be performed to save the life of her baby. Improvements in how midwives and specialized obstetricians handled childbirth and its complications probably also helped, as did the introduction of systematic antenatal care, in which women at risk were identified, counselled and, increasingly, referred for in-hospital delivery.101

Improvements in antenatal and perinatal care continued during the second half of the 20th century, and the continued decline of perinatal mortality reflects the combined effect of a wide range of favourable trends. These include important behavioural changes, such as fewer unwanted pregnancies, fewer teenage and multiparous pregnancies, less smoking in pregnancy, and better nutrition (e.g., more fresh fruits and vegetables, leading to fewer neural tube defects). Important advances in care for mother and child occurred as well. Advances in the antenatal period include screening for congenital anomalies followed by induced abortion, and more wide-spread and systematic use of antenatal care. Advances in care during and immediately after child-birth included initiating delivery in overdue pregnancies, increased use of caesarean sections for babies at risk, regionalization of care, and creation of neonatal intensive care units.102

Not all trends were favourable, however, and the decline of perinatal mortality occurred despite the continued high frequency of premature birth. The latter even increased in some European countries in the past decades. Several factors, some of which are the result of more medical intervention, have contributed to an upward pressure on the preterm birth rate. There is an elevated risk of preterm birth in multiple pregnancies, and these have increased as a result of the application of in vitro fertilization. There has also been an increase in artificially induced early childbirth, aiming to protect the baby against the risks of a longer stay in the womb, for example in the case of eclampsia (severe pregnancy hypertension). In addition, the prevalence of some risk factors for preterm birth has risen as well, such as higher age and obesity of pregnant women. Although survival of preterm babies has improved, long-term consequences are often severe, and include cerebral palsy and difficulties at school.103

The remarkable rise of the still-birth rate in Russia (and also Ukraine, see Figure 19) during the 1990s illustrates the sensitivity of these rates to registration practices. In the Soviet Union, the official criteria for registering still-births, live-births, and deaths among the live-born differed in subtle but relevant ways from the criteria recommended by the World Health Organization, to the effect that infant and perinatal mortality rates were underestimated. This changed in the 1990s, when Russia and the newly independent states started to harmonize their criteria with those of the WHO. The steep but temporary rise of the still-birth rate probably reflects an abrupt change, and later correction, in registration practices, and cautions against taking the currently low perinatal mortality rates in Eastern Europe at face-value.104

Other Health Problems of Industrializing Societies

Pellagra, Rickets, Goitre

Nutrient deficiencies used to be common, and an important cause of disease throughout Europe. One important group of such diseases are ‘avitaminoses’, i.e., diseases due to a lack of vitamins. Vitamins are essential dietary elements that are necessary in small quantities for the proper functioning of the organism. Although vitamins and their role in health and disease were only discovered in the first decades of the 20th century, the diseases caused by a lack of vitamins have been recognized for a long time. These include beriberi (now known to be due to lack of vitamin B1), pellagra (vitamin B3), scurvy (vitamin C) and rickets (vitamin D). Other important nutrient deficiencies were iodine-deficiency (leading to goitre and cretinism) and iron-deficiency (leading to anaemia). In this section, we will briefly illustrate the European history of this group of diseases with the long-term trends of pellagra, rickets, and goitre.

Pellagra is a disease of people living on a diet mainly consisting of maize. It is characterized by dermatitis (‘pelle agra’ meaning rough skin), diarrhoea and dementia (Plate 12). Before its cause was discovered, it could be highly fatal. Like potatoes, maize was brought to Europe from the New World in the 16th and 17th centuries. Maize grew easily in Southern Europe, and replaced wheat as a staple food for the poor in rural areas of Spain, Portugal, Southern France, Northern Italy, Yugoslavia, and Romania. Maize porridge (polenta) and other preparations of maize contain very little niacin (vitamin B3), particularly if maize is not treated in the traditional American-Indian way. The adoption of a monotonous maize diet among poor farmers and land labourers in Southern and South-eastern Europe therefore led to a serious rise of pellagra in the 18th and 19th centuries.105

Plate 12

A woman suffering from chronic pellagra. Watercolour, ca. 1925

This picture shows an Italian woman suffering from pellagra, showing the typical skin abnormalities. These are due to the fact that the effects of niacin deficiency are felt most in body parts with high rates of cell turnover, such as the skin. However, chronic pellagra also caused serious neurological symptoms, and Italian mental hospitals once housed many pellagra patients. Watercolour by A.J.E. Terzi. Wellcome Collection (CC BY 4.0)

Quantitative data on long-term trends in pellagra are scarce – most countries where pellagra occurred were underdeveloped at the time and did not keep registers or conduct surveys. The main exception is Italy, where the Ministries of Agriculture and Internal Affairs conducted several large-scale surveys of pellagra. In 1879, the total number of cases in Italy as a whole was still around 100,000 (3.4 per 1000 of the population). In the last decades of the 19th century, prevalence declined strongly, to 42,000 cases (1.2 per 1000) in 1909. In 1954, only 25 cases were left in the country as a whole. Strong declines have also occurred elsewhere, creating another striking picture of a disease’s ‘rise-and-fall’.106

In Italy as in other European countries, the decline of pellagra was partly caused by the improvements in nutrition that accompanied the general rise in living standards and diversification of diets. Yet, deliberate interventions to prevent and treat pellagra also played a crucial role. The link with a monotonous maize diet had already been recognized by some in the 19th century, leading to early attempts to change diets for the better, and to improve the miserable living conditions of poor agricultural workers. It had also already been found that yeast (later shown to be rich in niacin) could cure pellagra. More precisely targeted interventions, such as food fortification for the prevention of pellagra, and vitamin injections for its treatment, became possible in the late 1930s, after the scientific demonstration that the disease was due to a lack of niacin.107

Rickets is another nutrient deficiency disease that rose strongly and then fell precipitously during the socioeconomic modernization of European societies. Its name has an unknown origin, and is synonymous with ‘rachitis’. Rickets is due to faulty ossification of bones, which leads to weak bones that fracture easily, bowed legs, stunted growth, and various other abnormalities, including mental retardation. It was a disease of growing infants and children, caused by a lack of dietary vitamin D combined with insufficient exposure to sunlight. It was first described in the 17th century, apparently because the disease then started to occur more frequently, first in England, then in other countries in North-western Europe. It has remained uncommon in Southern Europe.

Rickets already occurred in prehistory and in antiquity, but became very common in the 18th and 19th centuries as a result of industrialization and urbanization. The rise was due to the increasing numbers of children growing up in crowded and air-polluted urban centres, with inadequate diets and very little exposure to sunlight. Sunlight is needed to make active vitamin D from vitamin D precursors in the skin. In the absence of active vitamin D, dietary calcium cannot be properly absorbed in the intestines, leading to the skeletal and other abnormalities mentioned above. Because England was ahead of other countries in its industrialization and urbanization, it also led the world in rickets, which was therefore often called ‘the English disease’.108

As in the case of pellagra, quantitative data on long-term trends in rickets are hard to find. It was not a directly fatal disease, but did appear in the London Bills of Mortality as a cause of 2 to 3% of all deaths by the mid-17th century, to disappear again in the first half of the 18th century. In the 19th century, rickets prevalence among children must have been very high, as shown by hospital statistics. Data on admissions to children’s hospitals, or visits to policlinics of children’s hospitals in Europe’s large cities, show that in the 1860s to 1880s the prevalence of rickets ranged between 8 and 30% of all children below the age of 5 years in Copenhagen, Basel, Dresden, Berlin, Frankfurt-on-Main, London, Manchester and Prague. Mild forms of rickets were present in even higher percentages of children.109

Although vitamin D was only identified in 1922, the role of lack of sunlight had already been established in the late 19th century. The beneficial effects of cod-liver oil, which is naturally rich in vitamin D, were known even earlier, and treatment with cod-liver oil was already applied sporadically in the 19th century. An experimental study among children in Vienna in 1919–1922 definitively showed that addition of cod-liver oil to their diets prevented the occurrence of rickets, and that cod-liver oil also cured the disease among rachitic children. From then onwards, cod-liver oil has been used prophylactically in millions of children in North-western Europe. As a result of improvements in housing conditions and diet, and of these specific prevention efforts, rickets has disappeared almost completely. However, it has recently returned in Europe as a disease among migrants (due to culturally prescribed whole-skin covering or to dark skin, which the weaker sun-rays of Northern latitudes cannot sufficiently penetrate) and among families following unusual diets (e.g., infants on a vegan diet).110

Goitre, an enlargement of the thyroid that is due to iodine-deficiency, is different from the previous two diseases in that it has not followed a clear pattern of ‘rise-and-fall’. It also had a peculiar geographic distribution across Europe, with a high prevalence of the disease in, for example, the Alps, Pyrenees and Apennines, and a number of circumscribed areas in Scandinavia, Yugoslavia, Hungary and Romania. This is probably due to the fact that drinking water in these areas does not naturally contain iodine, a mineral necessary for the synthesis of thyroid hormones.111

The enlargement of the thyroid results from attempts of the organism to increase the production of thyroid hormone. More serious than the goitre itself are some of the other consequences of a lack of thyroid hormone, such as growth retardation, mental retardation, fatigue and depression. Iodine deficiency in pregnancy can lead to cretinism in offspring, a congenital form of the disease that is characterized by severely stunted physical growth (dwarfism). Epidemiological studies conducted in European regions where goitre was endemic, showed that in the 1920s to 1940s the prevalence of the disease was as high as 20 to 50% of all children, and that the prevalence of cretinism was as high as 0.1 to 1%.112

Iodine was discovered in the early 19th century, as part of the rise of modern chemistry. A possible role of iodine deficiency in causing goitre was already suspected in the 19th century, but side effects of iodine prophylaxis and treatment did not encourage its wide-spread use. It was in the early 1920s that the effectiveness of small doses of iodine was proven experimentally, after which iodization of table salt was gradually introduced as a public health measure to prevent goitre. Switzerland was the first European country to apply this policy, and to eliminate goitre and cretinism. Many other countries followed, and goitre and cretinism have since receded greatly.113

Unfortunately, low-grade iodine deficiency returned in the 1980s and 1990s, when almost half of all European children and adults were shown to have insufficient iodine intake. This reversal has occurred partly because iodization of table salt is no longer strictly enforced, and partly because table salt is no longer the main source of salt in the diet. The rise of iodine deficiency was particularly strong in Eastern Europe, where the political disruption after the collapse of the Soviet Union also disrupted these countries’ iodization policies. The main health risk now is that this may cause cognitive impairment in children born to mothers with low-grade iodine deficiency during pregnancy. Although the World Health Organization has tried to improve the situation, progress is unsatisfactory in many countries.114

Peptic Ulcer, Appendicitis

Peptic ulcer (i.e., stomach and duodenal ulcer) and appendicitis have been combined in one section, because both have a very striking ‘rise-and-fall’ pattern within the same, relatively short time-frame. Mortality data show a steep rise of both conditions until the 1930s, and an equally steep decline thereafter. The rising legs of these mortality curves are due to a rising incidence of these diseases, whereas the declining legs reflect a combination of decreasing incidence and decreasing case fatality.

In the case of peptic ulcer, a strong ‘birth cohort effect’ underlies these trends. In many European countries, mortality from gastric and duodenal ulcer increased among consecutive generations born during the second half of the 19th century, and then decreased in generations born around the turn of the 20th century and later. Such a ‘birth cohort effect’ suggests an influence of changes in early-life exposure to environmental factors, which influence disease risks throughout the subsequent life-course. However, after the 1960s declines in peptic ulcer mortality were no longer due to birth cohort effects, but occurred as ‘period effects’, i.e., simultaneously in all birth cohorts. This points more in the direction of treatment effects, or other immediately operating changes affecting all birth cohorts at the same time (see Suppl. Figure 13).115
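As a rough illustration of how such cohort patterns can be made visible, the sketch below rearranges age- and period-specific death rates by year of birth (cohort ≈ period − age). The numbers are invented, and this is not the method of the studies cited here – just the generic bookkeeping behind an age-period-cohort display.

```python
# A generic way to look for birth-cohort patterns in cause-specific mortality:
# rearrange age- and period-specific death rates by birth cohort (cohort = period - age).
# The toy data below are invented purely to illustrate the bookkeeping.

# rates[(age_group_start, period)] = deaths per 100,000 in that age group and calendar period
rates = {
    (40, 1920): 5, (50, 1920): 9,  (60, 1920): 12,
    (40, 1930): 7, (50, 1930): 12, (60, 1930): 18,
    (40, 1940): 6, (50, 1940): 14, (60, 1940): 22,
}

by_cohort = {}
for (age, period), rate in rates.items():
    cohort = period - age  # approximate decade of birth
    by_cohort.setdefault(cohort, []).append((age, rate))

for cohort in sorted(by_cohort):
    series = sorted(by_cohort[cohort])
    print(f"born around {cohort}: " + ", ".join(f"age {a}: {r}" for a, r in series))

# A cohort effect shows up as rates that differ systematically between birth cohorts
# at the same ages; a period effect shows up as a change affecting all cohorts
# in the same calendar years.
```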

The causes of ulcers of the stomach and duodenum, and of the increased acid secretion in the stomach that produces these ulcers, have long remained mysterious. Because of the rise of these diseases in the 19th century, their causes were first sought in changes in nutrition or exposure to psychosocial stress. It was only in the 1980s that it was discovered that infection with Helicobacter pylori played a crucial role. Since then, treatment with antibiotics has been added to the management of the disease, which previously mainly focused on lowering acid-production (and surgical removal of parts of the stomach and duodenum in very severe cases).

Infection with Helicobacter pylori, and more specifically a shift in the ages at which children become infected with this micro-organism, also provides a plausible explanation for the birth cohort effects just mentioned. According to this hypothesis, generations born before the later 19th century became infected with H. pylori as toddlers, and developed ‘atrophic gastritis’ (inflammation of the stomach with reduced acid secretion) which protected them from peptic ulcer in their later lives. As a result of improved hygiene and lower risks of transmission within the household, however, later born generations were infected with H. pylori at an older age. This made them develop less severe gastritis with less reduction of acid secretion, or even an increase in acid secretion, causing an increase in peptic ulcer incidence in their later lives. Finally, generations who were born even later benefited from further improved hygiene and a lesser risk of infection with H. pylori, which explains the later decline of the incidence of peptic ulcer.116

The evidence for this explanation is largely circumstantial, and although the explanation fits the cohort patterns, other factors may have played a role as well. The striking pattern of ‘rise-and-fall’ may also be partly due to increased recognition of the disease in the early decades of the 20th century (contributing to its apparent rise), and improvements in treatment during the decline of the disease (surgery, blood transfusions, …). The introduction of effective acid-lowering drugs (cimetidine and other H2-receptor-antagonists) has caused further mortality declines from the 1970s onwards – explaining why recent mortality trends no longer exhibit a birth cohort pattern.117

Surprisingly, trends in mortality from appendicitis are roughly similar to those for peptic ulcer: mortality started to rise in the late 19th century, peaked around the 1930s, and declined steeply thereafter (Figure 20). However, in contrast to peptic ulcer, no clear birth-cohort patterns have been identified. There are two competing hypotheses about the explanation of the rise and fall of this disease.

Figure 20

Trends in appendicitis mortality in Europe, 1900–2015

Notes: Quinquennial data before 1960. Source of data: see Suppl. Table 1

The first is the ‘dietary fibre’ hypothesis. Traditional diets contain large amounts of fibre, which produce large and soft stools that traverse the intestine rapidly. This decreases the risk of obstruction and subsequent infection of the appendix. In this hypothesis, the rise of the incidence of appendicitis is due to a change from a traditional diet rich in vegetables and cereals to a diet rich in refined food, meat and sugar. Yet, because the decline in the incidence of appendicitis does not coincide with an increase in fibre intake, this hypothesis cannot explain the declining leg of the trend curve, for which other factors have to be invoked. These include the increased use of antibiotics (which may unintentionally have decreased the incidence of appendicitis) and improvements in medical care (safer and more effective surgery, which has certainly reduced the case fatality of the disease).118

The second hypothesis is – again – a shift in the age-distribution of infection with intestinal bacteria. Due to improved hygiene and decreased risks of transmission within the household, children born since the late 19th century have experienced intestinal infections at a later age, at which the lymphoid tissue surrounding the appendicular orifice is more developed, so that infection more often leads to obstruction of the appendix. According to this hypothesis, the decline of appendicitis after the 1930s is due to further improved hygienic standards, which reduced the risk of infections and/or shifted the age of infection even further upwards. Here again, the decline in mortality of the disease may reflect a combination of decreased incidence and decreased case fatality due to improvements in treatment.119

Lung Diseases Caused by Occupational and Environmental Exposures

That many diseases are man-made certainly applies to the diseases caused by the work people do. Working is indispensable to provide for the necessities of life, and so is perhaps some wear-and-tear as a result of the bodily and mental effort that work requires, but many forms of work cause specific health problems that go well beyond such wear-and-tear. Additionally, the actual production processes may lead to pollution of the environment affecting the health of residents of surrounding areas.

Rises-and-falls of health problems linked to occupational and environmental exposures have therefore accompanied European countries’ economic development over the past three centuries. However, as explained in Chapter 1, secular changes in these health problems are difficult to capture in a disease-specific approach. This section, although focused on lung diseases caused by occupational and environmental exposures, is therefore somewhat different from other sections.

That working conditions sometimes cause health problems must have been the case throughout human history. Many pre-modern occupations had health risks that were so obvious that Bernardino Ramazzini (1633–1714) could describe more than 50 hazardous occupations in his famous De Morbis Artificum Diatriba (On the Diseases of Artisans) – the first book ever on occupational diseases, published in 1700.120

However, the following centuries brought a massive increase of occupational diseases and injuries, because industrial modes of production exposed workers to hazardous physical forces and chemical compounds on a scale never seen before. For example, mines for the extraction of coal, iron and other minerals became deeper and more dangerous, and use of steam and electrical power to mechanize work processes increased the risk of injuries, and released dusts and fumes that upon inhalation could cause lung disease.121

Because of the enormous variety in physical and chemical exposures to which economic modernization led, the number of occupational diseases has become overwhelmingly large. Hunter’s Diseases of Occupations, currently the standard textbook in this area, describes hundreds of diseases caused by exposure to specific metals, gases, noise, vibration, heat or cold, barometric pressure, radiation, repeated movements, infections and stress.122

Often, it took many years of labour union and public health activism to demonstrate these dangers, and then many years again to enforce countermeasures against often powerful commercial interests. Due to laws, regulations and technical fixes which ultimately made work in mining and manufacturing safer, and due to the rise of the service sector which reduced the number of people working in mining and manufacturing, many occupational diseases and injuries are much less frequent today than they were in the 19th century. Here again, therefore, we encounter many examples of the ‘rise-and-fall’ of diseases.

Occupational exposures may cause diseases in any organ system, but play a particularly prominent and specific role in some diseases of the respiratory system, probably because inhalation leads to more intense contact with hazardous substances than other forms of bodily contact. Diseases of the airways and lungs that may be caused by occupational exposures include asthma, chronic obstructive pulmonary disease, pneumoconiosis, lung cancer, and mesothelioma, of which we will only discuss pneumoconiosis and mesothelioma in some detail. Unfortunately, availability of European-wide data on trends in the occurrence of these diseases is seriously limited, so we will have to mainly rely on reports in the scientific literature.123

Pneumoconiosis is a disease caused by accumulation of dust in the lungs, and, depending on the type of dust involved, is given more specific names, such as silicosis (due to inhalation of silica dust), anthracosis (due to inhalation of coal dust), and asbestosis (due to inhalation of asbestos fibres). Pneumoconiosis leads to an increasing and disabling shortness of breath, and may ultimately lead to death.

That miners, stonecutters and workers in other dust-exposed occupations have an increased risk of lung disease had been known for a long time. However, the increase in mining, particularly of coal, and the increasing use of power-driven tools for grinding in the 19th century led to a big rise in occupational lung disease (Plate 13). This was first noted in England, where industrialization was farthest advanced, but other European countries such as Germany and France soon followed.124

Plate 13

Miners during their lunch-break in a coal mine in Limburg, 1945

Work in coal mines, often deep under the ground, was warm and dangerous, and inhalation of coal dust caused ‘black lung disease’, which was later recognized as one of the pneumoconioses. On this photograph, two miners in the Staatsmijn Wilhelmina (province of Limburg, the Netherlands), covered in coal dust, eat their lunch. Collection Regionaal Historisch Centrum Limburg (Fotocollectie Staatsmijnen/dsm). Reproduced with permission

Because other respiratory diseases, such as tuberculosis, were also common among workers in these occupations, it took a long time before pneumoconiosis was recognized as a separate disease. During most of the 19th century, the respiratory diseases of dust-exposed workers were regarded as a form of ‘phthisis’ or ‘consumption’ to which they were particularly sensitive.

The bacteriological revolution, which successfully uncovered the bacteriological origins of tuberculosis and many other respiratory diseases, made it seem plausible that the high disease rates among dust-exposed workers were due to their poor personal hygiene and unsanitary living conditions. As a result, research into the effect of dust stopped for a while, and it was claimed that quartz lungs, coal lungs and the like “belong[ed] rather in a cabinet of curiosities than in industrial hygiene.”125

It was only in the 1920s and 1930s that the specific aetiology of these diseases was finally recognized. This was mainly due to studies among miners – and to miners’ social and political struggle to receive compensation for the health risks to which they were exposed. Mortality statistics in England & Wales showed increasingly high mortality among miners around the turn of the 20th century, which led to government studies and official committees, to labour union activism, and to technical improvements in mining in the 1930s which finally reduced the risk of pneumoconiosis.126

Occupational exposures are also an important cause of cancer. Many agents to which workers in various industries may be exposed have been classified as carcinogenic, including asbestos, heavy metals, mineral dusts, polycyclic aromatic hydrocarbons and ionizing radiation. The contribution of occupational exposures to cancer occurrence in the population as a whole is in the order of 1–4%, but among workers in the relevant industries it may be up to 20%.127

Here we have space for just one illustration: mesothelioma, a cancer of (mostly) the lining of the lungs and chest wall that is caused by occupational exposure to asbestos. Asbestos has physical properties which make it attractive for the construction industry, e.g., to insulate buildings, ships, and electrical equipment. There is often a delay of 30 years or more between first exposure to asbestos and the incidence of mesothelioma, and the disease is almost uniformly fatal.

The association between asbestos exposure and mesothelioma was first noted in 1960, but regulatory efforts to reduce and ultimately eliminate asbestos exposure often took a long time to be enacted. Most countries in Northern and Western Europe adopted a ban on asbestos in the 1980s or 1990s, to be followed by many countries in Southern and Central-eastern Europe in the first decade of the 21st century. Many Eastern European countries still have no ban at all, and although asbestos use has declined there as well, it is still comparatively high.128

Asbestos use peaked around 1980, and had by then caused an epidemic of mesothelioma in Northern and Western Europe which started in the 1970s and 1980s, and which has been predicted to level off and then decline in the 2010s or 2020s. International data on mesothelioma mortality are only available since the early 1990s (Suppl. Figure 14).129

Rates of mesothelioma mortality have remained low in other parts of Europe (with the exception of Italy), which is somewhat surprising in view of the considerable levels of asbestos use in Southern, Central-eastern and Eastern Europe. This may partly be due to use of a less carcinogenic form of asbestos, but may also be a matter of under-registration (or of a rise-still-to-come).130

Most of the rise of mesothelioma in Northern and Western Europe could have been avoided if earlier action had been taken. This can be illustrated with the contrasting experiences of Sweden and the Netherlands. Whereas Sweden had its first asbestos regulation in 1964, followed by increasingly stricter rules on the use of asbestos in the 1970s and a ban in 1989, the Netherlands only started to act in the 1970s. This not only led to a decade’s delay before the peak in asbestos use was reached, but also to a much higher peak in asbestos use in the Netherlands than in Sweden.131

At present, other health problems than the specific diseases listed in Hunter’s Diseases of Occupations dominate the statistics of sickness absenteeism and work disability: common musculoskeletal diseases like arm and shoulder complaints and low back pain, and common mental health problems like depression. The first are partly due to the repetitive and sedentary work that is common in service economies, and the second are partly due to the equally common combination of high job demands and low job control.132

Although this shows that work may still pose a risk to health, being out of work currently poses an even greater risk to health. Due to the improvement of working conditions, the health promoting effects of having employment, e.g., as a result of the income, social contacts and/or self-fulfilment it provides, now generally outweigh its health risks.133

The Industrial Revolution not only increased workers’ exposure to hazardous conditions, but also created health risks for the general population. It led to large-scale changes in the physical environment, many of which hold potential risks for human health. The clearest example is the burning of fossil fuels, which allowed humans to multiply their energy use, but also led to massive air pollution. Also, rivers were dammed, water tables depleted, and fresh water sources polluted. Forests were felled, seas and oceans over-fished, and natural environments replaced by sprawling megacities. Humans themselves became a significant geological agent, by mining the Earth, moving rocks, and eroding the soil.134

As in the case of occupational exposures, this is too large a topic to be covered completely, and we will therefore focus on just one, but extremely important, aspect: long-term trends in air pollution and its health consequences. Currently, air pollution is the most important cause of health problems in Europe linked to the physical environment, accounting for around 2.5% of deaths and 1% of disability-adjusted life-years lost. It is much more important than other health risks in the physical environment, such as exposure to lead or other metals. It is also much more important than occupational exposures, but considerably less important, in terms of the associated disease burden, than behavioural risk factors such as smoking, excessive alcohol consumption and obesity.

The most obvious effects of air pollution are on the risks of respiratory diseases such as Chronic Obstructive Pulmonary Disease and respiratory infections, but it has increasingly become clear that inhalation of various air pollutants also increases the risk of cardiovascular diseases (ischaemic heart disease, cerebrovascular disease) and lung cancer.135

Air pollution is not a single entity. Awareness of its health effects was boosted by a number of disastrous events, such as the Great Smog in London in December 1952, when stagnant weather conditions led to a sharp increase in the concentration of air pollutants, causing thousands of deaths. This and other events not only led to first steps in air pollution control, but also to research into the components which actually produced the deleterious health effects. This has, over the years, shown that at least four components are important. These are sulphur dioxide (released during combustion of traditional fuels such as coal), nitrogen oxides (mainly produced by combustion of liquid fuels in motorized vehicles), ozone (produced by the action of sunlight on other air pollutants), and small airborne particles (either emitted directly during combustion of diesel and other fuels, or formed in the atmosphere from gaseous air pollutants).136

In Europe, recent trends for air pollution have been favourable, but these declines only occurred after many years of dramatically rising emissions. The very beginnings of man-made air pollution date back to the discovery, hundreds of thousands of years ago, that it was possible to control fire, and to use it for heating and cooking. Indoor air pollution, which already caused health problems in prehistoric times, was replaced by outdoor air pollution as the main risk to human health much more recently, when fossil fuels came to be burned on a massive scale. In the late 19th and early 20th centuries coal combustion in factories and dwellings caused most of the outdoor air pollution, but by the end of the 20th century road traffic had become the largest single source.

Some of these trends can be seen in Figure 21. In Europe as a whole, sulphur dioxide emissions peaked around 1980, but with important differences between West and East. In Northern and North-western Europe, emission reductions already started in the 1970s, when countries began to implement new technologies and switch from coal to gas (the burning of which releases fewer pollutants than the burning of coal), and were later bound by United Nations protocols that set ceilings for sulphur dioxide emissions. In Central-eastern and Eastern Europe, emissions continued to rise during the 1980s, and only started to fall in the 1990s, first as a result of the economic recession and the closing down of old industries, and later as a result of implementation of new technologies and reduction targets agreed as part of accession to the European Union.137

Figure 21

Trends in sulphur dioxide emissions in Europe, 1850–2005

Notes: Gg = gigagram = one million kilograms. “West” = Northern, North-western and Southern Europe. “Central” = Central-eastern and South-eastern Europe. “East” = Eastern Europe.
Source of data: S.J. Smith, et al. (2011). Anthropogenic sulfur dioxide emissions: 1850–2005. Atmospheric Chemistry and Physics, 11(3), 1101–16, Table 2.

Nitrogen oxide emissions peaked somewhat later, around 1990, with again an important role for United Nations protocols and European Union targets. The increase in motorized transport during the 1950s and 1960s led to a huge rise in emissions, which was eventually curbed by technological improvements to vehicles (such as improved combustion and fitting of catalytic converters). As in the case of sulphur dioxide emissions, the turning-point came earlier in the West than in the East. Secular trend data for ozone and particulate matter like those pictured in Figure 21 are not available, but concentrations of these pollutants in the atmosphere have probably followed a similar rise-and-fall pattern.138

The massive increases in air pollution during most of the 20th century must have had a negative impact on population health, but the exact magnitude of this effect is unknown. Above, we noted in passing that mortality from Chronic Obstructive Pulmonary Disease rose during the 1950s and 1960s in several European countries (Figure 15) – but that this rise was mainly due to smoking. Nevertheless, it is likely that increasing levels of air pollution also contributed to this increase, and also to the increase of cardiovascular diseases and lung cancer in the same period. Likewise, the substantial decline in air pollution in the last decades must have had a positive – but difficult to quantify – impact on population health as well.

While recent trends in air pollutants directly harming human health have thus been favourable, emissions of carbon dioxide have not yet been as effectively curbed by countermeasures, leaving the challenge of climate change mainly to future generations.

1

A general history of cholera can be found in Christopher Hamlin, Cholera: The Biography (Oxford etc.: Oxford University Press, 2009). Before cholera pandemics reached Europe, the term ‘cholera’ referred to all kinds of diarrhoeal diseases. After the first pandemic reached Europe this ‘new’ cholera was referred to as ‘Asiatic cholera’.

2

The spread of cholera during the pandemics has been documented in, e.g., Hirsch, Handbook, vol. 1, pp. 394–493, and in Reinhard Speck, “Cholera,” in Cambridge World History of Human Disease, ed. Kenneth F. Kiple (Cambridge etc.: Cambridge University Press, 1993).

3

K. David Patterson, “Cholera Diffusion in Russia, 1823–1923,” Social Science & Medicine 38, no. 9 (1994): 1171–91. In Western Europe, the numbers were not as large as the ensuing panic suggests: deaths from cholera accounted for no more than 1% of total mortality in the 1848–72 period in England and Wales; see John Charlton and Mike Murphy, The Health of Adult Britain 1841–1994. Decennial Supplement No. 12 (London: The Stationery Office, 1997), vol. i, p. 32.

4

Weaving many interesting threads together, the fascinating story of the last cholera epidemic in Hamburg has been told in Richard J. Evans, Death in Hamburg (Oxford: Clarendon Press, 1987).

5

A combination of civic pride and fear of economic damage led the authorities to deny that a cholera epidemic occurred, and to fabricate health statistics to hide the 2600 cholera deaths in Naples and 14,000 deaths in Italy as a whole; see Frank M. Snowden, Naples in the Time of Cholera, 1884–1911 (Cambridge etc.: Cambridge University Press, 1995). During the 20th century a seventh pandemic (1961–75), caused by the milder Vibrio El Tor, never reached Europe.

6

See Patrice Bourdelais, Épidémies Terrassées, Chapter 3, for a nuanced synthesis of the various factors (medical, cultural, political, …) contributing to the decline of cholera in Europe.

7

The classic history of anti-contagionism was written by Ackerknecht, “Anticontagionism.” There is a clear analogy between anti-contagionism in the 19th century and 20th century forms of ‘denialism’ such as the denial of the harmful effects of smoking, both supported by commercial interests; see Mark Harrison, Contagion (New Haven and London: Yale University Press, 2012).

8

The history of the scientific disputes and eventual consensus on the causes of cholera is more complex than can be explained here; see Jan P. Vandenbroucke, H.M. Eelkman Rooda, and Harm Beukers, “Who Made John Snow a Hero?,” American Journal of Epidemiology 133, no. 10 (1991): 967–73; George Davey Smith, “Commentary: Behind the Broad Street Pump,” International Journal of Epidemiology 31, no. 5 (2002): 920–32; Alfredo Morabia, “Epidemiologic Interactions, Complexity, and the Lonesome Death of Max Von Pettenkofer,” American Journal of Epidemiology 166, no. 11 (2007): 1233–38.

9

Hamlin has argued that the health problems highlighted in Chadwick’s Report on The Sanitary Condition of the Labouring Population of Great Britain could have been solved in other ways, e.g., by improving the economic situation of the lower classes. The choice of sanitation reflected a desire to stay away from politically less acceptable recommendations; see Christopher Hamlin, Public Health and Social Justice in the Age of Chadwick: Britain, 1800–1854 (Cambridge etc.: Cambridge University Press, 1998).

10

Richard J. Evans, “Epidemics and Revolutions: Cholera in Nineteenth-Century Europe,” Past & Present, no. 120 (1988): 123–46.

11

For a history of bacillary dysentery, see Hirsch, Handbook, vol. iii, pp. 284–370, and K. David Patterson, “Bacillary Dysentery,” in Cambridge World History of Human Disease, ed. Kenneth F. Kiple (Cambridge etc.: Cambridge University Press, 1993).

12

For a history of typhoid fever, see Hirsch, Handbook, vol. i, pp. 617–87, and Dale Smith, “Typhoid Fever,” in Cambridge World History of Human Disease, ed. Kenneth F. Kiple (Cambridge etc.: Cambridge University Press, 1993).

13

Long-term trends in mortality from intestinal infections have been analysed for England & Wales in Woods, Demography; Galbraith and McCormack, “Infection in England and Wales”; for the Netherlands in Judith H. Wolleswinkel-van den Bosch et al., “Cause-Specific Mortality Trends in the Netherlands, 1875–1992,” International Journal of Epidemiology 26, no. 4 (1997): 772–81; and for Spain in Vicente Pérez Moreda, David S. Reher, and Alberto Sanz Gimeno, La Conquista de la Salud (Madrid: Marcial Pons, Ediciones de Historia, 2015).

14

Data for London are in Mercer, Disease. Data for Paris are in Victor K. Kuagbenou and Jean-Noël Biraben, Introduction à l’Étude de la Mortalité par Cause de Décès à Paris dans la Première Moitié du XIXème Siècle (Paris: Institut National d’Études Démographiques, 1998). Data for The Hague are in Pieter R.D. Stokvis, De Wording van Modern Den Haag (Zwolle: Waanders, 1987). Data for Stockholm are in Britt-Inger Puranen, Tuberkulos: En Sjukdoms Förekomst och Dess Orsaker: Sverige 1750–1980, vol. 7, Umeå Studies in Economic History (Umeå: Umeå Universitet, 1984).

15

See Chapter 3 for a summary of this discussion.

16

Amy L. Fairchild and Gerald M. Oppenheimer, “Public Health Nihilism Vs Pragmatism,” American Journal of Public Health 88, no. 7 (1998): 1105–17.

17

In the 19th century, these symptoms gave rise to a certain ‘romanticization’ of tuberculosis. The usual pallor of tuberculosis patients inspired a new beauty ideal, and because artists who died of tuberculosis sometimes produced great works of art during their flare-ups, it was thought that tuberculosis sparked genius; see René J. Dubos and Jean Dubos, The White Plague (Boston: Little, Brown & Company, 1952), Chapter 5.

18

As a result, tuberculosis patients from the North who came to the Mediterranean to heal, were sometimes refused access to hotels, as happened to Frédéric Chopin (1810–1849) on Mallorca; see Dubos and Dubos, The White Plague, 31–32.

19

Sharon Levy, “The Evolution of Tuberculosis,” BioScience 62, no. 7 (2012): 625–29.

20

Long-term trends in mortality from tuberculosis in Sweden have been analysed in Puranen, Tuberkulos, 7; Britt-Inger Puranen, “Tuberculosis and the Decline of Mortality in Sweden,” in The Decline of Mortality in Europe, ed. Roger S. Schofield, David S. Reher, and Alain Bideau (Oxford etc.: Clarendon Press, 1991). These publications show that tuberculosis mortality in Finland rose to a peak in the late 1800s and then declined in the 20th century, though more slowly than in Sweden.

21

For McKeown’s analysis of the start of the decline in mortality from respiratory tuberculosis in England and Wales, see McKeown and Record, “Reasons for the Decline of Mortality in England and Wales.” For a detailed critique, see Szreter, “Social Intervention.”

22

As noted by McKeown, and later shown in interrupted time-series analyses in Finland (Hemminki and Paakkulainen, “Antibiotics”) and The Netherlands (Mackenbach and Looman, “Secular Trends”).

23

See Puranen, “Tuberculosis” for this argument. On changes in virulence as an explanation for the decline of tuberculosis, see Woods, Demography, 332–40.

24

E. Vynnycky and P.E. Fine, “Interpreting the Decline in Tuberculosis: The Role of Secular Trends in Effective Contact,” International Journal of Epidemiology 28, no. 2 (1999): 327–34.

25

War-related food shortages caused increases in tuberculosis mortality in, e.g., France during the Franco-Prussian War (1870–71), and in the Netherlands and Denmark during World War I (1914–1918); see Dubos and Dubos, The White Plague; Godias J. Drolet, “World War I and Tuberculosis. A Statistical Summary and Review,” American Journal of Public Health and the Nations Health 35, no. 7 (1945): 689–97.

26

Bernard Harris, “Public Health, Nutrition, and the Decline of Mortality,” Social History of Medicine 17, no. 3 (2004): 379–407. Periods of increasing and stagnating life expectancy also coincided with periods of increasing and stagnating height of British military recruits, lending further support to the idea that improved nutrition played a role in these secular trends (Floud et al., Changing Body, Chapter 4). Yet, the importance of nutrition should not be overrated because, as shown in Puranen, “Tuberculosis,” in 18th-century Sweden members of the royal family, who were well-fed, did not have lower tuberculosis mortality than their servants.

27

See Szreter, “Social Intervention” for a general overview. The decline of tuberculosis mortality in England & Wales in the second half of the 19th century was already studied by early 20th century scholars, who showed that segregation of poor people with tuberculosis in ‘poorhouses’ probably made an important contribution to mortality decline; see Arthur Newsholme, “An Inquiry into the Principal Causes of the Reduction in the Death-Rate from Phthisis During the Last Forty Years,” Journal of Hygiene (Camb.) 6 (1906): 304–84; Leonard G. Wilson, “The Historical Decline of Tuberculosis in Europe and America,” Journal of the History of Medicine and Allied Sciences 45, no. 3 (1990): 366–96.

28

The effect of the introduction of antibiotic treatment is not only apparent from an acceleration of mortality decline (Hemminki and Paakkulainen, “Antibiotics”; Mackenbach and Looman, “Secular Trends”) but also from a widening of the gap between incidence and mortality. Before the 1940s, trends for tuberculosis mortality and incidence ran in parallel, but after the 1940s mortality declined much faster than incidence; see Philippe Glaziou, Katherine Floyd, and Mario Raviglione, “Trends in Tuberculosis in the UK,” Thorax 73 (2018): 702–03.

29

Rising trends in tuberculosis incidence in the 1990s were signalled in Mario C. Raviglione et al., “Secular Trends of Tuberculosis in Western Europe,” Bulletin of the World Health Organization 71, no. 3–4 (1993): 297–306; Mario C. Raviglione et al., “Tuberculosis Trends in Eastern Europe and the Former USSR,” Tubercle and Lung Disease 75, no. 6 (1994): 400–16. For recent European trends, see European Centers for Disease Control, Tuberculosis Surveillance and Monitoring in Europe 2019 (Copenhagen: WHO Regional Office for Europe, 2019).

30

On the rise of extremely drug-resistant strains of Mycobacteria, see Mario C. Raviglione, “XDR-TB: Entering the Post-Antibiotic Era?,” International Journal of Tuberculosis and Lung Disease 10, no. 11 (2006): 1185–87. On the failure of tuberculosis control in the former Soviet Union, see Olga S. Toungoussova, Gunnar Bjune, and Dominique A. Caugant, “Epidemic of Tuberculosis in the Former Soviet Union,” Tuberculosis 86, no. 1 (2006): 1–10. On the role of immigration in the rise of tuberculosis incidence, see H.L. Rieder et al., “Tuberculosis Control in Europe and International Migration,” European Respiratory Journal 7, no. 8 (1994): 1545–53. On the role of social factors in the recent rise of tuberculosis, see Paul Farmer, “Social Inequalities and Emerging Infectious Diseases,” Emerging Infectious Diseases 2, no. 4 (1996): 259–69.

31

See European Centers for Disease Control, Tuberculosis Surveillance and Monitoring in Europe.

32

For trends in the 19th century we are skating on very thin ice, with some military statistics indicating a decline and other data indicating little systematic change in the population at large. Statistics on syphilis among young men enlisting for the army in the United Kingdom indicate a decline between 1870 and 1910 (O. Idsoe and Thorstein Guthe, “The Rise and Fall of the Treponematoses. i. Ecological Aspects and International Trends. In Venereal Syphilis,” British Journal of Venereal Diseases 43, no. 4 (1967): 227–43). However, it is unclear whether this can be extrapolated to the general British population, because reconstructions of trends in syphilis in Britain between the 1770s and 1911–12 found little change (Simon Szreter, “Treatment Rates for the Pox in Early Modern England,” Continuity and Change 32, no. 2 (2017): 183–223). My summary of trends in the 20th century is mainly based on compulsory notification data on syphilis in the general population as found in Idsoe and Guthe, “The Rise and Fall of the Treponematoses. i”; Inga Lind and Steen Hoffmann, “Recorded Gonorrhoea Rates in Denmark, 1900–2010,” bmj Open 5, no. 11 (2015): e008013; Thorstein Guthe, Worldwide Epidemiological Trends in Syphilis and Gonorrhea (Washington: World Health Organization, 1970); Annet Mooij, Geslachtsziekten en Besmettingsangst (Amsterdam: Boom, 1993). As recently as 1913–1916, around 8% (ranging between 4% in rural districts and 11% in London) of all men in their mid-thirties in England & Wales tested positive for syphilis; see Simon Szreter, “The Prevalence of Syphilis in England and Wales on the Eve of the Great War,” Social History of Medicine 27, no. 3 (2014): 508–29.

33

Syphilis also spreads from mother to child during pregnancy, leading to congenital syphilis, which used to be a frequent cause of congenital anomalies and infant mortality in Europe in the early 20th century; see Lori Newman et al., “Global Estimates of Syphilis in Pregnancy and Associated Adverse Outcomes,” PLoS Medicine 10, no. 2 (2013): e1001396. There is also an ‘endemic’ form of syphilis, spread by non-venereal routes and usually contracted in childhood. This was prevalent in poor, illiterate areas, as illustrated by Bosnia-Herzegovina where a campaign to eliminate the disease by serology surveys and penicillin treatment was necessary as recently as 1948–1955; see E.I. Grin and T. Guthe, “Evaluation of a Previous Mass Campaign against Endemic Syphilis in Bosnia and Herzegovina,” British Journal of Venereal Diseases 49, no. 1 (1973): 1–19; Thorstein Guthe and O. Idsoe, “The Rise and Fall of the Treponematoses. II. Endemic Treponematoses of Childhood,” British Journal of Venereal Diseases 44, no. 1 (1968): 35–48.

34

That syphilis was part of the ‘Columbian exchange’, and did not exist in Europe before the discovery of America, is supported by paleoanthropological findings and contemporary reports (Alfred W. Crosby, “The Early History of Syphilis: A Reappraisal,” American Anthropologist 71, no. 2 (1969): 218–27; Thomas A. Cockburn, “The Origin of the Treponematoses,” Bulletin of the World Health Organization 24, no. 2 (1961): 221–28), and by molecular-genetic studies (Kristin N. Harper et al., “On the Origin of the Treponematoses: A Phylogenetic Approach,” PLoS Neglected Tropical Diseases 2, no. 1 (2008): e148). On the change in character of the disease around the year 1500, see Eugenia Tognotti, “The Rise and Fall of Syphilis in Renaissance Europe,” Journal of Medical Humanities 30, no. 2 (2009): 99–113.

35

See, for example, Allan M. Brandt, No Magic Bullet (Oxford etc.: Oxford University Press, 1987); Mooij, Geslachtsziekten. The reverse is true as well: STDs have often been used for the promotion of conservative or liberal ideas about sexuality, the family, and gender roles.

36

These stages in the response to syphilis have been proposed by Owsei Temkin, “Zur Geschichte von ‘Moral und Syphilis,’” Archiv für Geschichte der Medizin 19, no. 4 (1927): 331–48.

37

For the history of prostitution in the 19th century, see Alain Corbin, Les Filles de Noce (Paris: Flammarion, 1982). Parent-Duchâtelet’s work gives a shocking picture of the working conditions of Parisian prostitutes: Alexandre Parent-Duchâtelet, De la Prostitution dans la Ville de Paris, Considérée sous le Rapport de l’Hygiène Publique, de la Morale et de l’Administration (Paris: J.-B. Baillière et fils, 1836).

38

Treatment options continued to expand in the 1920s and 1930s, with neo-salvarsan, bismuth, and fever therapies. For the history of syphilis control measures in the 20th century in a few European countries, see Tana Green, M.D. Talbot, and R.S. Morton, “The Control of Syphilis, a Contemporary Problem: A Historical Perspective,” Sexually Transmitted Infections 77, no. 3 (2001): 214–17; Mooij, Geslachtsziekten; Roger Davidson, Dangerous Liaisons (Amsterdam & Atlanta: Rodopi, 2000).

39

See Suppl. Figure 10. The acceleration of decline in the late 1940s can easily be seen when the mortality rates are plotted on a logarithmic scale; see Mackenbach and Looman, “Secular Trends.”

40

Guthe, Worldwide Epidemiological Trends; Idsoe and Guthe, “Rise and Fall of the Treponematoses. i.” That syphilis and other STDs could now be treated with antibiotics may also have played a role. On the rise of ‘sexual enthusiasm’, see Paul Robinson, The Modernization of Sex (Oxford etc.: Harper & Row, 1976).

41

Irena Jakopanec et al., “Syphilis Epidemiology in Norway, 1992–2008,” BMC Infectious Diseases 10, no. 1 (2010): 105. The rise of syphilis and other STDs since the late 1950s, the loosening of sexual norms during the 1960s, and the ‘informalization of manners’ more generally, have by some been interpreted as a reversal of the ‘civilising process’; see Cas Wouters, “Formalization and Informalization: Changing Tension Balances in Civilizing Processes,” Theory, Culture & Society 3, no. 2 (1986): 1–18. A similar hypothesis has been proposed for the rise of homicide (see Chapter 4).

42

See Woods, Demography, Chapter 7, for an analysis of mortality from these diseases in Victorian Britain. Woods’ analysis shows that the causes of childhood mortality were different from the causes of infant mortality, and that – in Britain – the start of the decline of childhood mortality preceded that of infant mortality.

43

The distinction between these diseases was only gradually made. For example, the term ‘croup’, used in early cause-of-death statistics, referred to a laryngitis that could be due to diphtheria, whooping cough, scarlet fever, measles, and other infectious diseases; see Hirsch, Handbook, vol. iii, pp. 50–51.

44

For an analysis of the periodicity of childhood epidemic diseases, see Roy M. Anderson, Bryan T. Grenfell, and R.M. May, “Oscillatory Fluctuations in the Incidence of Infectious Disease and the Impact of Vaccination,” Epidemiology & Infection 93, no. 3 (1984): 587–608. The length of the inter-epidemic period depends on both biological and social factors; see C.J. Duncan, S.R. Duncan, and Susan Scott, “The Dynamics of Measles Epidemics,” Theoretical Population Biology 52, no. 2 (1997): 155–63.

45

For an analysis of the effect of lower fertility on average age at infection and case fatality, see Randall Reves, “Declining Fertility in England and Wales as a Major Cause of the Twentieth Century Decline in Mortality,” American Journal of Epidemiology 122, no. 1 (1985): 112–26. Lower infection pressure may also have directly reduced case fatality; see Peter Aaby et al., “Severe Measles in Sunderland, 1885,” International Journal of Epidemiology 15, no. 1 (1986): 101–07.

46

For a comprehensive analysis of the factors contributing to childhood mortality decline in Britain, see Woods, Demography, Chapters 7 and 8, and Hardy, Epidemic Streets, Chapters 1 to 4. Although children of better educated mothers had lower risks of dying from infectious diseases, schooling itself may have helped the spread of infectious diseases. The need to control infection in schools was one of the reasons why some countries tried to institute compulsory notification by teachers; see Mooney, Intrusive Interventions, Chapter 4.

47

There are many different strains of haemolytic streptococcus, which differ in virulence, and whose prevalence has varied over time. For an introduction to the history of scarlet fever, see Anne Hardy, “Scarlet Fever,” in Cambridge World History of Human Disease, ed. Kenneth F. Kiple (Cambridge etc.: Cambridge University Press, 1993) and Francis B. Smith, The People’s Health, 1830–1910 (Canberra: Australian National University Press, 1979), pp. 136–42. The rise and fall of scarlet fever in England during the 19th century has been shown in Galbraith and McCormack, “Infection in England and Wales”; Alex Mercer, “Relative Trends in Mortality from Related Respiratory and Airborne Infectious Diseases,” Population Studies 40, no. 1 (1986): 129–45. Mercer has argued, on the basis of a correlation between declines of scarlet fever and of dysentery and typhoid in Britain, that the apparent decline in streptococcal virulence was due to improvements in water supply and sewerage and the greater resistance against respiratory infections that this produced (Mercer, Infections, Chapter 8).

48

For the effect of sulphonamides and antibiotics, see Galbraith and McCormack, “Infection in England and Wales”; Mackenbach and Looman, “Secular Trends.” The decline of acute rheumatic fever has been particularly spectacular; see Edward F. Bland, “Declining Severity of Rheumatic Fever: A Comparative Study of the Past Four Decades,” New England Journal of Medicine 262, no. 12 (1960): 597–99; Leon Gordis, “The Virtual Disappearance of Rheumatic Fever in the United States,” Circulation 72, no. 6 (1985): 1155–62.

49

For a general history of measles, see Robert J. Kim-Farley, “Measles,” in Cambridge World History of Human Disease, ed. Kenneth F. Kiple (Cambridge etc.: Cambridge University Press, 1993) and Smith, The People’s Health, pp. 142–48.

50

The epidemic on the Faeroe islands was famously analysed by Danish physiologist Peter Panum (1820–1885); see Peter L. Panum, Iagttagelser, Anstillede under Maeslinge-Epidemien Paa Faeroerne I Aaret 1846 [Observations Made During the Epidemic of Measles on the Faroe Islands in the Year 1846] (Copenhagen: Bibliothek for Laeger, 1847). For the likely connection with schooling, see Smith, The People’s Health, pp. 142–48.

51

An in-depth analysis of the reasons for the decline of measles mortality in England and Wales can be found in Woods, Demography, pp. 319–23. For the effect of measles vaccine on measles incidence see, e.g., Heikki Peltola et al., “Measles, Mumps, and Rubella in Finland,” Lancet Infectious Diseases 8, no. 12 (2008): 796–803.

52

For a general history of whooping cough, see Hardy, “Whooping Cough,” and Smith, The People’s Health, pp. 104–11. In addition to the factors mentioned in the main text, a decline of rickets, which reduced the child’s stamina in coping with whooping cough, may also have played a role; see Hardy, Epidemic Streets, Chapter 1. For long-term trends in whooping cough in England and Wales, see Galbraith and McCormack, “Infection in England and Wales.”

53

On the causes of the decline in case fatality in the first half of the 20th century, see Edward A. Mortimer Jr and Paul K. Jones, “An Evaluation of Pertussis Vaccine,” Reviews of Infectious Diseases 1, no. 6 (1979): 927–34. See Galbraith and McCormack, “Infection in England and Wales” for the effect of vaccination.

54

On this ‘vaccine scare’ and its effects in Britain, see Ingrid Wolfe, “Child Health,” in Successes and Failures of Health Policy in Europe, ed. Johan P. Mackenbach and Martin McKee (Maidenhead: Open University Press, 2013). On the decrease in vaccine efficacy and its possible causes, see, e.g., Douglas W. Jackson and Pejman Rohani, “Perplexities of Pertussis,” Epidemiology & Infection 142, no. 4 (2014): 672–84.

55

For a general history of diphtheria, see Ann Carmichael, “Diphtheria,” in Cambridge World History of Human Disease, ed. K.F. Kiple (Cambridge etc.: Cambridge University Press, 1993) and Smith, The People’s Health, pp. 148–152. The emergence of diphtheria has been reconstructed in Hirsch, Handbook, vol. iii, p. 73 ff.

56

For the development of antitoxin and its effects, see F.M. Lévy, “The Fiftieth Anniversary of Diphtheria and Tetanus Immunization,” Preventive Medicine 4, no. 2 (1975): 226–37; Mercer, Disease; Carmichael, “Diphtheria.” Others have noted, however, that the introduction of antitoxin in the 1890s coincided with a shift in virulence which also contributed to the decline in diphtheria mortality; see Hardy, Epidemic Streets; Thorvald Madsen, “Diphtheria in Denmark from 23,695 to 1 Case: Post or Propter? I. Serum Therapy,” Danish Medical Bulletin 3, no. 4 (1956): 112–15.

57

On the efficacy of diphtheria vaccination, see Lévy, “The Fiftieth Anniversary of Diphtheria and Tetanus Immunization.” For the early spread of vaccination, see W.T. Russell, “Epidemiology of Diphtheria During the Last Forty Years,” (London: His Majesty’s Stationery Office, 1943); Jane Lewis, “The Prevention of Diphtheria in Canada and Britain 1914–1945,” Journal of Social History 20, no. 1 (1986): 163–76, on the United Kingdom, and Dick Hoogendoorn, Over de Diphtherie in Nederland (Zwolle: Tijl, 1948) on the Netherlands. For the effect of mass vaccination, see Galbraith and McCormack, “Infection in England and Wales” on England and Wales, and Maarten van Wijhe et al., “Quantifying the Impact of Mass Vaccination Programmes on Notified Cases in the Netherlands,” Epidemiology & Infection 146, no. 6 (2018): 716–22 on the Netherlands. On the introduction of mass vaccination in Portugal, see M.C. Gomes, J.J. Gomes, and A.C. Paulo, “Diphtheria, Pertussis, and Measles in Portugal before and after Mass Vaccination,” European Journal of Epidemiology 15, no. 9 (1999): 791–98.

58

Gaylord W. Anderson, “Foreign and Domestic Trends in Diphtheria,” American Journal of Public Health and the Nations Health 37, no. 1 (1947): 1–6; Bo Vahlquist, “Studies on Diphtheria. 1. The Decrease of Natural Antitoxic Immunity against Diphtheria,” Acta Paediatrica 35, no. 1–2 (1948): 117–29.

59

Wolfe, “Child Health”; Charles R. Vitek and Melinda Wharton, “Diphtheria in the Former Soviet Union: Reemergence of a Pandemic Disease,” Emerging Infectious Diseases 4, no. 4 (1998): 539–50.

60

On Latvia, see Aija Griskevica et al., “Diphtheria in Latvia, 1986–1996,” Journal of Infectious Diseases 181, no. Supplement 1 (2000): S60–S64. On countermeasures and their effect, see Sieghart Dittmann et al., “Successful Control of Epidemic Diphtheria in the States of the Former USSR,” Journal of Infectious Diseases 181, no. Supplement 1 (2000): S10–S22.

61

For trends in mortality from these conditions in other countries, see, e.g., Galbraith and McCormack, “Infection in England and Wales”; Clare Griffiths and Anita Brock, “Twentieth Century Mortality Trends in England and Wales,” Health Statistics Quarterly 18, no. 2 (2003): 5–17. Trends in the Netherlands have been analysed in Wolleswinkel-van den Bosch et al., “Cause-Specific Mortality.”

62

Trends in COPD mortality in European countries have been complex, largely reflecting the diffusion of smoking and smoking cessation. There is also an association with poverty in childhood, perhaps via higher exposure to acute respiratory infections. For international overviews, see Peter G.J. Burney et al., “Global and Regional Trends in COPD Mortality, 1990–2010,” European Respiratory Journal 45, no. 5 (2015): 1239–47; Alan D. Lopez et al., “Chronic Obstructive Pulmonary Disease: Current Burden and Future Projections,” European Respiratory Journal 27, no. 2 (2006): 397–412.

63

The rise in mortality from laryngitis and otitis coincided with a rise in mortality from scarlet fever, puerperal fever, and several other streptococcal infections, and has been attributed to a rise in virulence of the streptococcus (see section on scarlet fever above).

64

Pneumonia is one of the oldest diagnosed diseases, whose symptoms and physical signs were well known to doctors long before the discovery of its bacteriological origins (see Hirsch, Handbook, vol. iii, Chapter vi). For an introduction to the history of pneumonia, see Jacalyn Duffin, “Pneumonia,” in Cambridge World History of Human Disease, ed. Kenneth F. Kiple (Cambridge etc.: Cambridge University Press, 1993).

65

See Mackenbach and Looman, “Secular Trends” for an interrupted time-series analysis of mortality from pneumonia and other acute respiratory infections in the Netherlands. See Griffiths and Brock, “Twentieth Century Mortality Trends in England and Wales”; Galbraith and McCormack, “Infection in England and Wales” for pneumonia mortality trends in England.

66

For example, in the Netherlands age-standardized pneumonia mortality was 262 in 1900, 123 in 1930 and 28 in 1960, implying a relative decline between 1930 and 1960 of 77% and an absolute decline of 95 per 100,000, which is 41% of the total decline between 1900 and 1960. In Italy, age-standardized pneumonia mortality was 425 in 1900, 376 in 1930 and 62 in 1960, implying a relative decline between 1930 and 1960 of 84% and an absolute decline of 314 per 100,000, which is 87% of the total decline between 1900 and 1960. Data from Alderson, International Mortality Statistics, table 103.
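These percentages follow directly from the rates themselves: the relative decline over an interval is the fall in the rate divided by the rate at the start of the interval, and the share of the total decline is the absolute fall in that interval divided by the fall over the whole period 1900–1960. A minimal worked check using the Dutch figures cited above (the notation is ours, not the source’s):

\[
\frac{123-28}{123} \approx 0.77 \quad \text{(relative decline, 1930–1960)}, \qquad
\frac{123-28}{262-28} = \frac{95}{234} \approx 0.41 \quad \text{(share of the 1900–1960 decline)}.
\]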

67

For the deceleration of pneumonia decline, see Griffiths and Brock, “Twentieth Century Mortality Trends in England and Wales”; Galbraith and McCormack, “Infection in England and Wales.” For the role of pneumonia as a cause of death in other conditions, see, e.g., Eric M. Mortensen et al., “Causes of Death for Patients with Community-Acquired Pneumonia,” Archives of Internal Medicine 162, no. 9 (2002): 1059–64. For the recent increase in community-acquired pneumonia, see, e.g., Mette Søgaard et al., “Nationwide Trends in Pneumonia Hospitalization Rates and Mortality, Denmark 1997–2011,” Respiratory Medicine 108, no. 8 (2014): 1214–22; A.B. van Gageldonk-Lafeber et al., “Time Trends in Primary-Care Morbidity, Hospitalization and Mortality Due to Pneumonia,” Epidemiology & Infection 137, no. 10 (2009): 1472–78.

68

Systematic reviews have concluded that effectiveness of the vaccine has not been proven; see, e.g., Anke Huss et al., “Efficacy of Pneumococcal Vaccination in Adults: A Meta-Analysis,” Canadian Medical Association Journal 180, no. 1 (2009): 48–58.

69

For a general history of influenza, dating the first influenza epidemics in Europe to the 16th century, see Alfred W. Crosby, “Influenza,” in Cambridge World History of Human Disease, ed. Kenneth F. Kiple (Cambridge etc.: Cambridge University Press, 1993). For contemporary theories on how changes in the influenza virus occur, see Christopher W. Potter, “A History of Influenza,” Journal of Applied Microbiology 91, no. 4 (2001): 572–79; Martha I. Nelson and Michael Worobey, “Origins of the 1918 Pandemic: Revisiting the Swine ‘Mixing Vessel’ Hypothesis,” American Journal of Epidemiology 187, no. 12 (2018): 2498–502. There are three types of human influenza virus: A, B, C. Influenza A, which naturally occurs in wild aquatic birds, has been involved in pandemics. Important serotypes include H1N1 (‘Spanish flu’ in 1918, ‘Swine flu’ in 2009), H2N2 (‘Asian flu’ in 1957), H3N2 (‘Hong Kong flu’ in 1968), and H5N1 (‘Bird flu’ in 2004).

70

For a history of influenza pandemics in the 18th and 19th centuries, see K. David Patterson, Pandemic Influenza, 1700–1900 (Totowa: Rowman & Littlefield 1986).

71

It was called the ‘Spanish flu’ because contemporaries mistakenly thought it originated in Spain. On the history of ‘Spanish flu’, see John M. Barry, The Great Influenza (Harmondsworth: Penguin, 2005); Cécile Viboud and Justin Lessler, “The 1918 Influenza Pandemic: Looking Back, Looking Forward,” American Journal of Epidemiology 187, no. 12 (2018): 2493–97. For the death toll, see Niall P.A.S. Johnson and Juergen Mueller, “Updating the Accounts: Global Mortality of the 1918–1920 ‘Spanish’ Influenza Pandemic,” Bulletin of the History of Medicine 76, no. 1 (2002): 105–15.

72

On lower excess mortality from influenza in the US, see Stephen S. Morse, “Pandemic Influenza: Studying the Lessons of History,” Proceedings of the National Academy of Sciences 104, no. 18 (2007): 7313–14. On a more fatalistic attitude in Britain, see Sandra M. Tomkins, “The Failure of Expertise: Public Health Policy in Britain During the 1918–19 Influenza Epidemic,” Social History of Medicine 5, no. 3 (1992): 435–54. For an analysis of between-country differences in excess deaths, see S. Ansart et al., “Mortality Burden of the 1918–1919 Influenza Pandemic in Europe,” Influenza and Other Respiratory Viruses 3, no. 3 (2009): 99–106. Aspects of the high death toll in Portugal were analysed in B. Nunes et al., “The 1918–1919 Influenza Pandemic in Portugal,” American Journal of Epidemiology 187, no. 12 (2018): 2541–49.

73

For analyses of recent influenza trends, see G.C. Donaldson and W.R. Keatinge, “Excess Winter Mortality: Influenza or Cold Stress?,” British Medical Journal 324, no. 7329 (2002): 89–90; Martin W. Brinkhof et al., “Influenza-Attributable Mortality among the Elderly in Switzerland,” Swiss Medical Weekly 136, no. 19–20 (2006): 302–09. Officially registered deaths from influenza underestimate the total death toll.

74

For the history of influenza vaccine development, see I. Barberis et al., “History and Evolution of Influenza Control through Vaccination,” Journal of Preventive Medicine and Hygiene 57, no. 3 (2016): e115–e20. Evidence on the effectiveness of influenza vaccination among the elderly is not entirely convincing; see Alexander Domnich et al., “Effectiveness of MF59-Adjuvanted Seasonal Influenza Vaccine in the Elderly,” Vaccine 35, no. 4 (2017): 513–20. Introduction of influenza vaccination coincided with declines in influenza mortality in the Netherlands (Angelique G. Jansen et al., “Decline in Influenza-Associated Mortality among Dutch Elderly,” Vaccine 26, no. 44 (2008): 5567–74), but not in Italy (Caterina Rizzo et al., “Influenza-Related Mortality in the Italian Elderly,” Vaccine 24, no. 42–43 (2006): 6468–75).

75

‘Swine flu’ was originally called ‘Mexican flu’. For a model-based estimate of the number of excess deaths, see Fatimah S. Dawood et al., “Estimated Global Mortality Associated with the First 12 Months of 2009 Pandemic Influenza a H1N1 Virus Circulation,” Lancet Infectious Diseases 12, no. 9 (2012): 687–95.

76

Even then, because death from other causes was so frequent, maternal mortality did not account for more than 10% of total mortality among women of child-bearing age. Yet, because women had many children, the life-time risk could be in the order of 10%; see Irvine Loudon, Death in Childbirth (Oxford etc.: Oxford University Press, 1992).

77

Historical data for Sweden are in Loudon, Death in Childbirth, appendix 6. Data for England are in E. Anthony Wrigley et al., English Population History from Family Reconstitution 1580–1837 (Cambridge etc.: Cambridge University Press, 1997), table 6.21. Data for regions in France are in Hector Gutierrez and Jacques Houdaille, “La Mortalité Maternelle en France au XVIIIe Siècle,” Population (French Edition) 36, no. 6 (1983): 975–94, tables 1 and 2.

78

See Ulf Högberg, Stig Wall, and Göran Broström, “The Impact of Early Medical Technology on Maternal Mortality in Late 19th Century Sweden,” International Journal of Gynecology & Obstetrics 24, no. 4 (1986): 251–61. This paper cites a maternal mortality rate of 87 per 10,000 births in 1975–1982 among women from a religious group in the U.S. who avoided antenatal and obstetric care.

79

The most extensive analysis, relying on time-trend analyses and in-depth comparisons between countries, is Loudon, Death in Childbirth. It has been summarized in, e.g., Irvine Loudon, “Maternal Mortality: 1880–1950. Some Regional and International Comparisons,” Social History of Medicine 1, no. 2 (1988): 183–228; Irvine Loudon, “The Transformation of Maternal Mortality,” British Medical Journal 305, no. 6868 (Dec 19–26 1992): 1557–60. The Swedish experience has been analysed in several papers: Ulf Högberg, “The Decline in Maternal Mortality in Sweden: The Role of Community Midwifery,” American Journal of Public Health 94, no. 8 (2004): 1312–20; Högberg et al., “The Impact of Early Medical Technology on Maternal Mortality.” A good general overview can also be found in Vincent De Brouwere, “The Comparative Study of Maternal Mortality over Time,” Social History of Medicine 20, no. 3 (2007): 541–62.

80

Högberg, “The Decline in Maternal Mortality in Sweden”; Mart J. van Lieburg and Hilary Marland, “Midwife Regulation, Education, and Practice in the Netherlands During the Nineteenth Century,” Medical History 33, no. 3 (1989): 296–317; Loudon, “The Transformation of Maternal Mortality.”

81

For the history of puerperal fever from classical times to the late 19th century, see Hirsch, Handbook, vol. ii, pp. 416–75. The history of the discovery of the aetiology of childbed fever and the causes of its final retreat have been described in Irvine Loudon, The Tragedy of Childbed Fever (Oxford etc.: Oxford University Press, 2000). For the causes of the stagnation of the decline from puerperal fever in England, see Loudon, Death in Childbirth.

82

See Loudon, Death in Childbirth, Chapter 15.

83

Another consequence was a dramatic rise in the number of unwanted children, many of whom ended up in orphanages. See Charlotte Hord et al., “Reproductive Health in Romania: Reversing the Ceausescu Legacy,” Studies in Family Planning 22, no. 4 (1991): 231–40; Patricia Stephenson et al., “Commentary: The Public Health Consequences of Restricted Induced Abortion – Lessons from Romania,” American Journal of Public Health 82, no. 10 (1992): 1328–31.

84

Katherine Wildman and Marie-Helene Bouvier-Colle, “Maternal Mortality as an Indicator of Obstetric Care in Europe,” British Journal of Obstetrics and Gynaecology 111, no. 2 (2004): 164–69.

85

See Suppl. Figure 2.

86

In many European regions, only a small minority of infants were breast-fed, despite the fact that infant mortality was much higher among wet-nursed or artificially fed infants. For a history of infant feeding practices from 1500 to 1800, see Valerie Fildes, Breasts, Bottles and Babies (Edinburgh: Edinburgh University Press, 1986). For short histories of infant feeding in the 18th and 19th centuries, see Emily E. Stevens, Thelma E. Patrick, and Rita Pickler, “A History of Infant Feeding,” Journal of Perinatal Education 18, no. 2 (2009): 32–39; Ian G. Wickes, “A History of Infant Feeding: Part III: Eighteenth and Nineteenth Century Writers,” Archives of Disease in Childhood 28, no. 140 (1953): 332–40.

87

There is little information on the causes from which infants died in their first weeks of life in the 19th century. Prematurity, low birth weight and congenital malformations were probably already important; see Alice Reid and Eilidh Garrett, “Doctors and the Causes of Neonatal Death in Scotland in the Second Half of the Nineteenth Century,” Annales de Démographie Historique, no. 1 (2012): 149–79.

88

If we combine still-births with all deaths in the first year of life, and calculate proportions of all deaths occurring between 28 weeks of gestation and the first birth-day, we find that around 1870 still-births accounted for one-quarter, neonatal deaths for one-quarter, and post-neonatal deaths for one-half. Nowadays, these proportions are one-half, one-third, and one-sixth, showing that the share of still-births has doubled over time (data from Norway presented in Robert Woods, Death before Birth (Oxford etc.: Oxford University Press, 2009), fig. 4.2).

89

For sources of under-registration and registration differences between European countries, see G. Gourbin and Godelieve Masuy-Stroobant, “Registration of Vital Data: Are Live Births and Stillbirths Comparable All over Europe?,” Bulletin of the World Health Organization 73, no. 4 (1995): 449–60; Wilco C. Graafmans et al., “Comparability of Published Perinatal Mortality Rates in Western Europe,” BJOG: An International Journal of Obstetrics & Gynaecology 108, no. 12 (2001): 1237–45. Registration criteria have also changed over time, reflecting the increasing possibilities of keeping babies with a short gestational age alive. More comparable data on perinatal mortality in European countries were collected in EURO-Peristat Project, European Perinatal Health Report (n.p.: EURO-Peristat Project, 2018).

90

For an analysis of trends in infant mortality in the five Nordic countries, see Sören Edvinsson, Ólöf Garðarsdóttir, and Gunnar Thorvaldsen, “Infant Mortality in the Nordic Countries, 1780–1930,” Continuity and Change 23, no. 3 (2008): 457–85. Finland also experienced an early decline, but infant mortality remained higher than in Sweden and Norway; see Oiva Turpeinen, “Les Causes des Fluctuations Annuelles du Taux de Mortalité Finlandais entre 1750 et 1806,” Annales de Démographie Historique (1980): 287–96. Infant mortality in Iceland only started to decline around 1870, after breast-feeding had been promoted; see Loftur Guttormsson and Ólöf Garðarsdóttir, “The Development of Infant Mortality in Iceland, 1800–1920,” Hygiea Internationalis 3, no. 1 (2002): 151–76. For an individual-level study showing the effect of public health measures and trained midwives on infant mortality decline, see Volha Lazuka, Luciana Quaranta, and Tommy Bengtsson, “Fighting Infectious Disease: Evidence from Sweden 1870–1940,” Population and Development Review 42, no. 1 (2016): 27–52.

91

The causes of the late 18th century decline in France have been discussed in Marie-France Morel, “Les Soins Prodigués aux Enfants: Influence des Innovations Médicales et des Institutions Médicalisées (1750–1914),” Annales de Démographie Historique (1989): 157–81.

92

On advances in obstetrics and their (modest) impact on infant mortality in the 18th century in England, see Robert Woods and Chris Galley, Mrs Stone & Dr Smellie: Eighteenth-Century Midwives and Their Patients (Liverpool: Liverpool University Press, 2014). On the causes of stagnation and decline of infant mortality in England & Wales in the late 19th and early 20th centuries, see Robert Woods, Patricia A. Watterson, and John H. Woodward, “The Causes of Rapid Infant Mortality Decline in England and Wales, 1861–1921. Part I,” Population Studies 42, no. 3 (1988): 343–66; Robert Woods, Patricia A. Watterson, and John H. Woodward, “The Causes of Rapid Infant Mortality Decline in England and Wales, 1861–1921. Part II,” Population Studies 43, no. 1 (1989): 113–32.

93

The timing of the onset of fertility decline in European countries has been studied in the Princeton European Fertility Project. This concluded that the decline started simultaneously throughout Europe in the 1870s, suggesting that cultural diffusion of new ideas about fertility control was the main explanation; see Ansley J. Coale and Susan C. Watkins, eds., The Decline of Fertility in Europe (Princeton: Princeton University Press, 1986). Other theories have emphasized economic determinants, such as a “reversal of intra-familial wealth flows” due to the increasing importance of education which made children expensive instead of a source of income; see Karen Oppenheim Mason, “Explaining Fertility Transitions,” Demography 34, no. 4 (1997): 443–54; John C. Caldwell, Theory of Fertility Decline (New York: Academic, 1982).

94

Infant mortality was high in the Netherlands due to a combination of low breast-feeding and contaminated surface water. The onset of infant mortality decline has been attributed to cultural factors, such as the spread of modern hygienic practices and breast-feeding; see Evert W. Hofstee, De Demografische Ontwikkeling van Nederland in de Eerste Helft van de 19de Eeuw (Deventer: Van Loghum Slaterus, 1978); Wolleswinkel-van den Bosch et al., “Determinants of Infant and Early Childhood Mortality Levels.” For timing of decline of infant mortality in Germany, and the role of sanitary improvements in this decline, see Jorg Vogele, “Urbanization, Infant Mortality and Public Health in Imperial Germany,” in The Decline of Infant and Child Mortality, ed. Carlo A. Corsini and Pier P. Viazzo (Dordrecht: Martinus Nijhoff, 1997). For decline of infant mortality in Austria, see Josef Kytir, Christian Köck, and Rainer Münz, “Historical Regional Patterns of Infant Mortality in Austria,” European Journal of Population/Revue européenne de Démographie 11, no. 3 (1995): 243–59.

95

For an overview of the timing of infant mortality decline in different European countries, see Godelieve Masuy-Stroobant, “Infant Health and Infant Mortality in Europe,” in The Decline of Infant and Child Mortality, ed. C.A. Corsini and P.P. Viazzo (Dordrecht: Martinus Nijhoff, 1997). For in-depth studies of infant mortality decline and its causes in France, Central-Spain and Italy, see other chapters in the same book.

96

Catherine Rollet-Echalier, “La Politique à l'Egard de la Petite Enfance sous la IIIe République,” Population (French Edition) 46, no. 2 (1991): 349–58.

97

For a review of the contribution of technological innovations see Steven L. Gortmaker and Paul H. Wise, “The First Injustice: Socioeconomic Disparities, Health Services Technology, and Infant Mortality,” Annual Review of Sociology 23, no. 1 (1997): 147–70.

98

For under-registration of infant mortality in the Soviet period, see Ellen Jones and Fred W. Grupp, “Infant Mortality Trends in the Soviet Union,” Population and Development Review 9, no. 2 (1983): 213–46. Infant mortality rose in the 1970s and 1980s, due to environmental pollution, smoking and drinking in pregnancy, and changes in parity distribution; see Jones and Grupp, “Infant Mortality Trends.” For under-registration in the post-Soviet period, see Nadezhda Aleshina and Gerry Redmond, “How High Is Infant Mortality in Central and Eastern Europe and the Commonwealth of Independent States?,” Population Studies 59, no. 1 (2005): 39–54.

99

The history of still-birth registration in the Nordic countries has been described in Woods, Death before Birth, Chapter 4. Midwives had to report still-births to the parish priest. As is clear from Figure 19, Sweden had lower still-birth rates in the 19th century than the few other countries for which data are available. This was probably due to birth assistance by trained midwives; see Tobias Andersson, U. Hogberg, and S. Bergstrom, “Community-Based Prevention of Perinatal Deaths: Lessons from Nineteenth-Century Sweden,” International Journal of Epidemiology 29, no. 3 (2000): 542–48.

100

On trends in perinatal mortality, see Signild Vallgårda, “Trends in Perinatal Death Rates in Denmark and Sweden, 1915–1990,” Paediatric and Perinatal Epidemiology 9, no. 2 (1995): 201–18; Geoffrey Chamberlain, “ABC of Antenatal Care. Vital Statistics of Birth,” British Medical Journal 303, no. 6795 (1991): 178–81; J.H. de Haas-Posthuma and J.H. de Haas, Infant Loss in the Netherlands (Washington: National Center for Health Statistics, 1968). For recent trends, see Jennifer Zeitlin et al., “Declines in Stillbirth and Neonatal Mortality Rates in Europe between 2004 and 2010,” Journal of Epidemiology and Community Health 70, no. 6 (2016): 609–15.

101

An overview of risk factors for still-birth can be found in Joy E. Lawn et al., “Stillbirths: Rates, Risk Factors, and Acceleration Towards 2030,” Lancet 387, no. 10018 (2016): 587–603. See Woods, Death before Birth for an overview of the causes of the decline in still-births. See Anne Løkke, “The Antibiotic Transformation of Danish Obstetrics,” Annales de Démographie Historique, no. 1 (2012): 205–24, for an analysis of how antibiotics transformed obstetric care, with direct and indirect benefits for mother and child.

102

On the contribution of antenatal and perinatal care to declines in perinatal mortality, see Jennifer Zeitlin, Beatrice Blondel, and Babak Khoshnood, “Fertility, Pregnancy and Childbirth,” in Successes and Failures of Health Policy in Europe, ed. Johan P. Mackenbach and Martin McKee (Maidenhead: Open University Press, 2013).

103

For an analysis of trends in prematurity in European countries and an overview of factors involved in rising rates, see Jennifer Zeitlin et al., “Preterm Birth Time Trends in Europe: A Study of 19 Countries,” BJOG: An International Journal of Obstetrics & Gynaecology 120, no. 11 (2013): 1356–65.

104

It is not known what the precise change was. It may reflect an increase in the completeness of the registration of still-births, as a result of harmonization with who criteria, but even then the peaks in Russia and Ukraine look improbably high. Perhaps early neonatal deaths were registered as still-births to create a flattering picture of the infant mortality rate; see Aleshina and Redmond, “How High Is Infant Mortality.”

105

For a general history of pellagra, see Elisabeth W. Etheridge, “Pellagra,” in Cambridge World History of Human Disease, ed. Kenneth F. Kiple (Cambridge etc.: Cambridge University Press, 1993). For the social history of pellagra, see Daphne A. Roe, A Plague of Corn (London: Cornell University Press, 1973). For political aspects, see, e.g., Alfred J. Bollet, “Politics and Pellagra: The Epidemic of Pellagra in the US in the Early Twentieth Century,” Yale Journal of Biology and Medicine 65, no. 3 (1992): 211–21.

106

Quantitative data for Italy can be found in Renato Mariani-Costantini and Aldo Mariani-Costantini, “An Outline of the History of Pellagra in Italy,” Journal of Anthropological Sciences 85 (2007): 163–71; Monica Ginnaio and Amy Jacobs, “Pellagra in Late Nineteenth Century Italy: Effects of a Deficiency Disease,” Population (French edition) 66, no. 3 (2011): 583–609.

107

France was ahead of other Southern European countries in the fight against pellagra, as a result of early studies by Théophile Roussel (1816–1903) demonstrating the combined role of diet and poverty, and his subsequent advocacy for dietary change and social reform (see Roe, Plague of Corn, Chapter 6). For analyses of the decline of pellagra, see Youngmee K. Park et al., “Effectiveness of Food Fortification in the United States: The Case of Pellagra,” American Journal of Public Health 90, no. 5 (2000): 727–38; Mariani-Costantini and Mariani-Costantini, “An Outline of the History of Pellagra in Italy.”

108

For a general history of rickets, see, e.g., R. Ted Steinbock, “Rickets and Osteomalacia,” in Cambridge World History of Human Disease, ed. Kenneth F. Kiple (Cambridge etc.: Cambridge University Press, 1993); A. White Franklin, “Rickets,” in The History and Conquest of Common Diseases, ed. Walter R. Bett (Norman: University of Oklahoma Press, 1954). The first description is by Whistler in 1645, in a thesis entitled (translated from Latin) On the disease of English children which is commonly called the rickets.

109

Mortality from rickets in the London Bills of Mortality can be found in Mercer, Disease, App. 2a. Children’s hospital statistics on rickets were compiled by Hirsch, Handbook, vol. iii, p. 735.

110

For the history of cod-liver oil, see White Franklin, “Rickets.” The story of the Vienna study has been retold in Harriette Chick, “Study of Rickets in Vienna 1919–1922,” Medical History 20, no. 1 (1976): 41–51. For the current understanding of rickets, see Thomas O. Carpenter et al., “Rickets,” Nature Reviews Disease Primers 3 (2017): 17101.

111

Good introductions to the history of goitre and cretinism can be found in Henschen, History, pp. 183–95, and in Clark T. Sawin, “Goiter,” in Cambridge World History of Human Disease, ed. Kenneth F. Kiple (Cambridge etc.: Cambridge University Press, 1993).

112

Figures cited from Henschen, History, pp. 186–89.

113

Quantitative data on long-term trends are again hard to find. Although goitre was not a common cause of death, trends in mortality can be followed over time in several European countries since the 1920s or 1930s. These data show that in the 1940s to 1950s, rates were still relatively high in the Alpine countries Austria and Switzerland, as well as in Hungary. Rates declined rapidly everywhere during the 20th century (data from Alderson, International Mortality Statistics, table 68).

114

For the recent history of iodine deficiency in Europe, see Liselotte Schaefer Elinder and Caroline Bollars, “Food and Nutrition,” in Successes and Failures of Health Policy in Europe, ed. Johan P. Mackenbach and Martin McKee (Maidenhead: Open University Press, 2013). Epidemiological data can be found in Michael B. Zimmermann and M. Andersson, “Prevalence of Iodine Deficiency in Europe in 2010” (paper presented at the Annales d’Endocrinologie, 2011). For a review of recent (lack of) progress in tackling the problem, see John H. Lazarus, “Iodine Status in Europe in 2014,” European Thyroid Journal 3, no. 1 (2014): 3–6.

115

The birth-cohort effect in peptic ulcer mortality was discovered by American epidemiologists Mervyn Susser and Zena Stein, and used to argue that peptic ulcer was not simply a ‘disease of civilization’; see Mervyn Susser and Zena Stein, “Civilisation and Peptic Ulcer,” Lancet 279, no. 7221 (1962): 116–19. The existence of strong cohort effects was confirmed for six European countries (Amnon Sonnenberg, “Time Trends of Ulcer Mortality in Europe,” Gastroenterology 132, no. 7 (2007): 2320–27), but in more recent mortality declines cohort effects no longer play a role (Carlo La Vecchia et al., “The Impact of Therapeutic Improvements in Reducing Peptic Ulcer Mortality in Europe,” International Journal of Epidemiology 22, no. 1 (1993): 96–106).

116

Amnon Sonnenberg, “Causes Underlying the Birth-Cohort Phenomenon of Peptic Ulcer,” International Journal of Epidemiology 35, no. 4 (2006): 1090–97.

117

La Vecchia et al., “Impact.”

118

The ‘dietary fibre’ hypothesis was proposed in Trowell and Burkitt, Western Diseases.

119

The ‘hygiene’ hypothesis was proposed in David J. Barker, “Acute Appendicitis and Dietary Fibre: An Alternative Hypothesis,” British Medical Journal 290, no. 6475 (1985): 1125–27. It also covers several other diseases including poliomyelitis, see Barker, “Rise.” Poliomyelitis was originally a mild disease of very young children, without paralysis; as sanitation improved, the average age of infection rose and the disease became more serious. Poliomyelitis caused major epidemics in 1916 and in the late 1940s and early 1950s; see Hays, Epidemics and Pandemics, p. 377 ff. and 411 ff.

120

Giuliano Franco, “Ramazzini and Workers’ Health,” Lancet 354, no. 9181 (1999): 858–61.

121

For a history of occupational diseases from Roman times to the 1940s, with some comparative data for several European countries, see Ludwig Teleky, History of Factory and Mine Hygiene (New York: Columbia University Press, 1948).

122

The tenth edition appeared in 2010; see Peter J. Baxter et al., Hunter’s Diseases of Occupations [Tenth Edition] (London: Arnold, 2010).

123

The World Health Organization, Eurostat and the International Labour Organization all collect information on aspects of occupational health, but between-country comparability of these data is usually low, and time-series are short.

124

Andrew Meiklejohn, “History of Lung Diseases of Coal Miners in Great Britain: Part i, 1800–1875,” Occupational and Environmental Medicine 8, no. 3 (1951): 127–37.

125

The quote is from the Swiss hygienist Vogt, reproduced in Teleky, History, p. 199. For the recognition of pneumoconioses as separate disease entities, see David Rosner and Gerald Markowitz, “Consumption, Silicosis, and the Social Construction of Industrial Disease,” Yale Journal of Biology and Medicine 64, no. 5 (1991): 481–98; Gerald Markowitz and David Rosner, “The Illusion of Medical Certainty: Silicosis and the Politics of Industrial Disability, 1930–1960,” Milbank Quarterly 67, no. Suppl. 2 (1989): 228–53.

126

Andrew Meiklejohn, “History of Lung Diseases of Coal Miners in Great Britain. Part iii, 1920–1952,” Occupational and Environmental Medicine 9, no. 3 (1952): 208–20. In addition to lung disease, miners were at risk of many other potentially fatal problems, such as explosions of methane gas. These had to be prevented by ventilation of mine shafts and by the use of safety lamps, which reduced explosion-related deaths among miners in Great Britain and Germany in the first decades of the 20th century; see Teleky, History, pp. 238–52.

127

Doll and Peto estimated the contribution of occupational exposures to the occurrence of cancer in the US in the 1970s to be 4% – much lower than earlier reports had claimed; see Richard Doll and Richard Peto, “The Causes of Cancer: Quantitative Estimates of Avoidable Risks of Cancer in the United States Today,” jnci: Journal of the National Cancer Institute 66, no. 6 (1981): 1191–308. This estimate was later revised even further downwards, to 1% among non-smokers, to reflect reductions in exposure since the 1970s; see Julian Peto, “Cancer Epidemiology in the Last Century and the Next Decade,” Nature 411, no. 6835 (2001): 390–95. Yet, these risks are concentrated in a small fraction of the population; see Paolo Boffetta, “Epidemiology of Environmental and Occupational Cancer,” Oncogene 23, no. 38 (2004): 6392–403.

128

For asbestos use in Europe, see Takashi Kameda et al., “Asbestos: Use, Bans and Disease Burden in Europe,” Bulletin of the World Health Organization 92 (2014): 790–97. A shift in marketing from high-income countries to the former Soviet Union was signalled in Laurie Kazan-Allen, “Asbestos and Mesothelioma: Worldwide Trends,” Lung Cancer 49 (2005): S3–S8. In the 1980s, the Soviet Union had the highest asbestos use in Europe; see Fabio Montanaro et al., “Pleural Mesothelioma Incidence in Europe,” Cancer Causes & Control 14, no. 8 (2003): 791–803.

129

Mesothelioma mortality started to rise in the 1970s in England and Wales; see Julian Peto et al., “Continuing Increase in Mesothelioma Mortality in Britain,” Lancet 345, no. 8949 (1995): 535–39. For Sweden, see Bengt Järvholm and Alex Burdorf, “Emerging Evidence That the Ban on Asbestos Use Is Reducing the Occurrence of Pleural Mesothelioma in Sweden,” Scandinavian Journal of Public Health 43, no. 8 (2015): 875–81. For the Netherlands, see O. Segura, Alex Burdorf, and Caspar Looman, “Update of Predictions of Mortality from Pleural Mesothelioma in the Netherlands,” Occupational and Environmental Medicine 60, no. 1 (2003): 50–55.

130

Countries in Central-eastern and Eastern Europe mainly used chrysotile, which is less carcinogenic than other types of asbestos; see Montanaro et al., “Pleural Mesothelioma.”

131

For the effect of the early ban on mesothelioma in Sweden, see Järvholm and Burdorf, “Emerging Evidence.” For comparison between Sweden and the Netherlands, see Alex Burdorf, Bengt Järvholm, and Anders Englund, “Explaining Differences in Incidence Rates of Pleural Mesothelioma between Sweden and the Netherlands,” International Journal of Cancer 113, no. 2 (2005): 298–301.

132

On occupational musculoskeletal disorders, see, e.g., Bruno R. Da Costa and Edgar Ramos Vieira, “Risk Factors for Work-Related Musculoskeletal Disorders,” American Journal of Industrial Medicine 53, no. 3 (2010): 285–323. For a review of the health effects of job demands and job control, see, e.g., Wilmar B. Schaufeli and Toon W. Taris, “A Critical Review of the Job Demands-Resources Model,” in Bridging Occupational, Organizational and Public Health, ed. Georg F. Bauer and Oliver Hämmig (Dordrecht: Springer, 2014).

133

See, e.g., Maaike van der Noordt et al., “Health Effects of Employment: A Systematic Review of Prospective Studies,” Occupational and Environmental Medicine 71, no. 10 (2014): 730–36.

134

John R. McNeill, Something New under the Sun (London: Allen Lane, 2000).

135

The term ‘air pollution’ here refers to outdoor or ‘ambient’ air pollution, and not to indoor air pollution which is still an important cause of health problems in developing countries. For quantitative estimates of the burden of disease caused by outdoor air pollution, see gbd 2017 Collaborators, “Global, Regional, and National Comparative Risk Assessment.”

136

For reviews of the health effects of various air pollutants, see, e.g., Bert Brunekreef and Stephen T. Holgate, “Air Pollution and Health,” Lancet 360, no. 9341 (2002): 1233–42; World Health Organization, Health Aspects of Air Pollution (Copenhagen: who Regional Office for Europe, 2004). The evidence was recently updated in World Health Organization, Review of Evidence on Health Aspects of Air Pollution (Copenhagen: who Regional Office for Europe, 2013).

137

For long-term reductions in sulphur dioxide emissions in Europe, see Vigdis Vestreng et al., “Twenty-Five Years of Continuous Sulphur Dioxide Emission Reduction in Europe,” Atmospheric Chemistry and Physics 7, no. 13 (2007): 3663–81.

138

For long-term reductions in nitrogen oxide emissions in Europe, see Vigdis Vestreng et al., “Evolution of NOx Emissions in Europe with Focus on Road Transport Control Measures,” Atmospheric Chemistry and Physics 9, no. 4 (2009): 1503–20. General overviews of historical trends in emissions can be found in Rachel M. Hoesly et al., “Historical (1750–2014) Anthropogenic Emissions of Reactive Gases and Aerosols,” Geoscientific Model Development 11 (2018): 369–408; J.-F. Lamarque et al., “Historical (1850–2000) Gridded Anthropogenic and Biomass Burning Emissions of Reactive Gases and Aerosols,” Atmospheric Chemistry and Physics 10, no. 15 (2010): 7017–39.
