I.1 Distributing

In: The Distributed Image
Author: Simon Rothöhler
Translators: Daniel Hendrickson and Textworks Translations

Logistics deals with questions of distribution. How do we move something from A to B? How are transportation and the route organized? How do we ensure that the delivery reaches its destination undamaged, in an organized and timely fashion? In order for something to appear ›everywhere‹ and ›in real time,‹ it must be accordingly distributable. We can therefore assume: the much-touted ubiquity of the digital image is an effect of its distributive versatility. The mobility of contemporary image traffic is realized in accordance with infrastructures and protocols of digital media technology that control the appearance of images algorithmically. This brings me to my initial questions: What are the calculations of distribution? How are images formatted as transport goods, and what can be said about the nature of the transmission channels? What interwoven processes regulate the logistics of digital image traffic?

The question of distribution stands at the beginning of this study because we are dealing with an image that is distributed in a new way—a computable image, mobilized in multiple ways so as to be constantly moved onward. The empirical scope, intensity, and complexity of this distribution generate a raster-graphic-based stream of images that supplies a wide variety of media-ecological spheres with bitmap material, and loading displays, interfaces, and environments with visual culture. Each individual stream-particle is regulated and can be logistically addressed and described by network protocols and codecs. The media agencies that enable particular digital data sets to temporarily take on the iconic manifestation of an ›image‹ within the flow of algorithmic performances are also, first of all: distributed.

From the perspective of basic network protocols this is a matter of the minutely structured, highly computed transfer of datagrams, which are transmitted according to the distributional laws of packet switching. In order for a message to be distributed via computer networks, it must first be divided into basic transfer units and then put back together again at the destination. Transport routes, like transport goods, are distributed—the former over network nodes, the latter in data packets. Their dissemination generates an ostensibly uninterrupted stream of data, which is impossible to capture with common concepts of immateriality and ephemerality, or with the topoi of any sort of floating and traceless docking. A media format that translates data transmission into image circulation—and is responsible for a rapidly growing portion of the volume of global data traffic—expressly identifies this particular mode of transmission with a technical term that can be discussed as »time-critical«54: real-time streaming.
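
To make the packet-switched mode of transport concrete, here is a minimal Python sketch—names, the packet size, and the toy message are illustrative assumptions, not any particular protocol implementation—of how a message is divided into numbered transfer units and recomposed at its destination regardless of the order in which the packets arrive:

```python
from dataclasses import dataclass
import random

@dataclass
class Datagram:
    seq: int        # sequence number, so the receiver can restore the original order
    payload: bytes  # a small slice of the original message

def packetize(message: bytes, mtu: int = 8) -> list[Datagram]:
    """Divide a message into numbered transfer units of at most `mtu` bytes."""
    return [Datagram(seq=i, payload=message[pos:pos + mtu])
            for i, pos in enumerate(range(0, len(message), mtu))]

def reassemble(packets: list[Datagram]) -> bytes:
    """Recompose the message at the destination, whatever order the packets arrived in."""
    return b"".join(p.payload for p in sorted(packets, key=lambda p: p.seq))

packets = packetize(b"a message divided over network nodes")
random.shuffle(packets)   # different routes, different arrival order
assert reassemble(packets) == b"a message divided over network nodes"
```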

Streaming first of all denotes a general method of data distribution. Although transmission and playback processes do not completely coincide in this method, they are very closely interlinked. The time-related data material is processed and visualized while further components of the whole packet (which is only conceived as such in retrospect) continue to be received. From the viewpoint of the receiver, streaming has no immediately evident functionality in terms of storing data in memory beyond certain processes of caching. Within the conditions of computer networks, the respective operation occurs through a technically mediated simultaneity, which can be distinguished from both classical and so-called progressive downloading of a file primarily in that no complete copy of the data set exists at any point on the side of the client. It streams out and flows away again: a dual movement—nowadays usually in an adaptive bitrate format55—which is moderated through the transport channel and has only been able to be processed by the computing power and cache capacity of generally available end devices since the mid-1990s. Streaming thus avoids more extensive memory requirements of client usage through a process of transmission that not only continuously requests, distributes, and receives media server data, but also continuously discards it. A form of ›liveness‹ is therefore present even when the content—typically audiovisual, for my purposes—is not a live feed, but a comparatively consolidated data set (such as video on demand). But in a certain sense this, too, is not so much securely stored as permanently waiting: on server requests that are meant to be responded to instantaneously and just as quickly forgotten again. The logistical architecture thus addresses a dynamic standby position, which is conceived as fundamentally ready to distribute and whose primary ›liveness‹ is the real time of a transmission with adaptively switchable replay automatisms with no download. Already at this point it becomes clear that the »third, neglected function of media,«56 the processing of »relevant data,«57 to use the pertinent phrase provided by Friedrich Kittler, is so profoundly embedded in the transport channel in this process that transmission and processing overlap to a certain degree in essential respects—insofar as this includes, for instance, the significance of the compression algorithms involved and the now standard continuous calculation of connection quality. ›Real‹ in the real-time protocol family,58 which was originally exclusively responsible for streaming media but has now partly been superseded, is to this day a procedure of realization that guides and controls how the arrival of packets is temporally organized. With audiovisual signals, especially during a live stream (such as videotelephony, for instance), the client’s processing services have to be carried out with no time lag and in the right sequence. The real time to which streaming protocols are sensitized, as opposed to other network protocols, refers to this data-grammatical sequencing. Because successfully transported server data, in the case of such multimedia streams, are supposed to be processed instantaneously, traditionally by client plug-ins, there is in general very little tolerance for latency. 
Without data distribution structured in real time—that is, without efficient compensation for delayed transmission or for retransmissions due to defects or losses of packets along the way—the conditions for coherence codified in the time-based media of a live stream cannot be sufficiently stabilized at the level of transmission technology (with on-demand content there is more leeway for buffering in cache storage). The critical point is that ›real time‹ indicates the relative imperceptibility of the transmission process, of transmission and computation time; it is therefore »pragmatically synchronous«59 and represents a relational category.60 Perceptual aspects are significant above all insofar as ›real time‹ manifests not least as reaction time, which, technically speaking, still does not make it real time.61
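
The adaptive switching mentioned above (and described in note 55) can be illustrated schematically: the client continuously measures available throughput and requests the highest encoded rendition that still fits, so that playback degrades rather than stalls. The bitrate ladder and function names below are invented for the example and do not reproduce any specific player's logic:

```python
# Hypothetical bitrate ladder (kbit/s) that a server might offer for one video.
BITRATE_LADDER_KBPS = [235, 560, 1050, 3000, 5800]

def select_bitrate(measured_throughput_kbps: float, safety_factor: float = 0.8) -> int:
    """Pick the highest rendition that fits the currently measured connection quality.

    The safety factor leaves headroom so that small fluctuations do not
    immediately force a rebuffering pause.
    """
    budget = measured_throughput_kbps * safety_factor
    viable = [rate for rate in BITRATE_LADDER_KBPS if rate <= budget]
    return max(viable) if viable else min(BITRATE_LADDER_KBPS)

# The connection is re-measured continuously; the stream switches renditions mid-playback.
for throughput in (4200.0, 900.0, 6500.0):
    print(f"{throughput} kbit/s available -> request {select_bitrate(throughput)} kbit/s rendition")
```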

Sean Cubitt has described and theorized what the implied transmission economy means for digital images in terms of the difference between »recorded and real-time media.« The latter is characterized less by the possibility of a liveness that can be understood referentially (now/elsewhere) than by the fact that the transmission process analyzes, disassembles, and finally rewrites step-by-step perceptual data that are themselves, like audio and video signals, composed temporally. Viewed in terms of media archaeology, this media-technical temporalization of data streams that »form an environment of audiovisual signals«62 is nothing new. »Dot-scanned wire photography and broadcast television moved toward live pictures […]. The scanning principle […] does not provide a whole, complete image at any one moment: each image contains the principle of change in itself.«63 This principle of discretization,64 beginning at the elemental level of the image, is therefore not disrupted in the computer-network-based forms of transmission, but rather continued and deepened:

In digital images, where pixels act in the same way as film frames but much smaller and in much swifter succession, the frame itself is a temporal phenomenon. […] [T]his efflorescence of image particles and microtemporalities within the frame risks shattering the unity and discretion of the image. Coherence has to be constructed after the event by applying massively reductive processes to the frame. Coherence is achieved then by nominating as redundant the mass of detail, color, and nuance that human observation is capable of. Instead, the task of observing is first modeled on a good-enough solution for an imaginary statistical norm of perception and then the process of selecting what to observe is automated. […] [B]ecause each frame and each pixel or group thereof must be generated on the fly from stored datasets interpreted by software protocols, there is an active coming into existence at the level of the pixel.65

In this sense we are dealing less than ever with stable, dependably addressable, losslessly reproducible image objects, but rather always only with distributed image processes. Their ›real time‹ is on one hand measured according to particular mathematical modelings of perceptual faculties—so-called perceptual coding66—but at the same time is offset with available infrastructures of transmission. It must first be noted that a fundamental assumption of operative aggregate states of the image is required here, of the image as process—or as »impulse sequence«67—in which is inscribed a media-technical blueprint, realized by means of algorithmic performances, which is adaptively aligned to the parameters of the signal transmission and is therefore primarily a transport plan: »[T]he image has to be considered as a kind of program, a process expressed as, with, in or through software. […] [C]onsidered computationally or algorithmically the image is […] operating in a constant state of deferral.«68 In order for this deferring sense data unit ›image‹ to be created at all as an epistemic object, a certain heuristic immobilization of operative distribution processes is unavoidable in terms of terminology alone. At the same time, it is the case that the reference to microtemporalities, effects of adaptation and coherence necessitates incorporating relational descriptive figures, or at least marking the relevant limits. Digital images fundamentally draw their distributed genealogy and phenomenality from a »channel that calculates with time.«69

What terminology is to be employed, how various terms relate to one another, and where systematic distinctions are useful, is a highly contentious topic of debate in media theory—primarily in critical studies of media infrastructures, in platform and software studies.70 It is, however, widely agreed that a key term in this context is that of compression. Even at the level of agency, its modes of operation refer unambiguously to a distribution in which data centers appear as constitutively connected with network protocols, fiber optic cable routes with codec politics, infrastructures with interfaces, back-end with front-end technologies, broadband capacities with practices of use. Countless (human and non-human) actors are therefore involved in these media-technical distribution processes, in the »distributed character of culture in our age«71—actors that form the socio-technical ensembles by which persons, things, and signs are connected with one another and react to each other in a variety of constellations.

How the interdependencies that arise in this context can be understood conceptually is also dependent on epistemological interests. Cubitt, for instance, ultimately sees the digital image as a normalization tool within a comprehensive governmental regime of algorithmic temporal control, which therefore becomes hegemonic72 above all through the predictive mechanisms of standardized MPEG codecs—or to be more precise: through complex algorithmic techniques like motion compensation.73 His critique of compression algorithms—which at their core reduce transmission information by predicting this information (or its redundancy components) through calculation, interpolating and extrapolating it—seems to be at least partly grounded in remnants of a normative image aesthetics, insofar as one can only speak of »reducing quality« and of a »good-enough image«74 against the backdrop of somehow better (or more beautiful) forms of visual existence.
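
The predictive principle at issue here can be shown in drastically simplified form: instead of transmitting every frame in full, an encoder predicts the next frame from the previous one and sends only the residual. Actual MPEG codecs additionally estimate motion vectors per block and quantize the result; this sketch (a hypothetical illustration, not an implementation of any standard) omits all of that:

```python
import numpy as np

def encode_residual(previous: np.ndarray, current: np.ndarray) -> np.ndarray:
    """Send only what the prediction 'the next frame resembles the last one' gets wrong."""
    return current.astype(np.int16) - previous.astype(np.int16)

def decode_frame(previous: np.ndarray, residual: np.ndarray) -> np.ndarray:
    """Reconstruct the frame at the receiver from the prediction plus the residual."""
    return (previous.astype(np.int16) + residual).astype(np.uint8)

frame_a = np.zeros((4, 4), dtype=np.uint8)   # a tiny 4x4 'frame'
frame_b = frame_a.copy()
frame_b[1, 1] = 7                            # only a single pixel changes

residual = encode_residual(frame_a, frame_b)
assert np.count_nonzero(residual) == 1       # nearly everything is calculated as redundant
assert np.array_equal(decode_frame(frame_a, residual), frame_b)
```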

A theoretical model linking data and the transmission channel more closely to one another comes from Jonathan Sterne. He also assumes a central significance for processes of algorithmic compression, but he bases this on a broader understanding of the dimension of transport that is productive in a different way. For Sterne compression is not an inferior process of reduced information quality, but a cultural technique encountered in various forms in all phases of media history that fundamentally allows for signals to be perceived across spatial distances because it relates communications to infrastructures differentially, enacting a situational filtering or screening of redundant information and thus creating the functional context of successful information exchange as such in the first place:

[T]his means that media are not like suitcases; and images, sounds, and moving pictures are not like clothes. They have no existence apart from their containers and from their movements—or the possibility thereof. Compression makes infrastructures more valuable, capable of carrying or holding materials they otherwise would or could not, even as compression also transforms those materials to make them available to the infrastructure.75

The economization of signals—the codification of what is to be calculated at a particular point in the infrastructural capacity on the one hand as transmittable, on the other as lossy—serves to economically rationalize the channels against the backdrop of finite resources of transmission in information technologies, thus not least creating capacities for additional communications and increasing general mobility in the system. Because they would not be sent out on this journey in the first place as uncompressed or lossless (since they would be too bulky, too slow, too difficult to receive), the question of forms of packaging with different dimensions is secondary for Sterne, and consequently is replaced by a recursive model: »[C]ompression is the process that renders a mode of representation adequate to its infrastructures. But compression also renders the infrastructures adequate to representation.«76 Compression therefore facilitates additional channel configurations through alternative signals, whose circulation in turn feeds back into infrastructural conditions with concrete evolutionary dynamics, with structures, standards, and formats that would not have existed in the same way without the historically specific realities of compression in each case—that is, also: without certain data remaining undistributed because they are calculated as redundant.77

From this perspective it becomes immediately clear that the distributed image does not arise from or run toward an immaterial ›flow‹ (especially under real-time conditions), but is modulated as a stream over a whole series of connected media operations and structures—no matter how immediate its phenomenal presence on end user devices may appear to be, no matter how effortlessly it can be managed, sent, and manipulated on social media. Ubiquity as well as fungibility therefore arise from modes of distribution that are highly versatile not only on the level of perception accessible to empirical users, but also with regard to their switchability through media technology. Distribution as outlined here is not something that happens to this image after the fact, as if from the outside, since it only exists as a transport commodity in motion, as a data packet that is structured accordingly and processes as well as communicates back structural givens; therefore, the variably scalable question of concrete container formats, codecs, and protocols, for instance, inevitably also implies the question of less visible levels of network architecture.

Where, when, and in what temporality and concentration images materialize in digital environments is more and more often fundamentally related to so-called content delivery networks (CDN), a central infrastructure component of contemporary data circulation that operates in the background even more inconspicuously than, for instance, data centers, whose energy footprints have meanwhile at least become the subject of public debates.78 To put it simply, CDNs are proprietary distribution networks that use a request routing system to redirect user requests, on the basis of identified and geographically allocated IP addresses, to locally distributed replica servers in order to significantly optimize performance, especially for data-intensive and time-sensitive content with multimedia characteristics. Any market participant who distributes their streaming portfolio over market-dominating providers such as Akamai or Amazon Web Services (AWS) brings the media server spatially closer to the edge of the network, to the client—thus reducing problems with latency times and volatile transmission rates.
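
A minimal sketch of the request-routing idea: a geolocated client request is answered by the nearest replica server, so that content travels a short final leg instead of crossing the backbone. Server names and coordinates are invented, and plain geometric distance stands in for the far more elaborate criteria of real request-routing systems:

```python
import math

# Invented edge locations of a CDN: name -> (latitude, longitude).
REPLICA_SERVERS = {
    "edge-frankfurt": (50.11, 8.68),
    "edge-ashburn": (39.04, -77.49),
    "edge-singapore": (1.35, 103.82),
}

def nearest_replica(client_lat: float, client_lon: float) -> str:
    """Route a request to the replica server closest to the geolocated client address.

    Real systems also weigh current server load, link costs, and cache contents;
    simple distance stands in for all of that here.
    """
    def distance(name: str) -> float:
        lat, lon = REPLICA_SERVERS[name]
        return math.hypot(lat - client_lat, lon - client_lon)
    return min(REPLICA_SERVERS, key=distance)

print(nearest_replica(52.52, 13.40))   # a client in Berlin -> "edge-frankfurt"
```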

The economic and strategic significance of CDNs is evident, for instance, in the fact that the streaming service Netflix no longer maintains any data centers of its own (and had no problem transferring its entire database—»everything that happens before you hit ›play‹«79—into the cloud of the AWS ecosystem80 despite competing with Amazon Video), while at the same time insisting on distributing the 125 million hours of video playback per day currently being requested by customers exclusively via Open Connect, its in-house CDN. The provider’s complex strategy of content delivery differs in its specifics from region to region, but is fundamentally based on the principle of local Open Connect boxes (around a thousand worldwide at present), which are loaded during off-peak times with copies of the content most likely to be requested regionally and which otherwise serve to minimize intercontinental traffic.81

Data material from providers that forgo the service of CDNs does continue to circulate, but in a somewhat second-rate, comparatively slow orbit. CDNs produce and market distribution hierarchies. They differentiate the stream of images according to ranges of circulation and graded signal transit times. Regardless of any final decisions about net neutrality, there have long been reservable fast tracks and express delivery options at this level, a differentiation of data circuits that manifests as variable mobility and stability in content delivery. Thus there are various flows, depending on which infrastructures operationalize them.82 The logistical resources invested are therefore translated directly into the calculations of today’s economy of visual culture. Whatever can be distributed more quickly optimizes accessibility and accumulates visibility. What is clearly less transparent are the structural requirements of this redistribution of attention: »Unlike a public standard built into the protocols of the Internet, Akamai is a proprietary system that acts as an overlay, an invisible network concealed inside the network.«83

In this context Christian Sandvig has shown how the massive demand for audiovisual content has led to a reorganization of a network architecture that was not initially oriented toward such a purpose. Compression technologies that aim first to reduce data traffic and second to direct it only under constant observation of transmission capacity—that is, adaptively—form a hybrid streaming system on the basis of CDNs, which in a certain respect aligns the Internet as a distribution structure with that medium whose format history is already being constantly re-mediatized on the content surface: television. Because mass media content such as the series format, initially shaped by television, has been successfully transferred to the Internet and remains in demand;84 because, in addition, many consumers still want to call up popular content at more or less the same time; and because, contrary to the empowerment utopia of ›prosumers‹ ostensibly engaged in constant broadcasting, the asymmetry between downstream and upstream has remained in place—it became problematic that the network architecture, in contrast to radio broadcasting, generates additional transmission costs for each additional consumer. Since multicast systems do increase efficiency but have thus far been unable to satisfy the obvious demand for broadcasting content, that is, for distributing information on a mass scale, a development arose that Sandvig quite convincingly characterizes as media-historical »retrofitting«:

As the Internet evolved, a remaining technical challenge was adapting its point-to-point architecture to the one-to-many asymmetries of audiences and attention. […] Recent empirical studies of Internet traffic [revealed] that the network has reached an inflection point, where the Internet is now, for the first time, centrally organized around serving video. And this does not refer to video as a mode of communication in general, but specifically to serving a particular kind of video from a very small number of providers to large numbers of consumers. […] During peak video watching times, two providers (Netflix and YouTube) account for more than half of all Internet traffic in North America.85
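
The asymmetry described here—point-to-point delivery confronted with one-to-many audiences—can be stated in back-of-the-envelope form (all figures hypothetical): in a unicast architecture, aggregate traffic scales with the number of simultaneous viewers, whereas a broadcast channel’s cost does not.

```python
def unicast_traffic_mbps(stream_mbps: float, viewers: int) -> float:
    """Point-to-point delivery: one copy of the stream per simultaneous viewer."""
    return stream_mbps * viewers

def broadcast_traffic_mbps(stream_mbps: float, viewers: int) -> float:
    """Broadcast (or ideal multicast) delivery: one transmission reaches everyone."""
    return stream_mbps  # audience size does not enter the calculation

# A hypothetical 5 Mbit/s stream watched by one million viewers at the same time:
print(unicast_traffic_mbps(5.0, 1_000_000))    # 5,000,000 Mbit/s of aggregate traffic
print(broadcast_traffic_mbps(5.0, 1_000_000))  # 5 Mbit/s, independent of audience size
```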

Internet infrastructure analysts such as Joon Ian Wong therefore maintain the thesis that the unbridled growth in video stream volume—the distribution of video data accounted for around 70% of all Internet traffic in 2018 (Cisco assumes that this will increase to 82% in 202186)—is the catalyst and the driving force behind an extensive restructuring of Internet architecture, which is becoming more and more »flat,« privatized, centralized, since corporations such as Alphabet/Google (especially after the company acquired the video platform YouTube in 2006), Facebook, and Netflix began investing heavily in CDN infrastructures.87

The general tendency toward centralization—the global dominance of a small, fundamentally oligopolistic group of vertically integrated media corporations88—is not only reflected at almost all infrastructural levels and in the prehistories of communications economics,89 but can also be seen in the implementation of a non-proprietary standard such as JPEG (Joint Photographic Experts Group). As an image compression norm that profoundly reorganizes the raw data of digital image acquisition (stored for instance as RAW files or so-called digital negatives), in part through processes of color space conversion and quantization, this format effectively represents a kind of gatekeeper, codifying what is at all communicable, conveyable, distributable in an image. Paul Caplan succinctly describes how a standard asserts itself in quite ordinary practices of social media image distribution:

In the dialog box that opened, I could see the files on my computer. JPEG-encoded and PNG-encoded files were visible. Their names were black. I could select them, add them to the waiting list and upload them to my account/Timeline/profile, tag them and make them part of the government of (my)self on the Open Graph. The RAW-encoded and WebP-encoded objects however are ›greyed out‹—a symbolic lesser status. They fade into the background. Inaccessible. Unvisible. They are locked out, unavailable for networking, tagging, recognising, data-mining, integrating into and exploiting (or being exploited by) the power of the Open Graph. My imaging was about encoding and then sharing and connecting light-as-data through standards. When I built that apparatus with JPEG, it worked fine. Light became social data. When I didn’t … it didn’t. Light became unsocial data.90

Standards like JPEG stipulate how data must be packaged and aggregated in order to be distributable. They generate connectivity as well as controllability, defining, down to infra-imaging parameters, which configurations of visual culture can be formulated: »The code works to reorganize relations within and between images.«91 The exclusion effects typical of such deeply intervening distinguishing authorities (legible/illegible, distributable/indistributable) identify a further arena in which negotiations—framed »expertocratically« in the case of JPEG and MPEG92—are held concerning the format conditions under which an (audio)visual signal is taken up by the transport channels, how an image becomes an element of the general bitrate stream, what relationships between images are possible, and how the constellations that arise relate to economic calculations. This once again reveals the profusion of preconditions inherent in the processual character of contemporary imaging, which is not limited to an immaterial softwarization of the entirety of cultural material,93 but rather calls up infrastructural input and distributes the configuration of actors involved at every operative step.
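
What such a standard’s gatekeeping looks like at the level of calculation can be indicated with a toy version of JPEG’s quantization step—the lossy operation in which coefficients judged perceptually dispensable are rounded away. The sample coefficients below are invented, and the discrete cosine transform and entropy coding that surround this step in the actual standard are omitted:

```python
import numpy as np

# Upper-left 4x4 corner of the JPEG luminance quantization table (ITU-T T.81, Annex K):
# low-frequency coefficients are divided by small values, higher frequencies by larger ones.
QUANT = np.array([
    [16, 11, 10, 16],
    [12, 12, 14, 19],
    [14, 13, 16, 24],
    [14, 17, 22, 29],
], dtype=np.float64)

def quantize(dct_block: np.ndarray) -> np.ndarray:
    """The lossy step: divide transform coefficients by the table and round."""
    return np.round(dct_block / QUANT).astype(np.int32)

def dequantize(levels: np.ndarray) -> np.ndarray:
    """Rescale at the decoder; the information lost to rounding is gone for good."""
    return levels * QUANT

block = np.array([                 # invented DCT coefficients of one image block
    [260.0, -23.0, 8.0, 3.0],
    [-31.0, 14.0, 2.0, 1.0],
    [9.0, 3.0, 1.0, 0.5],
    [2.0, 1.0, 0.3, 0.1],
])
print(quantize(block))             # most high-frequency detail collapses to zero
print(dequantize(quantize(block))) # the reconstruction is only an approximation
```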

This simultaneously outlines a spectrum of digital materiality94 that, as Caplan suggests, illuminates a whole series of interdependencies—even beyond »forensically«95 examined processors, hard drives, and displays—which connect a standard like JPEG with factory work and a raw material like oil.96 Furthermore, while dominant discursive figures such as ›ubiquity,‹ ›real time,‹ ›wirelessness,‹ and ›cloud‹ tend to postulate a network-based distribution of ›pure information‹ as a communicative state of connectivity that has always existed and has by now become practically natural, more recent studies such as Tung-Hui Hu’s A Prehistory of the Cloud97 or Nicole Starosielski’s The Undersea Network point to the historical processes of forming the socio-technical substructure, which, in the latter case, can be traced back to the colonial history of telegraph cable stations. Against the dominant cultural imaginary that sees the circulation of immaterial signs as a directly established reality of digital distribution, what we see now are path dependencies and conflict histories, the intertwining of practices and technologies, but also more generally the costs of energy, work, and raw materials,98 which can in principle be converted proportionately to each individual data packet transmission—that is, to the scaling consumption rates and weights of datagrams that result from the transport itself.99

Recent growth spurts in distributed data volume correlate to a fundamentally observable temporal transformation of the Internet, in which streaming has in a certain sense been generalized as a form of transmission and experience beyond the question of standardized transport protocols and concrete modes of integrating audiovisual signals. More and more addresses no longer lead to static websites, but to data streams that are automatically updated in real time. Connections are established to be maintained for an unspecified time. Data packet transfer increasingly seems to be arranged as an open-ended series that can end, but does not have to from the point of view of the server. This gives rise to a permanent contemporaneity in data exchange, which refers neither to final data sets nor to concluded futures of transmission—which is why horizons of expectation arise in terms of media temporality, which David Berry, borrowing from Jacques Derrida, has called »messianic.«100

Viewed empirically, most streams begin and end at some point—but in information-technological terms they are designed to be continuous.101 Transmission thus always only relates to temporary states of data, to the present time of an ever-filling data pool, to filters that can be operated variably, generating and processing a corresponding aggregation. This development, whose beginning has repeatedly been linked to the advent of RSS feeds102 but then expanded further through push technologies, is now considered a fundamental principle of the general streaming quality of data traffic, especially due to the dominant architecture and addressing modes used by popular platforms with social media characteristics.103

The underlying model of real-time processing replaces one-time database requests with continuous queries,104 and is articulated on the surface primarily through responsive interfaces with the option of real-time interaction (or with a prompt for it: »What’s happening?« asks Twitter, »What’s on your mind?« asks Facebook). The continuous, dynamic implementation of data streams flowing in and out reaches the user in the form of a mediated ›nowness,‹ which may be programmed differently in terms of the media’s time-critical operational logic (how is the live feed synchronized, how is the newest content integrated into the timeline in each case, how are the interaction modes operated, etc.105), but in the final analysis mainly either generates synchronization effects or transmits them in a way that »habitualizes,« as Wendy Chun suggests.106
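
The contrast between a one-time request and a continuous query can be sketched schematically in Python (the ›feed‹ and its contents are invented; no platform API is being reproduced): the standing query keeps the connection open and pushes each new match into the stream as it arrives.

```python
import itertools
import time
from typing import Iterator

def one_time_request(database: list[str], keyword: str) -> list[str]:
    """Classic retrieval: ask once, receive a finite result set, close the connection."""
    return [item for item in database if keyword in item]

def continuous_query(feed: Iterator[str], keyword: str) -> Iterator[str]:
    """Standing query: the connection stays open, each new match is pushed into the stream."""
    for item in feed:
        if keyword in item:
            yield item

def incoming_posts() -> Iterator[str]:
    """Stand-in for a platform's write stream ('What's happening?')."""
    for i in itertools.count():
        yield f"post {i}: another streaming update"
        time.sleep(0.01)   # new data keeps arriving; the series has no defined end

print(one_time_request(["archived post about streaming", "unrelated entry"], "streaming"))

stream = continuous_query(incoming_posts(), keyword="streaming")
for post in itertools.islice(stream, 3):   # the client, not the server, decides when to stop
    print(post)
```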

Phenomena such as dynamically updating platforms, predictive search engine algorithms—which not only switch to autocomplete while search terms are being entered, but like Google Instant immediately start to produce queries and results—as well as the by now completely commonplace real-time integration and implementation of user-generated content, accordingly produce an ›environmental‹ model of data traffic, which seems to consist of continuously maintained data circuits, of data streams that swell and ebb according to filter and newsfeed flow timing: »The transfer of data beyond their contents becomes the permanent condition of our surroundings.«107 Users may have access to certain platform-specific options, ways of entering into and exiting out of the stream at the interface level, or its appearance during the interaction. Independent of this, however, streaming has become a regulatory principle of distributed data volume. It is no longer only of interest to privileged financial service providers and their real-time monitoring of market activity,108 nor does it first and foremost pertain to an efficient form of transmitting volume-intensive audiovisual signals. Instead, it characterizes our general relation to digital data traffic under the conditions of permanent connectivity and ambient computation.109

Image data, too, therefore no longer flow out of relatively static architectures in a shielded way, but are interwoven with other automatically updated stream data, which are subject to the specific agendas of datafication, of »real-time analytics.«110 Clearly, this distribution model does not generate stable, permanently fixable arrangements. All data here tend to be distributed data—data in which a temporal relation is inscribed, independent of the modes of their processing. The ›nowness‹ signals are put on a trajectory, made transportable to future states of nowness. The data stream, framed as both infrastructurally and materially complex, therefore allows only for snapshots, contingent cuts through fluid periods of data. On the outer surface, tentative status reports emerge that are perceptible for human actors and arise from flow samples, gathered for purposes of visualization, which are barely transmitted before being discarded again, attracted by new futures. Aggregation means: data are specifically accumulated on demand and organized expressly for the purpose of representation. ›Real time‹ in this regard means that the presumed nanoseconds of computation and transmission do not make a perceivable difference at the front end, or, if so, then only as a glitch111 or in the sense of subliminal update rhythms, which arise from the temporally finely nuanced user addressing of platforms and search engines and their »distinctive real-time cultures.«112 At the level of these microtemporal intervals the operationalized »chronologistics« is not implemented in real time, but reactively: as switching and computing time it is synchronized according to models of »timeliness.«113

The image data are in motion, even if they are not directly mobilized graphically. Stream phenomena are—regardless of the temporality of their perceptually coded playback, as in the case of video data—defined in terms of the time-critical processes of their computation and distribution. The decisive question is therefore not (any longer) what an image is, but when, where, and how image data can be enunciated as an image: »[T]here is an endless succession of temporary constellations of images held together by a certain correlation of metadata, distribution of pixels or Boolean query […]. There is a shift here away from content to the rhythm, circulation, and proliferation of the utterance.«114 What is distributed are the media-technical infrastructures and processes of enunciation, but also phenomenalities, contexts, coverages, connectivities, and efficacies of those transmitted datagram series that can be iconically implemented and perceptible as (still or moving) image. In the context of a variably configured, but constantly connected streaming traffic that processes and transfers information in a semantically neutral way,115 every form of distributed imagery is surrounded by data material—and is itself data material. The next chapter will examine in more detail the differences that various data dimensions make, and to what extent the bitmap stream is a repeatedly datafied traffic volume, before we turn to the question of how stream and memory relate to one another.

54

Cf. Axel Volmar (ed.), Zeitkritische Medien, Berlin: Kadmos, 2009.

55

»Adaptive bitrate streaming […] is a technique wherein a sender encodes a video at a variety of different quality levels. […] [S]oftware on the viewer’s computer senses the quality of the network connection and acts as a switch directing the server to send a lower-quality version of the requested content when the network is busy, conserving network capacity« (Christian Sandvig, »The Internet as Anti-Television: Distribution Infrastructure as Culture and Power,« in: Lisa Parks, Nicole Starosielski (eds.), Signal Traffic: Critical Studies of Media Infrastructures, Champaign: University of Illinois Press, 2015, 225–245, here: 233).

56

Hartmut Winkler, Prozessieren. Die dritte, vernachlässigte Medienfunktion, Paderborn: Fink, 2015.

57

Friedrich Kittler, Discourse Networks, 1800–1900, trans. Michael Metteer, Stanford: Stanford University Press, 1990, 369.

58

Strictly speaking there are three network protocols—Real Time Transport Protocol (RTP), Real Time Transport Control Protocol (RTCP) and Real Time Streaming Protocol (RTSP)—which as a rule are run via the User Datagram Protocol (UDP), because this provides a higher data throughput than the more reliable, but slower Transmission Control Protocol (TCP), which also reacts to packet loss with insistent timeouts/retries (cf. Martin Warnke, Theorien des Internet zur Einführung, Hamburg: Junius, 2011, 76ff). Thanks to expanded broadband capacities, TCP-based HTTP Live Streaming (HLS) is now widely used, which has the advantage, among other things, of ensuring greater transfer security with a more economically efficient operation, and also of integrating the aforementioned adaptive bitrate streaming, thus being able to react flexibly to fluctuations in bandwidth instead of abruptly interrupting the stream or constantly having to buffer. Apple’s HTTP Live Streaming, developed as an alternative to Flash Video, is the de facto standard today due to its efficient compatibility with many browsers and mobile devices (the alternatives to this are MPEG Dash as well as proprietary protocols such as Adobe’s HTTP Dynamic Streaming or Microsoft’s Smooth Streaming).

59

Winkler, Prozessieren, 198.

60

»[R]ealtime concerns the rate at which computational processing takes place in relation to the time of lived audio-visual experience. It entails the progressive elimination of any perceptible delay between the time of machine processing and the time of conscious perception« (Adrian Mackenzie, »The Mortality of the Virtual: Real-Time, Archive and Dead-Time in Information Networks,« Convergence: The International Journal of Research into New Media Technologies 3/2 (1997), 59–71, here: 60).

61

As Wendy Chun has persuasively argued: »In computer systems, ›real time‹ reacts to the live: their ›liveness‹ is their quick acknowledgment of and response to users’ actions. Computers are ›feedback machines,‹ based on control mechanisms that automate decision making. As the definition of ›real time‹ makes clear, ›real time‹ refers to the time of computer processing, not the user’s time. ›Real time‹ is never real time—it is deferred and mediated« (Wendy Hui Kyong Chun, Updating to Remain the Same: Habitual New Media, Cambridge, MA: MIT Press, 2016, 79).

62

Stefan Heidenreich, FlipFlop. Digitale Datenströme und die Kultur des 21. Jahrhunderts, Munich: Hanser Verlag, 2004, 58.

63

Sean Cubitt, The Practice of Light: A Genealogy of Visual Technologies from Prints to Pixels, Cambridge, MA: MIT Press, 2014, 235.

64

Cf. II.2 (Discrete Distribution).

65

Cubitt, The Practice of Light, 251f.

66

Cf. Jonathan Sterne, MP3: The Meaning of a Format, Durham, NC: Duke University Press, 2012, 32–60.

67

»What flows in data streams, and here this means: what takes place in time and requires time, is the feeding of signals. It consists of a sequence of changes in the electrical field strength or, in fiber optic cables, luminosity. Transmitting a message does not last a certain time because a distance has to be overcome, but because the message signal occurs as a sequence of impulses« (Heidenreich, FlipFlop, 27).

68

Daniel Rubinstein, Katrina Sluis, »The Digital Image in Photographic Culture: Algorithmic Photography and the Crisis of Representation,« in: Martin Lister (ed.), The Photographic Image in Digital Culture, London: Routledge, 2013, 22–37, here: 29 (italics in the original).

69

Wolfgang Ernst, »Medienwissen(schaft) zeitkritisch: Ein Programm aus der Sophienstraße,« inaugural lecture at Humboldt University, 21 Oct., 2003, https://edoc.hu-berlin.de/bitstream/handle/18452/2327/Ernst.pdf?sequence=1, here: 20.

70

For a complementary approach, which reacts above all to the circumstance that platform operators like Google have de facto taken over infrastructure functions (»infrastructuralized platforms«), aided by neoliberal agendas of deregulating and privatizing infrastructural sponsorship since the 1980s (»platformization of infrastructures«), see Paul N. Edwards, Carl Lagoze, Jean-Christophe Plantin, Christian Sandvig, »Infrastructure Studies meet Platform Studies in the Age of Google and Facebook,« New Media & Society (pre-publication version), Aug. 2016, https://doi.org/10.1177/1461444816661553.

71

Sterne, MP3, 1.

72

As the central position of the concept of contingency reveals, Cubitt draws on the following as an implicit blueprint: Mary Ann Doane, The Emergence of Cinematic Time: Modernity, Contingency, the Archive, Cambridge, MA: Harvard University Press, 2002.

73

Cf. Adrian Mackenzie, »Codecs,« in: Matthew Fuller (ed.), Software Studies: A Lexicon, Cambridge, MA: MIT Press, 2008, 48–54, here: 52f. More on this in chapter II.3 (Video Signal Histories).

74

Cubitt, Practice of Light, 256, 247. Cf. also Hito Steyerl, »In Defense of the Poor Image,« e-flux journal 10 (Nov. 2009), https://www.e-flux.com/journal/10/61362/in-defense-of-the-poor-image/.

75

Jonathan Sterne, »Compression. A Loose History,« in: Lisa Parks, Nicole Starosielski (eds.), Signal Traffic: Critical Studies of Media Infrastructures, Champaign: University of Illinois Press, 2015, 31–52, here: 36.

76

Ibid., 35.

77

»We might simply call the reality relational. For instance, a single set of standards like those set by MPEG, the Moving Picture Experts Group, facilitated the circulation of video and audio recording on the Internet, but it also facilitated the development of new technologies of storage and transmission, like the video compact disc, satellite radio, and the DVD. Once again, it is not just communication adjusting to infrastructures, but infrastructures modified by phenomena of compression« (ibid., 47).

78

Cf. for instance James Glanz, »Power, Pollution and the Internet,« New York Times, Sept. 23, 2012, and Nicole Starosielski, Janet Walker (eds.), Sustainable Media: Critical Approaches to Media and Environment, New York: Routledge, 2016.

79

Ken Florance, »How Netflix Works With ISPs Around the Globe to Deliver a Great Viewing Experience,« Netflix Newsroom, March 17, 2016, https://about.netflix.com/en/news/how-netflix-works-with-isps-around-the-globe-to-deliver-a-great-viewing-experience.

80

Cf. Peter Judge, »Netflix’s Data Centers are Dead, Long Live the CDN!« DatacenterDynamics, August 20, 2015, https://www.datacenterdynamics.com/en/opinions/netflixs-data-centers-are-dead-long-live-the-cdn/.

81

Indeed, to protect the Internet from collapse: »These so-called Open Connect appliances serve a simple purpose: To keep Netflix from clogging up the Internet. In North America alone, Netflix is singlehandedly responsible for 37 percent of downstream Internet traffic during peak hours. The service as a whole streams 125 million hours of content every single day. Without relieving as many pressure points as possible, things could get ugly, fast. The total capacity of the Internet’s country-to-country backbone is 35TB per second, says Ken Florance, Netflix’s VP of content delivery. ›Our peak traffic is more than that … Our scale is actually larger than the international capacity of the Internet.‹ Netflix doesn’t literally break the Internet because the vast majority of its traffic is delivered locally, via Open Connect, rather than across the transoceanic cables that connect the Internet between continents« (Brian Barrett, »Netflix’s Grand, Daring, maybe Crazy Plan to Conquer the World,« WIRED, March 27, 2016, https://www.wired.com/2016/03/netflixs-grand-maybe-crazy-plan-conquer-world/). A more detailed examination of the traffic footprint of streaming servers can be found in Tim Boettger, Felix Cuadrado, Gareth Tyson, »Open Connect Everywhere: A Glimpse at the Internet Ecosystem through the Lens of the Netflix CDN,« ACM SIGCOMM Comput. Commun. Rev. 48/1 (Jan. 2018), 28–34, https://doi.org/10.1145/3211852.3211857. On the »integrative« logistical role of data centers and their »operative mobility,« see also Rossiter, Software, Infrastructure, Labor, 138–181.

82

In the long term, this can lead to the fragmentation of the Internet as a »globally consistent address space,« as Geoff Huston has argued: »We are seeing the waning use of a model that invests predominantly in carriage, such that the user is ›transported‹ to the door of the content bunker. In its place we are using a model that pushes a copy of the content towards the user, bypassing much of the previous carriage function. […] But this model also raises some interesting questions about the coherence of the Internet. […] [W]e are seeing some degree of segmentation, or fragmentation, in the architecture of the Internet as a result of the service delivery specialization« (Geoff Huston, »The Death of Transit?« APNIC Blog, Oct. 28, 2016, https://blog.apnic.net/2016/10/28/the-death-of-transit/).

83

Sandvig, »Internet as Anti-Television,« 234.

84

Cf. John T. Caldwell, »Convergence Television: Aggregating Form and Repurposing Content in the Culture of Conglomeration,« in: Lynn Spigel, Jan Olsson (eds.), Television After TV, Durham, NC: Duke University Press, 2004, 41–72; Ghislain Thibault, »Streaming: A Media Hydrography of Televisual Flows,« Journal of European Television History & Culture 4/7 (2015), 110–119; and Simon Rothöhler, »Content in Serie,« Merkur. Deutsche Zeitschrift für europäisches Denken 778 (March 2014), 231–235.

85

Sandvig, »Internet as Anti-Television,« 233, 237.

86

Cf. Cisco Public, The Zettabyte Era: Trends and Analysis (white paper), Cisco VNI, June 07, 2017.

87

»It’s a fundamental change to the way data has been routed over the internet for decades, which was classically conceived of as a tiered hierarchy of internet providers, with about a dozen large networks comprising the ›backbone‹ of the internet. The internet today is no longer tiered; instead, the experts who measure the global network have a new description for what’s going on: it’s the flattening of the internet. […] As video flows through increasingly vertically integrated networks, technologies that hew to the net’s principles of decentralization are getting left behind. It’s a simple function of supply and demand—video piped directly from Amazon or Netflix to a consumer ISP is simply a better experience« (Joon Ian Wong, »The Internet Has Been Quietly Rewired, and Video Is the Reason Why,« Quartz, Oct. 5, 2016, https://qz.com/742474/how-streaming-video-changed-the-shape-of-the-internet/).

88

The Wired author Bruce Sterling popularized the term »stacks« to describe the »Big Five« in terms of industrial policy (Alphabet, Amazon, Apple, Facebook, Microsoft; for an expanded understanding of the term as a »megastructure« of ubiquitous computation, see Benjamin Bratton, The Stack: On Software and Sovereignty, Cambridge, MA: MIT Press, 2015). Another central player in the allocation of global data traffic volume is the pornography monopolist MindGeek (previously Manwin), whose video aggregators (including Pornhub, YouPorn, RedTube) together claim more bandwidth than Amazon or Facebook (cf. Shira Tarrant, The Pornography Industry, Oxford: Oxford University Press, 2016; Joe Pinsker, »The Hidden Economics of Porn,« The Atlantic, April 4, 2016, https://www.theatlantic.com/business/archive/2016/04/pornography-industry-economics-tarrant/476580/; Katrina Forrester, »Lights. Camera. Action: Making Sense of Modern Pornography,« The New Yorker, Sept. 26, 2016; David Auerbach, »Vampire Porn,« Slate, Oct. 23, 2014, https://slate.com/technology/2014/10/mindgeek-porn-monopoly-its-dominance-is-a-cautionary-tale-for-other-industries.html).

89

Cf. Nicole Starosielski, The Undersea Network, Durham, NC: Duke University Press, 2015.

90

Paul Caplan, JPEG: The Quadruple Object, PhD thesis, Birkbeck, University of London, 2014, 174.

91

Mackenzie, »Codecs,« 50.

92

Jonathan Sterne has more closely examined the regulative process of »standard-making« using the example of the Moving Pictures Experts Group (MPEG) established in 1988: Sterne, MP3, 128–147.

93

This is essentially the idea in: Manovich, Software Takes Command.

94

Cf. Paul Dourish, The Stuff of Bits: An Essay on the Materialities of Information, Cambridge, MA: MIT Press, 2017; and Jean-François Blanchette, »A Material History of Bits,« Journal of the American Society for Information Science and Technology 62/6 (2011), 1042–1057. On staking out the discursive field in media studies, cf. Ramón Reichert, Annika Richterich, »Introduction. Digital Materialism,« Digital Culture and Society 1/1 (2015), 5–18, https://doi.org/10.14361/dcs-2015-0102; and Jussi Parikka, A Geology of Media, Minneapolis: University of Minnesota Press, 2015, 1–29.

95

Matthew G. Kirschenbaum, Mechanisms: New Media and the Forensic Imagination, Cambridge, MA: MIT Press, 2008.

96

»JPEG photography is a complex ecology of human and unhuman objects connecting the photographer, the camera, the silicon and battery, the factories and poisoned workers, the card and the router, Web 2.0 businesses, servers and the power that runs them, the carbon burnt to keep those searchable archives running, the ›friend‹ and searcher, the IP lawyer and countless other actants. This project is about those objects and the complex, inaccessible relations and connections that make up digital imag(in)ing« (Caplan, JPEG, 11). See also Trebor Scholz (ed.), Digital Labor: The Internet as Playground and Factory, New York: Routledge, 2013; and Rossiter, Software, Infrastructure, Labor.

97

Tung-Hui Hu, A Prehistory of the Cloud, Cambridge, MA: MIT Press, 2015.

98

Cf. Jussi Parikka (ed.), Medianatures: The Materiality of Information Technology and Electronic Waste, Open Humanities Press, 2011, http://www.livingbooksaboutlife.org/books/Medianatures; Christian Fuchs, Digital Labour and Karl Marx, New York: Routledge, 2014; Babette B. Tischleder, Sarah Wasserman (eds.), Cultures of Obsolescence: History, Materiality, and the Digital Age, Basingstoke, UK: Palgrave Macmillan, 2015.

99

Cf. Joel Combiner, »Carbon Footprinting the Internet,« Consilience: The Journal of Sustainable Development 5/1 (2011), 119–124. The rapidly growing, volume-intensive proliferation of audiovisual traffic has also had an effect on the expansion of the transoceanic undersea cable network while, for instance, satellite technology has fallen behind, as Nicole Starosielski has shown: »Over the past twenty years, satellites’ capacity has filled up, and conditions have shifted significantly to favor fiber-optic cables. Cables are now able to carry a greater amount of information at faster speeds and at lower cost than satellites (a signal traveling between New York and London takes about one-eighth the time to reach its destination by cable as it does by satellite). With the emergence of high-definition video and high-bandwidth content on the Internet (a shift that favors cable infrastructure), the disparity between the two looks like it will increase. Despite the rhetoric of wirelessness, we exist in a world that is more wired than ever« (Starosielski, Undersea Network, 9).

100

David M. Berry, »Messianic Media: Notes on the Real-time Stream,« Stunlaw (blog), Sept. 12, 2011, http://stunlaw.blogspot.com/2011/09/messianic-media-notes-on-real-time.html.

101

»As opposed to previous mechanisms that work on opening and closing server connections, and pulling in information on request, this new type of data processing performs a continuous query for new data units that arrive in the database and pushes the result into the stream according to the filter being used. The result is thus a persistent, real-time connection between a server and a user« (Nadav Hochman, »The Social Media Image,« Big Data & Society, July-Dec. 2014, 1–15, here: 2).

102

Cf. John Borthwick, »Distribution … Now,« THINK/Musings (blog), May 13, 2009, http://www.borthwick.com/weblog/2009/05/13/699/.

103

»The way we have traditionally thought about the Internet has been in terms of pages, but we see this changing to the concept of ›streams.‹ In essence, the change represents a move from a notion of information retrieval, where a user would attend a particular machine to extract data as and when it was required, to an ecology of data streams that form intensive information environments. […] Importantly, the real-time stream is not just an empirical object; it also serves as a technological imaginary. […] In the world of the real-time stream, it is argued that the user will be constantly bombarded with data from a thousand (million) different places, all in real-time, and that without the complementary technology to manage and comprehend the data she would drown in information overload. But importantly, the user will also increasingly desire the real-time stream, both to be in it, to follow it, and to participate in it« (David M. Berry, »Real-Time Streams and the @Cloud,« Stunlaw (blog), Jan. 13, 2011, http://stunlaw.blogspot.com/2011/01/real-time-streams-and-cloud.html); see also Lev Manovich, »Data Stream, Database, Timeline,« Software Studies Initiative (blog), Oct. 27, 2012, http://lab.softwarestudies.com/2012/10/data-stream-database-timeline-new.html.

104

Cf. Minos Garofalakis, Johannes Gehrke, Rajeev Rastogi, »Data Stream Management. A Brave New World,« in: Data Stream Management: Processing High-Speed Data Streams, New York: Springer, 2016, 1–13.

105

Real-time is therefore not only conveyed through media technology, that is, produced by it, but can also be flexibly modulated and fine-tuned by it—there are various (e.g., platform-specific) forms of »realtimeness« (see Esther Weltevrede, Anne Helmond, Carolin Gerlitz, »The Politics of Real-Time: A Device Perspective on Social Media Platforms and Search Engines,« Theory, Culture & Society 31/6 (2014), 125–150).

106

Cf. Chun, Updating to Remain the Same.

107

Christoph Engemann, Florian Sprenger, »Im Netz der Dinge. Zur Einleitung,« in: Engemann, Sprenger (eds.), Das Internet der Dinge. Über smarte Objekte, intelligente Umgebungen und die technische Durchdringung der Welt, Bielefeld: Transcript, 2015, 7–58, here: 28.

108

On high-frequency trading (HFT) cf. Matthew Tiessen, »High-Frequency Trading and the Centering of the (Financial) Periphery,« Volume #32, Sept. 09, 2012, http://volumeproject.org/high-frequency-trading-and-the-centering-of-the-financial-periphery/.

109

More on this in Part III.

110

Cf. Byron Ellis, Real-Time Analytics: Techniques to Analyze and Visualize Streaming Data, New York: Wiley, 2014.

111

Cf. Peter Krapp, Noise Channels: Glitch and Error in Digital Culture, Minneapolis: University of Minnesota Press, 2011; and Rosa Menkman, The Glitch Moment(um), Amsterdam: Institute of Network Cultures, 2011.

112

Weltevrede, Helmond, Gerlitz, »The Politics of Real-Time,« 137.

113

Axel Volmar, »Zeitkritische Medien im Kontext von Wahrnehmung, Kommunikation und Ästhetik. Eine Einleitung,« in: Volmar (ed.), Zeitkritische Medien, Berlin: Kadmos, 2009, 9–26, here: 10. See also Julian Rohrhuber, »Das Rechtzeitige. Doppelte Extension und formales Experiment,« in: ibid., 195–212.

114

Rubinstein, Sluis, »The Digital Image,« 30f.

115

»Protocols are highly formal; that is, they encapsulate information inside a technically defined wrapper, while remaining relatively indifferent to the content of information contained within« (Alexander R. Galloway, Protocol: How Control Exists after Decentralization, Cambridge, MA: MIT Press, 2006, 7f.).
