1 Introduction
The digital three-dimensional documentation of archaeological sites, features and artifacts has become a standard approach in the past decade, replacing traditional analog methods almost completely. The digital methods are more efficient and accurate. They also make it possible to produce a digital copy of a physical reality, whereas the conventional methods always yield a derivation of the three-dimensionality of the actual shape of a place, space or object. Such a derivation always loses something and can never represent the shape and appearance of reality with the same degree of accuracy. Moreover, a drawing, sketch or plan is always interpretative and depends on the experience and the specific research interest of the draughtsman.1 To a certain extent, this also applies to photography, since the photographer must decide on a shooting distance or angle, which again depends on a specific interest of the researcher. In theory, digital 3D methods are capable of capturing a scalable and neutral copy of an excavated feature, whose shape is independent of the experience and interest of the researcher; of course, the decision to remove a layer is already an interpretative act that depends heavily on the factors mentioned, so this statement refers only to the process of documentation itself. From this digital facsimile of the site, all views, plans and illustrations can then be derived easily at any scale. In fact, digital recording in this case is preservation by documentation, in which fragile objects and features are transformed from their physical forms into digital format. In addition, through digital recording of the excavation process, archaeologists harvest large corpora of born-digital datasets that offer present and future scholars vast research potential.
In practice, however, this type of ideal documentation cannot be expected, because no technology has yet been able to capture all the details of a site at all necessary scales in a single step. In the reality of fieldwork, researchers and digital specialists must decide beforehand what is to be recorded and at what level of detail, and which technology is best suited for this purpose. In most cases it is necessary to combine different technologies and methods to meet the requirements of archaeological research in the best possible way.2 All 3D data can be integrated into globally valid coordinate systems and combined in this way to produce ever new, demand-oriented derivatives of reality that serve a specific research interest. The term copy does not really apply either, since such a copy can only preserve the geometry and coloration of a space or object. All the other senses we use in the perception of an archaeological site have so far been difficult or impossible to preserve virtually. And of course, even the best facsimile remains a copy and cannot preserve or replace a physical site or object, even with the most sophisticated visualization or printing techniques.3
However, there is no doubt that no other approach currently available allows us to copy and preserve such a large amount of archaeologically relevant information. Digital documentation has proven to be of the utmost importance in the recording of fragile artifacts, organic materials, and human remains in their varied contexts. Indeed, human remains and archaeological objects made of organic, degradable and corroded materials are the most endangered finds, because they are prone to loss and disintegration once exposed to environmental agents of deterioration during excavation.
A problem that has received little attention so far is that 3D information is rarely integrated as such in the research process. Rather, it is usually used to derive two-dimensional plans and views from the three-dimensional data sets. This is not surprising, since archaeologists are used to and experienced in conducting their research with such derivatives for their printed books and articles, which have long been the only means of knowledge transfer and presentation. Working with 3D data is, moreover, new to many archaeologists and often requires the use of complex software that must first be learned. In addition, suitable hardware and software for working efficiently with the recorded datasets are often lacking. Therefore, specialists are usually required to extract information from the complex 3D data and present it to the researchers as conventional products such as pictures or plans. Used in this way, the 3D models are only an intermediate product for creating traditional visualizations and as such have no value of their own. Another decisive reason for the limited distribution of 3D models is certainly the lack of possibilities to publish them in standardized and sustainable formats in combination with conventional publications. As Olson has already pointed out, we run the risk of simply imitating traditional methods digitally, making them faster and more accurate, without realizing the analytical potential of 3D methods.4
When we work directly with 3D data on a computer screen, we must always bear in mind that this is already a two-dimensional derivation that has lost some essential information, such as the impression of the size of the object or space. With the technologies of Virtual Reality (VR), however, possibilities have been available for years now to experience contexts and objects at their real size and in their actual environment. So far, the use of such tools has mostly been limited to the dissemination of research results to the public and is hardly reflected in research or university education.5 Moreover, archaeological VR applications focus more on the reconstruction and immersive experience of the past and less on the current appearance of a site after or even during excavation.6 An exception to this is the Çatalhöyük project, which for decades has served as a testing ground for various digital technologies in a large-scale archaeological excavation.7 As early as 2009, the project started to document the excavation process in 3D in order to analyze the data in VR environments.8 The project has clearly demonstrated the potential of such an approach, and it can be seen as a methodological blueprint for the digital recording and analysis of a site in the different phases of its investigation.9 However, it also shows the enormous technical effort and knowledge that is necessary for this. The technical and human resources used are available only to a fraction of archaeological field projects, and a direct transfer to another project is therefore usually out of the question.
In our contribution we want to discuss how we developed a digital, three-dimensional documentation strategy for the tombs in Shaft K24 at Saqqara with significantly less personnel and hardware, a strategy that nevertheless meets the requirements of the project and offers the researchers a direct scientific benefit that could not have been achieved with conventional methods. We will focus especially on how we integrated the results directly into the research and conservation processes and how these processes were in turn influenced by them.
2 The Saqqara Saite Tombs Project (SSTP)
The Saqqara Saite Tombs Project (SSTP) of the University of Tübingen received two rounds of funding from the German Research Foundation, from 2016–2019 and 2020–2023. It began as essentially a second round of excavation and documentation of the Saite-Persian tombs (Dynasties 26 and 27, ca. 664–404 BC) located to the south and east of the pyramid of King Wenis of Dynasty 5 (ca. 2345–2315 BC) at Saqqara.10 Since one of the main goals of the SSTP is to produce exact facsimiles of the texts of these tombs, we discussed the advantages of employing terrestrial laser scanning (TLS) and image-based modelling (IBM) to obtain rectified and high-resolution images of the texts on the vaulted ceilings of the burial chambers. We also decided to employ a confluence of digital technologies in the mapping of the site’s subterranean and aboveground structures. The digital documentation strategy for the tombs was only recently described and discussed by the authors.11 The combination of laser scanning and image-based modelling has proven to be an efficient documentation method for the tombs investigated up to now. The integrated approach offered the possibility to record every feature at the desired scale and resolution. As already mentioned, most of the features recorded so far were excavated decades ago, and the 3D models therefore always represent the final stage of the excavation process. The process itself can no longer be reproduced from the models because the information was not recorded at the time. This reduction to the last phase of the excavation process applies not only to the shaft tombs in Saqqara that we investigated, but to the same extent to all other previous 3D documentation projects in Egyptian tombs, such as those of Seti, Nefertari or Tutankhamun.12
The situation is quite different in the newly discovered Shaft K24. Here, for the first time, it was possible to record all phases of the excavation of an untouched Egyptian tomb complex in 3D and to integrate the results directly in the research and conservation processes.
Recognizing the advantages and research potential of digital documentation, we decided to digitally record the excavation processes of the hallways and burial chambers in Shaft K24 of the Saite mummification workshop complex at Saqqara. Shaft K24 is spatially and functionally associated with two embalming facilities, namely a subterranean embalming room and a tent of purification, i.e., a structure called ibu. The shaft is located in the middle of the ibu-structure and measures 3 m × 3.50 m. It reaches down to a depth of 30 m and served as the communal burial shaft of the mummification workshop complex (Fig. 8.2). It has six tombs cut into its walls at different depths. They range from simple loculi with one or two mummies (Tombs 1 and 4) to a large room with multiple burials (Tomb 5), a complex of niches arranged along corridors (Tomb 2), and complexes of burial chambers laid out around hallways (Tombs 3 and 6) (Fig. 8.3).
Tomb 6 is cut into the north wall of Shaft K24 at a depth of 30 m. It consists of two hallways on a north-south axis and six burial chambers. These burial chambers are arranged in pairs around the two hallways: one pair on the west (K24 W1-W2), another on the east (K24 E1-E2), and a third on the north (K24 N1-N2). These burial chambers yielded diverse and significant archaeological finds, including 17 badly decayed human mummies, 19 calcite and pottery canopic jars, 4 limestone sarcophagi, 10 badly thermo-disintegrated and decayed wooden coffins, thousands of faience shawabti figurines, a dozen miniature marl clay and faience embalming cups, small symbolic sundried mud boat models, and a gilded silver mummy mask. Only the adoption and implementation of 3D digital technologies allowed us to precisely record these fragile human remains and degraded artifacts in their original archaeological contexts.
3 3D Documentation of the Archaeological Excavation Process
3D technologies are becoming an increasingly essential tool in the toolbox of field archaeology. Due to their accuracy and efficacy, they are replacing, or in some areas have already completely replaced, conventional methods.13 This development is mostly driven by the availability of efficient, robust, and easy-to-use software environments based on image-based algorithms such as Structure-from-Motion and Multi-View Stereo.14 This Image-Based Modelling (IBM) is comparatively inexpensive, its basic features are easy for an archaeologist to learn, and it does not require extensive specialist knowledge.15 Furthermore, no special equipment is required: a standard digital camera for recording the necessary overlapping image data sets, a total station or DGPS for locating and scaling the results, and a computer for calculating the models are available in almost every project. The necessary software is in most cases inexpensive for scientific use, and the manufacturers all provide free trial versions that allow a low-threshold entry into IBM.
Terrestrial Laser Scanning (TLS), by contrast, is comparatively expensive, and the necessary equipment is hardly affordable for most archaeological projects. A laser scanner, powerful computers and storage systems, and the required special software easily reach the one-hundred-thousand-dollar threshold. Furthermore, the skills of a specialist are necessary. This applies less to the scanning itself than to the processing of the data in special software packages, whose extensive functionalities are difficult to master.
These observations easily explain IBM’s great success in archaeology, while TLS has remained a highly specialized application, available only to a few projects, that could not establish itself as the standard documentation method in archaeology because of its cost and complex data processing. In the following we want to discuss why we nevertheless combined both methods for the documentation of the burial chambers in Shaft K24.
4 3D Documentation Strategy of Shaft K24
The image-based approach obviously has some clear advantages over the laser scanner, which are particularly evident in narrow spaces. The scanner must always be placed on a tripod, which is often a challenge, especially within complex archaeological contexts, and requires extreme caution from the operator. Since the scanner cannot capture the area beneath the device, part of the space to be captured always lies in the so-called scan shadow, so the device must be set up several times in order to capture the whole feature.
Since the scanner uses an active sensor, it offers the advantage that it can be used in the dark, without additional light sources, to capture highly accurate 3D information.16 However, this only applies to the geometric information, so we could exploit this advantage only in the corridors and shafts, whose surfaces do not carry archaeologically relevant color information. If the scan is to be provided with color information, this must be recorded in a second step with the scanner’s built-in camera system or an external setup. As the color information undoubtedly represents a decisive component of most features and artifacts, these must be illuminated. As a result, data acquisition with the scanner is a complex, physically demanding task, and the use of the heavy equipment with its large tripod requires extreme caution from the operating crew.
In spite of these obvious disadvantages, laser scanning offers some benefits over the image-based methods. For example, several hundred individual scans can be combined with high accuracy in a semi- or even fully automated process to create an integrated model of the entire site. In the case of the SSTP, this has made the complex interrelationships of the underground and above-ground features visible (Fig. 8.2). Image-based models, on the other hand, usually have to be connected manually and never reach the accuracy of the registered scans.17 Furthermore, they are not scaled, and control points measured with a total station or a measuring tape are necessary to scale the object to its actual size. In contrast, the point-clouds recorded by the laser scanner are always scaled in a metric system, and no additional information is required to derive accurate measurements.18 They are therefore perfectly suited to derive highly accurate plans and views that cannot be achieved with any other technology at this level of accuracy, precision, and speed. In this manner, the scientists involved can access detailed plans and sections shortly after the recording process.
The result of the TLS process is always a dense point-cloud, which is only conditionally suitable to represent reality, since our physical environment does not consist of a multitude of points distributed in space but of solid geometric objects with closed surfaces. In plans and sections this becomes visible only at very high magnification and can therefore be neglected. In the case of photorealistic derivations or the direct use of the objects in digital 3D environments, however, a strong alienation effect occurs, which makes the models look very far removed from reality.19 Using Poisson Surface Reconstruction, the point-clouds can be converted into meshes, but this process requires the point normals, which indicate the orientation of the surface.20 While IBM calculates them automatically, for TLS data they must be derived from the point-clouds, a step that may lead to an incorrect reconstruction. In addition, the common tools for registering point-clouds are not able to calculate the meshes, and additional, in most cases very expensive, software is required. Texturing the meshes on the basis of the images used for the point-cloud coloring, or of the color of the point-cloud itself, is also possible, but the results are rarely satisfactory, especially in comparison to the models resulting from the image-based methods (Fig. 8.1).
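To make the step just described concrete, the following is a minimal sketch of normal estimation followed by Poisson Surface Reconstruction, written with the open-source Open3D library rather than the commercial software used in the project; the file names, search radius and reconstruction depth are illustrative assumptions only.

```python
# Minimal sketch (Open3D, not the project's software): meshing a TLS point cloud
# with Poisson Surface Reconstruction. File names and parameters are placeholders.
import open3d as o3d

pcd = o3d.io.read_point_cloud("chamber_E1_scan.ply")   # exported, registered scan

# TLS point clouds usually lack normals, so they must be estimated first;
# a poorly chosen neighborhood is a typical source of faulty reconstructions.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
# Orient the normals consistently, here towards an assumed scanner position.
pcd.orient_normals_towards_camera_location(camera_location=[0.0, 0.0, 0.0])

# Poisson reconstruction returns a closed mesh plus per-vertex densities that
# can be used to trim weakly supported (interpolated) regions afterwards.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=10)
o3d.io.write_triangle_mesh("chamber_E1_mesh.ply", mesh)
```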
Without doubt, the scanner, with its resolution of 6.3 mm at a distance of 10 m, is not suitable to capture all features in sufficient detail. In order to record all relevant information at a reasonable resolution, it would be necessary to apply a whole series of different scanner systems, each with a suitable resolution. This approach is not very practical, for it requires extensive and expensive equipment with special hardware and software that produces a variety of different data formats.21 In contrast, the achievable resolution of IBM is not fixed by the recording device but is defined by the resolution and size of the camera sensor, the selected focal length and the distance from the object to be recorded. This makes the procedure highly variable, as it can easily be adapted to diverging requirements by changing basic parameters such as the lens or the distance from the object. In this way, the IBM approach is suitable for both large rooms and small objects.22
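The relationship just described can be expressed as a simple calculation of the ground sampling distance (GSD): the pixel pitch of the sensor multiplied by the object distance and divided by the focal length. The sketch below uses values roughly matching a 24-megapixel full-frame camera such as the Nikon D750 mentioned later; the lens and distances are illustrative assumptions.

```python
# Worked example of the GSD relation: GSD = pixel pitch x distance / focal length.
# Sensor values approximate a 24 MP full-frame body; lens and distances are assumed.
sensor_width_mm = 35.9
image_width_px = 6016
pixel_pitch_mm = sensor_width_mm / image_width_px   # ~0.006 mm per pixel

def gsd_mm(focal_length_mm: float, distance_mm: float) -> float:
    """Size of one image pixel projected onto the object, in millimetres."""
    return pixel_pitch_mm * distance_mm / focal_length_mm

# A 50 mm lens at 1 m gives roughly 0.12 mm per pixel, at 2 m roughly 0.24 mm,
# consistent with the sub-0.25 mm GSD reported below for the burial chambers.
print(round(gsd_mm(50, 1000), 3), round(gsd_mm(50, 2000), 3))
```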
In contrast to the data sets acquired by laser scanning, the data for the IBM can be directly meshed and textured with the high-resolution images used for reconstruction in a combined and seamless workflow in the same software.23 Thus the models generated with IBM reproduce reality much more accurately than the colored point-clouds recorded with the laser scanner as already described above (Fig. 8.1).
Figure 8.1
Comparison of textures and meshes from IBM and TLS of chamber K24 E1. The top two shots show the textured models and the bottom ones only the meshes.
In contrast to those benefits, the approach has some clear downsides compared to TLS. Although image-based methods are able to combine several hundred digital pictures into a 3D model, the processing of the data is time-consuming and can easily take several hours or even days, depending on the size of the image set and the available computing resources.24 Therefore, it always takes some time to check the results of the data acquisition before the excavation can proceed and previously documented features can be removed.25 The data recorded by the laser scanner, meanwhile, are available directly after, or even during, the scanning process. Checking the generated models is of high significance, because not every acquisition is successful on the first attempt. Failures can be caused by a lack of overlap between images, incorrect exposure, blurring due to overly long exposure times or a lack of depth of field. During the recording process these errors are easily overlooked and only become visible during or after processing. Structures with few visible features, such as smooth walls or highly reflective objects, are especially difficult to capture and in many cases require alternative capture strategies.26 All this easily leads to incomplete or highly interpolated, inaccurate models. In some cases, the model generation will only partially succeed or even fail. If an archaeological feature is removed during excavation before the acquired records have been checked, the resulting models will be incomplete at best, and in the worst case the context will be lost.27 For this reason, iteratively adding more images to faulty models is a common procedure, but this is only possible if the recorded feature is left untouched for the time it takes to create a satisfactory model. Particularly in an excavation with a very tight schedule, such as the one carried out in K24, with a large number of specialists with different tasks working in a very confined space, this requirement could be met only in the rarest of cases, and the recording of the data for the IBM usually had to be done in one attempt so as not to interrupt the ongoing excavation and conservation processes. In this case, it is essential that the documentation team has extensive experience in recording image data sets for IBM.
Furthermore, the number of images to be processed simultaneously is limited by the computing power of the available hardware. Today, a few thousand photographs can be combined into a model in a single process, but this requires extremely powerful computers and the computing process still takes several days. The resulting models usually cannot be displayed and used in their full resolution and the reduction to a fraction of the original size is essential in order to make it possible to work with such a model. This is, however, one of the great strengths of the TLS, which allows the registration of several hundred scans in a manageable time in an integrated, usable model in its full resolution.
As we have shown, both methods have advantages and drawbacks. We have therefore decided to combine both technologies in order to record all relevant information in a suitable scale, resolution and accuracy.
5 Laser Scanning of Shaft K24
In 2017 and 2018, the already excavated shafts as well as all surface structures and features were completely recorded with 240 individual laser scans, which were merged into a single model of the whole site. Shaft K24 was scanned individually and then integrated into this general model. We used a Leica P40 scanner, which proved to be an excellent and efficient tool, especially on the surface with its range of 270 meters.28 A full-dome 360° scan at full range can be obtained in less than two minutes with a resolution of 6.3 millimeters at a distance of ten meters. Plans and sections at a scale of 1:20 could thus be derived easily. We chose this resolution as a suitable compromise between speed, resolution and data size, as we used an image-based approach for the more detailed models. With a higher resolution, the size of the data and the scanning time increase significantly, as does the time needed for registration and post-processing. Less satisfactory, however, is the built-in camera of the scanner, which is necessary to colorize the point-cloud. With this camera, even under the best lighting conditions, the acquisition of an image set takes up to eight minutes, and due to the very small image sensor, the quality of the images is not sufficient for our purposes. The duration of scanning and image acquisition is of immense importance, especially on a busy site like Saqqara, where it is extremely difficult to keep workers, archaeologists and tourists away from the scanning area. Therefore, we decided to use an iSTAR 360 panoramic camera, which collects a fifty-megapixel HDR data set within a few seconds, depending on the light conditions. The iSTAR panoramic camera sped up the process significantly compared to the scanner’s built-in camera, and the undesirable capturing of passersby and other individuals was consequently reduced. Despite the high resolution of the iSTAR 360 camera, the quality of the images is not entirely satisfactory. In comparison with a standard DSLR, the main problems are the unreliable white balance and the problematic behavior of the camera in backlight or sidelight, which very quickly leads to blurred and low-contrast images. Since the images are processed directly in Leica Cyclone, subsequent adjustment is hardly possible. Particularly when taking images in confined spaces, it quickly becomes apparent that the unrecorded area under the camera is larger than the shadow of the scanner. This means that in each scan an uncolored ring remains around the location of the scanner and camera, which must be cut out manually in a time-consuming process.
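For orientation, the scanner resolution quoted above (6.3 mm at a distance of 10 m) corresponds to a fixed angular step, so the point spacing on the recorded surfaces grows roughly linearly with range. A small, purely illustrative calculation:

```python
# Illustrative only: point spacing for a fixed angular step derived from the
# "6.3 mm at 10 m" setting quoted above; real scanners quantize the available steps.
angular_step_rad = 0.0063 / 10.0   # about 0.63 mrad

for range_m in (2, 10, 50, 270):
    spacing_mm = angular_step_rad * range_m * 1000
    print(f"{range_m:>3} m -> about {spacing_mm:.1f} mm point spacing")
```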
Also, in the narrow shafts, corridors and chambers, the limitations and problems of this approach became evident. The Leica P40 laser scanner weighs almost 13 kilograms and must be levelled on a tripod for each scan. It was often very difficult to find a suitable place to set up the tripod in the narrow chambers without disturbing the features and artifacts. An even bigger problem was again the large scan shadow. Especially in the very narrow burial chambers it was not always possible to place the device in such a way that the feature could be completely captured (Fig. 8.1). Equally complex was the acquisition of the image data set for the coloration of the point-cloud. For this purpose, the features had to be illuminated as evenly as possible with hand-held LED lights placed behind the camera. This proved very difficult due to the wide angle of the panorama camera and demanded almost artistic skill and physical agility from the operators, who had to avoid stepping on the artifacts distributed on the ground.
The post-processing of the data was carried out in Leica Cyclone, Leica Cyclone REGISTER 360 and Autodesk Recap. First, we combined the scans with the images collected with the iSTAR camera in a semi-automated process and exported a colorized version of every scan station from Leica Cyclone. Second, we used the fully automated registration process, based on an Iterative Closest Point algorithm (ICP) and implemented in Leica Cyclone REGISTER 360, to merge the single scans.29 The use of two different tools was necessary because the merging of panorama and scan only worked in Cyclone, while the automated registration of scans was only available in Cyclone REGISTER 360. The georeferencing of features and structures was carried out using black-and-white targets measured with the total station, which was stationed in the UTM/WGS84 coordinate system used by the project, a system that has been used for decades in Saqqara as the basis for all surveying work in order to link the maps of all archaeological projects.30 In a final step, the merged data set was exported to a standardized data format (E57) and imported into Autodesk Recap for cleaning. The resulting Recap data set could be opened directly in Autodesk AutoCAD or PointCab to derive highly accurate plans and sections (Fig. 8.2). Exporting the data to a standardized open format guarantees its long-term usability and makes the data independent of Leica’s proprietary, expensive, unstable, and not very user-friendly in-house software.
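The fully automated registration mentioned above relies on the ICP principle. As a rough illustration of that principle, and not of the Leica workflow itself, the following sketch refines the alignment of two pre-cleaned scan stations with the open-source Open3D library; the file names, voxel size and distance threshold are assumptions.

```python
# Minimal ICP sketch with Open3D (not the Leica software used in the project).
# File names and numeric parameters are placeholders.
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("scan_station_01.ply")
target = o3d.io.read_point_cloud("scan_station_02.ply")

# Downsample for speed; ICP only refines an initial alignment (identity here,
# whereas in practice a coarse target- or feature-based alignment comes first).
source_ds = source.voxel_down_sample(voxel_size=0.02)
target_ds = target.voxel_down_sample(voxel_size=0.02)

result = o3d.pipelines.registration.registration_icp(
    source_ds, target_ds,
    max_correspondence_distance=0.05,
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

print("fitness:", result.fitness, "inlier RMSE:", result.inlier_rmse)
source.transform(result.transformation)   # apply the estimated rigid transform
```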
Figure 8.2
Shaft K24 connected with the features on the surface by TLS
For each archaeological context we created a completely new model after each phase of the excavation and then linked these models together. The model of Shaft K24 was then integrated into the model of the entire site (Fig. 8.2). In Autodesk Recap, the individual rooms and contexts can thus be switched on and off at all stages, making both the chronological sequence of the research process and the complex spatial relationships of the underground and above-ground features visible. Although the procedure proved to be less than ideal, the chambers, corridors and shafts could usually be surveyed in a short time, and the resulting plans and sections could be made available to archaeologists and conservators a few hours later (Fig. 8.3).
Figure 8.3
TLS generated floor plan of the chambers
6 Image-Based 3D-Documentation
To obtain photorealistic virtual copies of the contexts and artifacts, we supplemented the laser scanning with an image-based approach based on the Structure-from-Motion and Multi-View Stereo algorithms. We chose Agisoft Metashape Pro for the whole process, from the orientation of the images to the texturing of the meshed model. Metashape is the most widespread IBM tool in archaeological contexts, due to its functionality, usability, stability and very moderate pricing compared to its competitors on the market.
As the burial chambers of Tomb 6 of Shaft K24 are all located deep underground, it was necessary to illuminate them artificially in order to obtain a well-exposed image data set for the 3D reconstruction. In earlier campaigns, we had used a handheld LED light for the documentation of the Saite-Persian tombs, which we tried to place parallel to the camera sensor in order to obtain even lighting. Especially in narrow spaces, it was not always possible to shoot every image under the same lighting conditions. This led to uneven textures, which are extremely difficult to adjust in post-processing. Even with a strong LED lamp, the shutter speeds were so slow that it was not possible to hold the camera by hand; a sturdy tripod and a remote shutter release were therefore necessary. Despite the use of a Nikon D750 full-frame camera, increasing the ISO value led to image noise that became overly visible in the results. The process proved to be time-consuming and unsatisfactory, as it was not always possible to position the tripod and the LED well in the confined spaces. Therefore, we decided to experiment with a camera-mounted flash. Compared to the LED-and-tripod setup, the hand-held approach accelerated the process drastically, and the results were much more evenly illuminated and color-controlled. We therefore changed our workflow and used the camera-mounted flash for all areas without natural light. The flash was directed upwards at an angle of 45° and equipped with a diffuser to illuminate the object to be photographed as completely as possible and to prevent overly strong shadows. The procedure proved to be extremely flexible for capturing all details of the complex of burial chambers. For example, the floor of chamber K24 E2 was completely covered with the remains of wooden coffins, mummies and grave goods, making it impossible to work without disturbing the context. In order to obtain a complete record of the situation, we improvised: we attached the camera with mounted flash to a pole and used a smartphone as display and remote shutter release.31
To scale and georeference the models, we placed small markers in the scene and measured them with the total station in the same grid as the laser scans. Since we always left all markers in place, it was possible to scale and locate each phase of the excavation accurately with the same set of control points.
The next step was to import all images, after some minor adjustments in Adobe Photoshop Lightroom, into Agisoft Metashape Pro to generate the 3D models. This semi-automated process consisted of six consecutive steps: (1) image orientation and (2) sparse point-cloud generation, (3) dense point-cloud generation, (4) meshing of the dense point-cloud, (5) texture mapping and (6) ortho-image generation.32 In addition to the pictures, the coordinates of the control points can be imported into the software and combined manually with the markers visible on the images in a semi-automated process in order to scale and georeference the model and its derivatives.
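These steps can also be scripted with the Metashape Python API shipped with the Professional edition. The sketch below is only an outline of the six steps, not the workflow actually used in the field: method and argument names differ between API versions (the naming here roughly follows recent releases), and all paths, labels and parameter values are assumptions.

```python
# Outline of the six Metashape processing steps via the Python API (Professional
# edition). Method names vary between versions; paths and parameters are placeholders.
import glob
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(glob.glob("images/K24_E1/*.tif"))

chunk.matchPhotos(downscale=1)                  # (1) image orientation ...
chunk.alignCameras()                            # ... and (2) sparse point cloud
# Import control-point coordinates; the markers themselves are placed on the
# images interactively, as described above.
chunk.importReference("control_points.csv",
                      format=Metashape.ReferenceFormatCSV,
                      columns="nxyz", delimiter=",")
chunk.buildDepthMaps(downscale=2)
chunk.buildPointCloud()                         # (3) dense point cloud
chunk.buildModel(source_data=Metashape.DepthMapsData)     # (4) meshing
chunk.buildUV()
chunk.buildTexture(texture_size=8192)           # (5) texture mapping
chunk.buildOrthomosaic(surface_data=Metashape.ModelData)  # (6) ortho-images
doc.save("K24_E1_phase2.psx")
```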
In the next step, the necessary ortho views were derived from each model; the achieved Ground Sampling Distance (GSD) was less than 0.25 mm in all recorded burial chambers. It was usually possible to generate the models of one or two chambers or other features completely after the fieldwork, in order to make the results available to the researchers the following day and to check whether a sufficiently overlapping image data set had been recorded or whether further images had to be taken to fill gaps in the model. If this was necessary, it could be done as a first step in the morning before continuing with the archaeological work. New models of the chambers were also created using IBM after each phase of the excavation (Fig. 8.4).
Due to the considerable flexibility of the method, it was also possible to record a large number of the artifacts from the tombs in addition to the features. These were first brought to the depot, where they could be photographed under good lighting conditions on a rotating plate. We focused on artifacts that were particularly fragile, such as the gilded silver mummy mask from chamber K24 W2 (Fig. 8.5), and on objects made of organic materials that are difficult to conserve, such as the inscribed, yet badly decayed, wooden coffin of Tadihor in chamber K24 E1.
Many of the objects from the chambers were in such poor condition that they could not be removed without destroying them. This was particularly true for the wooden boxes and coffins and for the mummies. As an example, we show the mummies in the opened sarcophagi in chambers K24 W1 and K24 W2 (Fig. 8.6). These objects were documented in situ before they were removed from the chambers. Here, a resolution between 0.1 and 0.2 mm could be achieved for all objects.
Figure 8.4
Three different phases of excavation of chamber K24 W1
Figure 8.5
Rendering of the silver-golden mask from K24 W2
Figure 8.6
Rendering of the open sarcophagi in the chambers K24 W1 and K24 W2
7 Data Management
One of the problems we have encountered in recent years is the amount of data produced. More than four terabytes of raw and processed data have accumulated, representing records of irrecoverable and unique features and structures that need to be preserved for the future.33 In the last two decades, countless attempts have been made to develop new standards and infrastructures to save and secure such data. We decided to follow the guidelines of good practice published by the English Archaeology Data Service (ADS), which fully cover our requirements.34 This applies in particular to the extremely detailed metadata schema, which allows a meaningful description of the data in both technical and domain-specific respects. First of all, we transformed all data into open and sustainable formats such as TIFF for the imagery and E57 and OBJ for the 3D data. All data will be described with metadata according to the guidelines for depositors and stored in the research data portal FDAT provided by the University of Tübingen.35
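By way of illustration, a stripped-down descriptive record for a single model might look as follows; the field names are our own simplification in the spirit of the ADS guidelines and do not reproduce the ADS or FDAT schema, and all values are placeholders.

```python
# Illustrative only: a simplified metadata record in the spirit of the ADS
# guides to good practice; field names and values are placeholders, not the
# ADS or FDAT schema.
import json

record = {
    "title": "Image-based 3D model of burial chamber K24 E1, excavation phase 2",
    "project": "Saqqara Saite Tombs Project (SSTP)",
    "creator": "SSTP documentation team",
    "date_captured": "2018-04-15",
    "method": "Image-based modelling (SfM/MVS), Agisoft Metashape Pro",
    "capture_device": "Nikon D750 with camera-mounted flash",
    "coordinate_system": "UTM / WGS84",
    "formats": ["OBJ (mesh)", "TIFF (texture, ortho-images)", "E57 (point cloud)"],
    "ground_sampling_distance_mm": 0.25,
    "related_records": ["TLS model of Shaft K24", "excavation documentation"],
    "license": "CC BY 4.0",
}

with open("K24_E1_phase2_metadata.json", "w", encoding="utf-8") as f:
    json.dump(record, f, indent=2, ensure_ascii=False)
```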
8 Results and Experiences
The hybrid approach we chose, combining TLS and IBM, has proven to be effective for the project presented here, as both methods complement each other perfectly. With TLS we were able to survey the chambers, shafts and hallways in a short time, and the resulting plans and sections could be made available to the archaeologists and conservators a few hours later in order to support their decision-making during the work. All data could be integrated directly into the overall model of the site, and thus the virtual copy becomes more and more condensed, revealing the extremely complex spatial relationships of the features below and above ground ever more clearly. Our project is not the first to visualize a part or even the whole necropolis in an integrated 3D model.36 However, unlike other projects, our model is not based on the reconstruction of the past ritual landscape but on highly accurate measurements that create a visualization as neutral as possible,37 whose purpose is not to interpretively reconstruct the site but to virtually copy its current state. There can be a multitude of versions of this actual state on a timeline, documenting changes through scientific or illegal excavation, destruction and decay, but also through restoration and new discoveries.
Due to the discussed limitations of the two technologies, we used IBM and TLS to create different versions of the same archaeological contexts. It is hoped that the rapid technical development of 3D technologies will make it possible in the future to rely on a single method consistently. However, our approach of combining IBM and TLS has proven successful and has enabled us to record all relevant information at a target-oriented accuracy and resolution. Due to the highly accurate georeferencing of all models, we are always able to map the results from TLS and IBM onto each other. At the achieved accuracy, both data sets can be regarded as equivalent; a comparison of both methods in chamber E1 shows a mean error of 4 mm. With RealityCapture, which has been available for some time now, it is even possible to process both data types together, thus combining the advantages of both technologies. However, due to the small measurement error, it did not seem necessary to adapt the workflow we use.38
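A comparison of the kind behind the 4 mm figure can be sketched as a simple cloud-to-cloud distance computation; the example below again uses the open-source Open3D library rather than the software actually employed, and the file names stand in for two co-registered exports of chamber E1.

```python
# Minimal sketch of a cloud-to-cloud comparison (Open3D); the IBM cloud is
# assumed to be sampled from the textured mesh and both data sets to be
# georeferenced in the same project grid. File names are placeholders.
import numpy as np
import open3d as o3d

ibm = o3d.io.read_point_cloud("K24_E1_ibm.ply")
tls = o3d.io.read_point_cloud("K24_E1_tls.ply")

# Distance from every IBM point to its nearest TLS neighbour, in metres.
d = np.asarray(ibm.compute_point_cloud_distance(tls))
print(f"mean {d.mean() * 1000:.1f} mm, median {np.median(d) * 1000:.1f} mm, "
      f"95th percentile {np.percentile(d, 95) * 1000:.1f} mm")
```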
Although laser scanning did not prove to be ideal for fully capturing the narrow spaces in K24, it did offer the decisive advantage of the fast availability of the results in the form of plans and views. The further progress of the work was then dictated and coordinated on the basis of these derivatives from the 3D models. In addition, the scans made it possible to decide quickly where to support the ceilings of the chambers and corridors in order to prevent them from collapsing after the removal of walls and debris. Moreover, the scans make it possible to view new discoveries directly in the overall spatial and temporal context of the site and not as isolated phenomena.
The IBM-based models, on the other hand, are used by researchers to study the individual features and artifacts. They allow the chambers to be viewed repeatedly, from all sides and at all distances, under perfect lighting conditions, without having to move carefully around the extremely fragile objects in the dark rooms. Orthoimages show the researchers the rooms from a bird’s-eye view, which makes many of the complex micro-spatial relationships between the individual burials and the objects surrounding them understandable. This applies in particular to the wooden objects and their painted plasterwork, whose poor condition made it impossible in almost all cases to remove them from the chambers without destroying them. We were surprised that the various specialists in archaeology, conservation, geology and epigraphy increasingly asked, in addition to the plans and views, for a 3D PDF that would allow easy access to the 3D information.
As Polig and Llobera have already pointed out, archaeological research and its results depend heavily on discovering patterns and identifying relationships and connections.39 The recognizability of these, in turn, depends very much on the type of visualization available to understand these connections. Frischer also emphasizes the importance of data visualization, referring to Colin Ware, who lists five points with which visualization can support the process of understanding and interpreting.40
- It may facilitate the cognition of large amounts of data
- It can promote the perception of unanticipated emergent properties
- It sometimes highlights problems in data quality
- It makes clear the relationship of large- and small-scale features
- It helps us to formulate hypotheses
The integration of all information into a single, neutral and scale-independent 3D visualization thus represents a major advantage over conventional visualization methods when it comes to recognizing and interpreting patterns and correlations, since it fully meets the points that Ware has established. Of course, such an integrated overall model represents an ideal solution that cannot yet be implemented, because the virtual research environment required for this purpose does not exist. The quantity, complexity and semantics of the data set far exceed the capabilities of such software at present. In the future, however, increasingly powerful software environments will make it possible to connect and visualize such complex data sets in a semantically correct way in order to recognize still unknown patterns and relationships. It is therefore crucial, and our duty, that we describe our data carefully and transparently with metadata and store them in sustainable data formats to allow their later use in superordinate virtual systems.
9 Perspectives
As we discussed in the introduction, we see the derivatives from the 3D models only as an intermediate step towards virtual realities that allow direct immersive research in the digital copies of the site. Initial experiments with Unreal Engine 4 show that the underground chambers, with their clear spatial boundaries, are particularly suitable for this purpose. The chambers are exclusively determined by the archaeological context and can be derived directly from the 3D models; adding further elements such as a sky or a sound stage is not necessary here to create an immersive environment. The artificial lights used during the excavation could also be modelled and integrated into the scene very easily. Although this creates a fascinating opportunity to visualize the individual steps of the excavation in an immersive virtual copy of the tombs, we believe it is still too early to integrate the approach into the research process. In our opinion, the effort needed to prepare the individual models for integration into the VR environment is still too great, and the available head-mounted displays are simply too impractical for longer use.41 Moreover, the possibilities to interact with the virtual world are still very limited and only available through hand-held controllers that do not allow an intuitive and natural interaction with features and objects. The potential of the technology is beyond doubt, and the rapid technical development of such technologies will solve the problems addressed, thus allowing researchers to visit and study the site immersively in its various phases of excavation.
We already consider such environments to be ideally suited to making the results of our research tangible and understandable for everyone, while the ever-changing excavation itself is accessible only to a few specialists. VR makes it possible to experience the excavation and to virtually look over the scientists’ shoulders as they work. The artifacts from the tombs can thus be viewed directly in their original context and not as isolated objects in a museum showcase. In particular, the usefulness of such environments in the education of students cannot be overestimated, as they can offer students a direct experience of the site and the artifacts that can be enriched with any amount of additional information.42 Last but not least, it contributes to the democratization of the study of archaeology, as excursions and field schools are only open to a privileged few who can afford the high costs of travelling the world. It is therefore essential that we make our data available under free and open licenses in standardized data formats and not, as is the case with most publications, behind paywalls that exclude non-privileged scholars and students. For this reason, it is equally essential that we as scientists are able to master and apply the necessary technologies ourselves, so that we do not have to rely on the help of a few specialists and commercial companies. We therefore have to integrate the procedures and technologies discussed in this paper into the study of archaeology in order to enable future colleagues to assess and apply them independently.43
Morgan and Wright 2018.
Forte 2014; Siebke et al. 2018.
McCoy 2020, 196; Forte 2014.
Olson and Placchetti 2015.
Bekele et al. 2018; Hageneuer 2020; Kevin Kee 2014.
Holter and Schwesinger 2020; Forte 2014.
Berggren et al. 2015.
Forte 2014.
Lercari et al. 2018.
These are the tombs of Tjaninanihbu, Psamtek, Padinist, Padinit, and Hekaemsaf, see: Barsanti and Maspero 1900b; Barsanti and Maspero 1900a; Bresciani, Giangeri-Silvis, and Pernigotti 1977.
Lang et al. 2020.
Lowe 2018; Factum Arte 2009.
Doneus, M. et al. 2011; Reu et al. 2014; Reu et al. 2013; Galeazzi 2016.
Verhoeven, G. et al. 2013.
Aicardi et al. 2018; Douglass, Lin, and Chodoronek 2015; José Luis et al. 2019.
Historic England 2018.
Kersten, Mechelke, and Maziull 2015.
Reu et al. 2014.
Olson 2016.
Kazhdan, Bolitho, and Hoppe 2006.
Siebke et al. 2018.
Historic England 2017.
Reu et al. 2013; Verhoeven, G. et al. 2013; Galeazzi 2016; Davies; Davis et al. 2017.
Doneus, M. et al. 2011.
Olson and Placchetti 2015.
Galeazzi 2016.
Olson and Placchetti 2015.
Walsh 2015.
Holz et al. 2015; Besl and McKay 1992.
Tavares 2011.
José Luis et al. 2019.
Since the basics of Image Based Modelling have been widely discussed in recent years, we will only refer to further literature at this point: Verhoeven, G. et al. 2013; Reu et al. 2013; Reu et al. 2014; Historic England 2017; Aicardi et al. 2018; Kersten, Mechelke, and Maziull 2015; Howland, Kuester, and Levy 2014; Galeazzi 2016; Zachar, Horňák, and Novaković 2017.
Koller, Frischer, and Humphreys 2009; Richards-Rissetto and Schwerin 2017; Lowe 2018; Niven and Richards 2017.
Archaeology Data Service 2016.
eScience-Center 2018.
Sullivan 2020. For a summary of the problems of reconstructing past archaeological landscapes see in particular Der Manuelian 2013.
See Ch. 1, from the present volume, for “neutrality” in digital representations.
Luhmann et al. 2019.
Polig 2017; Llobera 2011.
Frischer 2009; Ware 2004.
Cassidy et al. 2019; Bekele et al. 2018.
Kevin Kee 2014.
Olson 2016.
References
Aicardi, Irene, Filiberto Chiabrando, Andrea Maria Lingua, and Francesca Noardo. 2018. “Recent Trends in Cultural Heritage 3D Survey: The Photogrammetric Computer Vision Approach.” Journal of Cultural Heritage 32:257–266. https://doi.org/10.1016/j.culher.2017.11.006 (accessed 02-25-2022).
Archaeology Data Service. 2016. “Guides to Good Practice.” Accessed September 19, 2018. http://guides.archaeologydataservice.ac.uk/g2gp/3d_Toc (accessed 02-25-2022).
Barsanti, Alexandre, and Gaston Maspero. 1900a. “Fouilles Autour De La Pyramide D’ounas (1899–1900). II. Les Tombeaux De Psammétique Et De Setariban. Les Inscriptions De La Chambre De Psammétique.” Annales du Service des Antiquités de l’Égypte 1: 161–184
Barsanti, Alexandre, and Gaston Maspero. 1900b. “Fouilles Autour De La Pyramide D’ounas (1899–1900). IV. Le Tombeau De Péténisis.” Annales du Service des Antiquités de l’Égypte 1: 230–259.
Bekele, Mafkereseb Kassahun, Roberto Pierdicca, Emanuele Frontoni, Eva Savina Malinverni, and James Gain. 2018. “A Survey of Augmented, Virtual, and Mixed Reality for Cultural Heritage.” J. Comput. Cult. Herit. 11 (2): 1–36. https://doi.org/10.1145/3145534 (accessed 02-25-2022).
Berggren, Åsa, Nicolo Dell’Unto, Maurizio Forte, Scott Haddow, Ian Hodder, Justine Issavi, Nicola Lercari, Camilla Mazzucato, Allison Mickel, and James S. Taylor. 2015. “Revisiting Reflexive Archaeology at Çatalhöyük: Integrating Digital and 3D Technologies at the Trowel’s Edge.” Antiquity 89 (344): 433–448. https://doi.org/10.15184/aqy.2014.43 (accessed 02-25-2022).
Besl, Paul J., and N.D. McKay. 1992. “A Method for Registration of 3-D Shapes.” IEEE Trans. on Pattern Analysis and Machine Intelligence 14 (2): 239–256. https://doi.org/10.1109/34.121791 (accessed 02-25-2022).
Bresciani, Edda, Maria Paola Giangeri-Silvis, and Sergio Pernigotti. 1977. La Tomba Di Ciennehebu, Capo Della Flotta Del Re. Tombe d’eta saitica a Saqqara 1. Pisa: Giardini.
Cassidy, Brendan, Gavin Sim, David Wayne Robinson, and Devlin Gandy. 2019. “A Virtual Reality Platform for Analyzing Remote Archaeological Sites.” Interacting with Computers 31 (2): 167–176. https://doi.org/10.1093/iwc/iwz011 (accessed 02-25-2022).
Davies, Hugh E.H. “Design and Construction of Roman Roads in Britain.” Ph.D. diss., University of Reading.
Davis, Annabelle, David Belton, Petra Helmholz, Paul Bourke, and Jo McDonald. 2017. “Pilbara Rock Art: Laser Scanning, Photogrammetry and 3D Photographic Reconstruction as Heritage Management Tools.” Herit Sci 5 (1): 48. https://doi.org/10.1186/s40494-017-0140-7 (accessed 02-25-2022).
Der Manuelian, Peter. 2013. “Giza 3D: Digital Archaeology and Scholarly Access to the Giza Pyramids: The Giza Project at Harvard University.” In Proceedings of DigitalHeritage 2013 [Digital Heritage International Congress], Marseilles, France, October 28—November 1, 2013, vol. 2, 727–734. https://dash.harvard.edu/handle/1/12560998 (accessed 02-25-2022).
Doneus, M., G. Verhoeven, M. Fera, Ch. Briese, M. Kucera, and W. Neubauer. 2011. “From Deposit to Point Cloud—a Study of Low-Cost Computer Vision Approaches for the Straightforward Documentation of Archaeological Excavations.” Geoinformatics FCE CTU 6: 81–88. https://doi.org/10.14311/gi.6.11 (accessed 02-25-2022).
Douglass, Matthew, Sam Lin, and Michael Chodoronek. 2015. “The Application of 3D Photogrammetry for in-Field Documentation of Archaeological Features.” Adv. archaeol. pract. 3 (02): 136–152. https://doi.org/10.7183/2326-3768.3.2.136 (accessed 02-25-2022).
eScience-Center. 2018. “Forschungsdatenarchiv (FDAT).” Accessed September 19, 2018. https://fdat.escience.uni-tuebingen.de/portal/#/start (accessed 01-20-2023).
Factum Arte. 2009. “Factum Arte’s Work in the Tombs of Tutankhamun, Nefertari and Seti 1.” Unpublished manuscript, last modified July 07, 2019. http://www.factum-arte.com/resources/files/ff/publications_PDF/Tutankhamun_Report_may2009.pdf (accessed 02-25-2022).
Forte, Maurizio. 2014. “3D Archaeology.” Journal of Eastern Mediterranean Archaeology & Heritage Studies 2 (1): 1. https://doi.org/10.5325/jeasmedarcherstu.2.1.0001 (accessed 02-25-2022).
Frischer, Bernard. 2009. “Introduction.” In Beyond Illustration: 2D and 3D Digital Technologies as Tools for Discovery in Archaeology, edited by Bernard Frischer, v–xxiv. BAR International series 1805. Oxford: Hadrian Books.
Galeazzi, Fabrizio. 2016. “Towards the Definition of Best 3D Practices in Archaeology: Assessing 3D Documentation Techniques for Intra-Site Data Recording.” Journal of Cultural Heritage 17:159–169. https://doi.org/10.1016/j.culher.2015.07.005 (accessed 02-25-2022).
Hageneuer, Sebastian, ed. 2020. Communicating the Past in the Digital Age: Proceedings of the International Conference on Digital Methods in Teaching and Learning in Archaeology (12th–13th October 2018). London: Ubiquity Press.
Historic England. 2017. Photogrammetric Applications for Cultural Heritage: Guidance for Good Practice. Swindon.
Historic England. 2018. 3D Laser Scanning for Heritage: Advice and Guidance on the Use of Laser Scanning in Archaeology and Architecture. Swindon.
Holter, Erika, and Sebastian Schwesinger. 2020. “Modelling and Simulation to Teach (Classical) Archaeology: Integrating New Media into the Curriculum.” In Communicating the Past in the Digital Age: Proceedings of the International Conference on Digital Methods in Teaching and Learning in Archaeology (12th–13th October 2018), edited by Sebastian Hageneuer, 167–177. London: Ubiquity Press.
Holz, Dirk, Alexandru E. Ichim, Federico Tombari, Radu B. Rusu, and Sven Behnke. 2015. “Registration with the Point Cloud Library: A Modular Framework for Aligning in 3-D.” IEEE Robotics Automation Magazine 22 (4): 110–124. https://doi.org/10.1109/MRA.2015.2432331 (accessed 02-25-2022).
Howland, Matthew D., Falko Kuester, and Thomas E. Levy. 2014. “Structure from Motion: Twenty-First Century Field Recording with 3D Technology.” Near Eastern Archaeology 77 (3): 187. https://doi.org/10.5615/neareastarch.77.3.0187 (accessed 02-25-2022).
José Luis, Pérez-García, Mozas-Calvache Antonio Tomás, Barba-Colmenero Vicente, and Jiménez-Serrano Alejandro. 2019. “Photogrammetric Studies of Inaccessible Sites in Archaeology: Case Study of Burial Chambers in Qubbet El-Hawa (Aswan, Egypt).” Journal of Archaeological Science 102:1–10. https://doi.org/10.1016/j.jas.2018.12.008 (accessed 02-25-2022).
Kazhdan, Michael, Matthew Bolitho, and Hugues Hoppe. 2006. “Poisson Surface Reconstruction.” In Proceedings of the Fourth Eurographics Symposium on Geometry Processing: 61–70. SGP ’06. Goslar, DEU: Eurographics Association.
Kersten, Thomas., Klaus. Mechelke, and Lena Maziull. 2015. “3D Model of Al Zubarah Fortress in Qatar—Terrestrial Laser Scanning Vs. Dense Image Matching.” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. XL-5/W4:1–8. https://doi.org/10.5194/isprsarchives-XL-5-W4-1-2015 (accessed 02-25-2022).
Kevin Kee, ed. 2014. Pastplay: Teaching and Learning History with Technology. Ann Arbor: University of Michigan Press.
Koller, David, Bernard Frischer, and Greg Humphreys. 2009. “Research Challenges for Digital Archives of 3D Cultural Heritage Models.” Journal on Computing and Cultural Heritage 2 (3): 1–17. https://doi.org/10.1145/1658346.1658347 (accessed 02-25-2022).
Lang, Matthias, Ramadan Hussein, Benjamin Glissmann, and Philippe Kluge. 2020. “Digital Documentation of the Saite Tombs in Saqqara.” Studies in Digital Heritage, in press.
Lercari, Nicola, Emmanuel Shiferaw, Maurizio Forte, and Regis Kopper. 2018. “Immersive Visualization and Curation of Archaeological Heritage Data: Çatalhöyük and the Dig@IT App.” J Archaeol Method Theory 25 (2): 368–392. https://doi.org/10.1007/s10816-017-9340-4 (accessed 02-25-2022).
Llobera, Marcos. 2011. “Archaeological Visualization: Towards an Archaeological Information Science (AISc).” J Archaeol Method Theory 18 (3): 193–223. https://doi.org/10.1007/s10816-010-9098-4 (accessed 02-25-2022).
Lowe, Adam. 2018. Scanning Seti: The Re-Generation of a Pharaonic Tomb: 200 Years in the Life of a Tomb. Accessed September 19, 2018. http://www.factum-arte.com/resources/files/ff/articles/seti_basel_36.pdf (accessed 02-25-2022).
Luhmann, T., M. Chizhova, D. Gorkovchuk, H. Hastedt, N. Chachava, and N. Lekveishvili. 2019. “Combination of Terrestrial Laserscanning, UAV and Close-Range Photogrammetry For 3D Reconstruction of Complex Churches in Georgia.” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. XLII-2/W11:753–761. https://doi.org/10.5194/isprs-archives-XLII-2-W11-753-2019 (accessed 02-25-2022).
McCoy, Mark D. 2020. Maps for Time Travelers: How Archaeologists Use Technology to Bring Us Closer to the Past. Oakland: University of California Press.
Morgan, Colleen, and Holly Wright. 2018. “Pencils and Pixels: Drawing and Digital Media in Archaeological Field Recording.” Journal of Field Archaeology 43 (2): 136–151. https://doi.org/10.1080/00934690.2018.1428488 (accessed 02-25-2022).
Niven, Kieron, and Julian D. Richards. 2017. “The Storage and Long-Term Preservation of 3D Data.” In Human Remains: Another Dimension: The Application of Imaging to the Study of Human Remains, edited by Tim Thompson and David Errickson, 175–184. Elsevier.
Olson, Brandon Richard. 2016. “The Things We Can Do with Pictures: Image-Based Modeling and Archaeology.” In Mobilizing the Past for a Digital Future: The Potential of Digital Archaeology, edited by Erin W. Averett, Jody M. Gordon, and Derek B. Counts. Version 1.1 (updated November 5, 2016): 237–250.
Olson, Brandon Richard, and Ryan A. Placchetti. 2015. “A Discussion of the Analytical Benefits of Image Based Modeling in Archaeology.” In Visions of Substance: 3D Imaging in Mediterranean Archaeology, edited by Brandon R. Olson, William R. Caraher, and Sebastian Heath: 17–26. Grand Forks: The Digital Press at The University of North Dakota.
Polig, Martina. 2017. “3D GIS for Building Archeology—Combining Old and New Data in a Three-Dimensional Information System in the Case Study of Lund Cathedral.” Studies in Digital Heritage 1 (2): 225–238. https://doi.org/10.14434/sdh.v1i2.23253 (accessed 02-25-2022).
Reu, Jeroen de, Gertjan Plets, Geert Verhoeven, Philippe de Smedt, Machteld Bats, Bart Cherretté, Wouter de Maeyer et al. 2013. “Towards a Three-Dimensional Cost-Effective Registration of the Archaeological Heritage.” Journal of Archaeological Science 40 (2): 1108–1121. https://doi.org/10.1016/j.jas.2012.08.040 (accessed 02-25-2022).
Reu, Jeroen de, Philippe de Smedt, Davy Herremans, Marc van Meirvenne, Pieter Laloo, and Wim de Clercq. 2014. “On Introducing an Image-Based 3D Reconstruction Method in Archaeological Excavation Practice.” Journal of Archaeological Science 41:251–262. https://doi.org/10.1016/j.jas.2013.08.020 (accessed 02-25-2022).
Richards-Rissetto, Heather, and Jennifer von Schwerin. 2017. “A Catch 22 of 3D Data Sustainability: Lessons in 3D Archaeological Data Management & Accessibility.” Digital Applications in Archaeology and Cultural Heritage 6:38–48. https://doi.org/10.1016/j.daach.2017.04.005 (accessed 02-25-2022).
Siebke, Inga, Lorenzo Campana, Marianne Ramstein, Anja Furtwängler, Albert Hafner, and Sandra Lösch. 2018. “The Application of Different 3D-Scan-Systems and Photogrammetry at an Excavation—A Neolithic Dolmen from Switzerland.” Digital Applications in Archaeology and Cultural Heritage 10:e00078. https://doi.org/10.1016/j.daach.2018.e00078 (accessed 02-25-2022).
Sullivan, Elaine A. 2020. Constructing the Sacred: Visibility and Ritual Landscape at the Egyptian Necropolis of Saqqara. Stanford: Stanford University Press.
Tavares, Ana. 2011. “Coordinate Systems and Archaeological Grids Used at Giza.” In Giza Plateau Mapping Project: Season … Preliminary Report, edited by Mark Lehner: 203–216. Giza occasional papers 5. Boston: Ancient Egypt Research Assoc.
Verhoeven, Geert, Christopher Sevara, Wilfried Karel, Camillo Ressl, Michael Doneus, and Christian Briese. 2013. “Undistorting the Past: New Techniques for Orthorectification of Archaeological Aerial Frame Imagery.” In Good Practice in Archaeological Diagnostics: Non-Invasive Survey of Complex Archaeological Sites, edited by Cristina Corsi:31–67. Natural Science in Archaeology. Cham, Heidelberg: Springer.
Walsh, Gregory. 2015. Leica Scanstation: White Paper. Heerbrugg.
Ware, Colin. 2004. Information Visualization: Perception for Design. Amsterdam: Elsevier.
Zachar, Jan, Milan Horňák, and Predrag Novaković, eds. 2017. 3D Digital Recording of Archaeological, Architectural and Artistic Heritage. CONPRA series vol. 1. Ljubljana.