This article is available at the URI http://dlib.nyu.edu/awdl/isaw/isaw-papers/21/ as part of the NYU Library's Ancient World Digital Library in partnership with the Institute for the Study of the Ancient World (ISAW). More information about ISAW Papers is available on the ISAW website.

©2021 Emily Frank, Sebastian Heath, Chantal Stein. Text and images distributed under the terms of the Creative Commons Attribution 4.0 International (CC-BY) license.


ISAW Papers 21 (2021)

Integration of Photogrammetry, Reflectance Transformation Imaging (RTI), and Multiband Imaging (MBI) for Visualization, Documentation, and Analysis of Archaeological and Related Materials

Emily Frank, Sebastian Heath, and Chantal Stein

URI: http://hdl.handle.net/2333.1/0cfxq0c3

Abstract: This paper describes a practical workflow that enables the integration of Photogrammetry-based 3D modeling, Reflectance Transformation Imaging (RTI), and Multiband Imaging (MBI) into a single representation that can, in turn, be rendered visually using existing open-source software. To illustrate the workflow, we apply it to a fragment of an Egyptian painted wood sarcophagus now in the Institute of Fine Arts (NYU) Study Collection and then show how the results can contribute to the visualization, documentation, and analysis of archaeological and related materials. One product of this work is an animation rendered using the open-source software Blender. The animation emphasizes aspects of surface variation and reveals the craftwork involved in producing the sarcophagus fragment. In doing so, it highlights that the workflow we describe can serve many purposes and contribute to a wide variety of research agendas.

Library of Congress Subjects: Imaging systems in archaeology; Multispectral imaging; Three-dimensional imaging.

Introduction

This paper describes a practical workflow that enables the integration of photogrammetry-based 3D modeling, Reflectance Transformation Imaging (RTI), and Multiband Imaging (MBI) into a single representation that can, in turn, be rendered visually using open-source software. To illustrate the workflow, we apply it to a fragment of an Egyptian painted wood sarcophagus depicting the winged goddess Isis that is now in the Institute of Fine Arts (New York University) Study Collection (Figure 1A).1 Our intent is to show that this workflow can contribute to the visualization, documentation, and analysis of a broad range of visual and material culture, though our focus in developing the workflow has been art historical and archaeological practice and research. The object used as an example here is from a museum collection, though its state of preservation, with broken edges and only partially extant original surfaces, is analogous to that of objects found during archaeological fieldwork. The flexibility of the process we present is most readily seen in an animation that is embedded in the body of this article (Figure 1B and Figure 2) and that highlights particular aspects of the wooden sarcophagus fragment. Watching that animation will give readers a sense of what the workflow can accomplish. However, we believe that the steps we describe are suitable for use with a wide variety of material culture and artworks and that they are adaptable to many research agendas. One indication of the potential for application in varied contexts is that other projects are working to overlay various types of complementary imagery on photogrammetric models.2 It is likely, therefore, that this is an area of practice that will develop quickly. The workflow described here contributes to this collective progress by integrating multiple forms of imagery on a single 3D model and by using Blender as the tool that brings all the imagery into a single environment for both detailed rendering as single images and animated display.

Figure 1A. Fragment of an Egyptian painted wood sarcophagus (Conservation Center, Institute of Fine Arts, New York University; CCW01.11).
Figure 1B. Perspective view of a 3D model of the same object illuminated using the 3D animation software Blender. This image shows the successful integration of a "Normals Visualization" rendering exported from RTIViewer with a visible-induced infrared luminescence (VIL) image emphasizing the presence of "Egyptian Blue" as white patches. This image is the last frame of the animation in Figure 2.

In the discussion that follows, attention is given to the specific software used in each step. This includes the commercial package Agisoft Metashape for photogrammetry processing; the freely available software packages RTIBuilder and RTIViewer for RTI processing; the freely available CHARISMA3 software tools developed as add-ins for nip2 (the open-source graphical interface of VIPS) for MBI processing; and the open-source animation suite Blender, which is used for final rendering and animation.4 Despite this listing of software packages, what follows is not intended as a detailed tutorial but rather a presentation of the major steps that allow the integration we achieve.5 A strength of this workflow is that it allows the rendering of the particular and idiosyncratic aspects of any given object. This means that the exact application of the process described will depend on the specific goals of the researchers undertaking the effort. The Egyptian painted wood sarcophagus fragment used as our test case here has a very varied surface. The chisel marks that show the preparation of the wood surface, a partially extant plaster layer intended to receive paint directly, and evidence of different application techniques, along with varied pigments, are all aspects of this object that are present in its current state of preservation. Many of these surface details are readily revealed by RTI and MBI, which is why we chose to combine the output of those particular techniques with a photogrammetric 3D model. As seen in the animation and figures, the combination of all three techniques supports very close consideration of this individual piece. We stress, however, that the workflow is flexible and likely to have application in many other circumstances because it can potentially integrate a photogrammetric model with all types of imagery that can be captured by methods analogous to those described below.

Figure 2. Animation showing the integration of the imaging techniques discussed here. The major stages of the animation are: A) Opening frame using normal light as texture on 3D model; B) (at 0:18 [minutes:seconds]): RTI normals rendered on 3D model and artificial shadows to bring out surface variation; C) (0:30): Tilted detail with same rendering as "B)"; D) (0:55): VIL as texture with shadow showing surface variation still visible; E) (1:17): Tilted rendering mixing VIL, RTI normals, natural color, and artificial shadows all on 3D model. The display has been configured so that readers can slide the control back and forth as a form of interaction with the model.

Materials and Methods

Photogrammetry, RTI, and MBI are well-established imaging techniques widely used by cultural heritage professionals.6 Previous work has recognized the desirability of integrating the results of photogrammetry and of RTI more closely;7 and there have been efforts to recover 3D models directly from RTI data, which is a different process from what we pursue here.8 While no photographic technique produces a perfect or undistorted image, which means that absolutely perfect alignment between any set of independently produced images is likely impossible, we believe the techniques used here complement each other and produce a useful representation of the object under investigation. Specifically, the combination we have achieved results in visually rich and informative, high-resolution renderings that emphasize physical shape, surface variability, and spectral properties. The combination of photogrammetry, RTI, and MBI facilitates detailed study and visualization of an artifact’s surface that highlights otherwise difficult-to-perceive features. It is also a feature of our work that the digital representation it produces allows these aspects of the rendered object to be communicated and investigated without requiring direct physical or potentially destructive examination. Accordingly, this workflow shows promise for applications where prolonged access to and handling of material is ill-advised, restricted, or impractical.

It is fundamental to this project that we are adapting existing software and techniques. The following list provides brief definitions of the three techniques to be combined, along with the key points that allow for the integration of each one with the others.

Photogrammetry, as used here, refers to the construction of a 3D model, consisting of a geometric mesh and associated UV textures, from a set of overlapping photographs taken from many camera positions.9 The key point for integration is that the photographs used to build the model are themselves aligned to the resulting mesh.

Reflectance Transformation Imaging (RTI) is a computational photography technique in which the camera remains fixed while a series of images is captured under changing light positions.10 Processing these images produces an interactive file from which renderings such as the "Default" and "Normals Visualization" modes can be exported.11 The key point for integration is that all RTI source images and exports share a single camera position.

Multiband Imaging (MBI) captures a registered set of images of the same object in different regions of the electromagnetic spectrum, including ultraviolet, visible, and infrared, in order to document spectral properties such as pigment luminescence.12 The key point for integration is again that the camera position is held constant across the set.

Additionally, in this article “UV” is used in the context of “UV texture,” a term meaning a 2D image that is mapped onto a 3D mesh. Here, “U” and “V” refer to the axes of the plane. We distinguish this usage of UV from the same abbreviation meaning “ultraviolet.”

We also note that the development of the workflow described here has been collaborative. Two of the authors are conservators who have worked on archaeological and museum materials and share an interest in the application of digital technologies to the field of objects conservation.13 The other is a Roman archaeologist who has worked to integrate 3D modeling into his fieldwork and into the presentation of Greco-Roman material culture.14 The dialog between disciplines has led us to adopt the phrase “visualization, documentation, and analysis” to indicate our intent that the method we describe be relevant to many forms of work. The animation that we present as one potential output illustrates the utility of our workflow for enriched communication of multiple aspects of an object. That animation is inherently a visualization that can serve many roles, among which is the detailed documentation of the manufacture of the painted Egyptian panel. This in turn can support discussion of ancient Egyptian manufacturing techniques and craftsmanship. Such discussion is a form of analysis and interpretation. To the extent that these terms represent stages, they overlap, so we believe that digital tools can usefully elide the distinction between them. Communication of results, which is very much our goal, underlies all three terms.

Method of Integration

To introduce the workflow for integrating photogrammetry, RTI, and MBI, we start with an analogy: the photogrammetric 3D mesh serves as virtual scaffolding for the display of other types of imaging data, which we term “auxiliary imagery.” In slightly more technical language, the workflow can be succinctly described by emphasizing that individual source images for the 3D model’s photorealistic UV texture can be replaced during the texture build in Agisoft Metashape. Replacing source images with auxiliary imagery allows for the creation of a new UV texture that achieves repeatable visual alignment between that auxiliary imagery and the 3D mesh. In this instance, the auxiliary imagery that we employ originates from RTI and MBI, thereby allowing the output of those techniques to appear on the 3D model in the correct alignment.

This section gives a brief illustrated overview of the main steps that result in a robust alignment between the 3D models and the auxiliary imagery so that these other images can be displayed on the model. Our alignment of RTI and MBI with 3D models relies on establishing common camera positions between photogrammetry and the auxiliary imagery. The description assumes that the following stages of work have been completed: (1) source images of the object from many overlapping camera positions have been captured and processed to build a 3D model; and (2) at least one of the camera positions used in building that model has also been used for collection of auxiliary imagery (here RTI and MBI).
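For readers who prefer to script stages (1) and (2), the following minimal sketch shows how the alignment and model build could be carried out with the Agisoft Metashape Python API rather than the graphical interface we used. Method names follow the 1.x API and their defaults vary between releases; the file paths are placeholders.

# Sketch: align a photo set (including the evenly lit connection shots)
# and build the 3D model in Agisoft Metashape via its Python API.
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()

# The photo set includes the connection shots that will later be swapped
# for RTI and MBI exports.
chunk.addPhotos(["photos/IMG_0001.jpg", "photos/IMG_0002.jpg"])  # etc.

chunk.matchPhotos()       # detect and match feature points
chunk.alignCameras()      # estimate camera positions, including the connection shots
chunk.buildDepthMaps()
chunk.buildModel()        # the geometric mesh that serves as "virtual scaffolding"

doc.save("sarcophagus_fragment.psx")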

Figure 3. A) Above: Screenshot of Agisoft Metashape with RTI connection shot highlighted in pink. B) Below: Screenshot of Agisoft Metashape with MBI connection shot highlighted in pink.

Figure 3 shows two screenshots of a 3D model of the test object as it appears in Metashape. In both Figure 3A and 3B, most camera positions are shown in blue. Figure 3A highlights in pink the position of the camera that was subsequently used to capture RTI source images. Figure 3B highlights the position of the camera that was then used to capture MBI images. Since these highlighted camera positions are the aspects of the workflow that enable integration, we call them “connection shots.”15 Successful connection shots should have the same camera position as the auxiliary imagery and the same even illumination as the photogrammetry set. When processing the photo set in software such as Metashape, it is fundamental for future integration that connection shots, along with all the other photos from which the 3D model is built, are aligned with the resulting 3D model. This is an intentional side effect of the photogrammetric process: it results in a set of photographs aligned with a model. Once a set of photographs is so aligned, any other images captured by the same camera in any of the same positions are then also automatically aligned with the model, by which we mean that the color information from any image can be projected onto the 3D model in a predictable and meaningful manner.

Figure 4. A) RTI connection shot with even illumination, included in the photogrammetric image set so that its position is aligned to all other camera positions. B) RTI source image with raking light. Note the distinct highlight on both spheres. C) "Default" output of source image set as exported from the RTIViewer. D) "Normals Visualization" of source image set as exported from the RTIViewer. All these images show the object from the same camera position, and because that camera position is aligned to all the other positions used for photogrammetry, any of these images can be used to texture the resulting 3D model.

After creation of a 3D model whose source imagery includes an aligned connection shot for each type of auxiliary imagery, the actual integration of that imagery takes place in Metashape. This next stage will be discussed using RTI data as the practical example. Figure 4A shows the RTI connection shot used in building the 3D model; this is the actual image associated with the highlighted camera position in Figure 3A. It has the same camera position as the RTI shots, though it keeps the even illumination required for photogrammetry. Figure 4B shows a source image from the RTI source image set; again, it is in the same position. Figures 4C and 4D show two exports from the RTIViewer after processing the RTI source image set into an interactive RTI file. Figure 4C is an export from the "Default" rendering mode of the RTIViewer. Figure 4D is an export from the RTIViewer's "Normals Visualization" rendering mode. The camera position does not change between the connection shot, the RTI source images, and the RTI exports. This means that all RTI source images and exports from the processed RTI file are also aligned with the 3D model. This is the essential relationship leveraged by the next step.

Figure 5. A) Screenshot from Metashape showing figure 4A above as a UV texture on the model. B) Screenshot from Metashape showing figure 4C as a UV texture on the model. C) Screenshot from Metashape showing figure 4D as a UV texture on the model.

In this workflow, the connection shot can be replaced by any image from the auxiliary imagery during creation of a UV texture in Metashape. The end result of this replacement is illustrated in Figures 5A, 5B, and 5C. Figures 5B and 5C show auxiliary imagery aligned to the 3D model as a result of having been taken from the same position as the RTI connection shot (see Fig. 3A). For best results, the UV texture-building software must allow for a single image to be identified as the UV texture source. Metashape permits such selection by way of enabling only the relevant connection shot camera position to be used in building a UV texture; all other images in the photogrammetric set are then ignored when building that UV texture.
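The selection described above can also be scripted. The sketch below, again assuming the Metashape Python API and a hypothetical camera label for the RTI connection shot, enables only that camera before the UV texture is built; it is offered as an illustration rather than a record of the exact commands we used.

import Metashape

doc = Metashape.Document()
doc.open("sarcophagus_fragment.psx")
chunk = doc.chunk

CONNECTION_LABEL = "rti_connection_shot"   # hypothetical label of the camera in Fig. 3A

# Enable only the connection-shot camera so that the UV texture is projected
# from that single (replaceable) image; all other photos are ignored.
for camera in chunk.cameras:
    camera.enabled = (camera.label == CONNECTION_LABEL)

chunk.buildUV()         # parameterize the mesh
chunk.buildTexture()    # project the enabled image onto the UV layout
doc.save()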

Figure 6. A) Photorealistic color UV texture (see fig. 4A) as exported from Metashape. B) RTIViewer Default visualization UV texture (see fig. 4C) as exported from Metashape. C) RTIViewer "Normals Visualization" UV texture (see fig. 4D) as exported from Metashape.

In essence then, the practitioner (1) overwrites the connection shot image file that was used to build the model with an alternate bitmap from the auxiliary imagery set, (2) initiates building of a UV texture map using only that new bitmap, and (3) exports the resulting UV texture for rendering in Blender or other software. Figure 6 shows three UV texture maps exported from Metashape. The same features are visible in the same locations in each UV texture map, indicating their alignment to each other. It is important to manage files to keep copies of any images that are overwritten. This management could be automated; however, the specifics of how files are managed within any one operating system or scripting environment lie beyond the scope of this article.
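As an indication of how that bookkeeping might look, the short sketch below backs up the original connection shot and overwrites it with an auxiliary image using only the Python standard library; the file names are placeholders, and the auxiliary image is assumed to have the same pixel dimensions as the original.

import shutil
from pathlib import Path

connection_shot = Path("photos/rti_connection_shot.jpg")
auxiliary_image = Path("exports/rti_normals_visualization.jpg")
backup_dir = Path("backups")
backup_dir.mkdir(exist_ok=True)

# (1a) keep a copy of the original photogrammetry image before overwriting it
shutil.copy2(connection_shot, backup_dir / connection_shot.name)

# (1b) replace the connection shot with the aligned auxiliary image
shutil.copy2(auxiliary_image, connection_shot)

# Steps (2) and (3), building and exporting the new UV texture, then proceed in
# Metashape as in the previous sketch; the backup can be restored afterwards.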

Figure 7. A) MBI VIL image aligned to the connection shot highlighted in Figure 3B. B) Screenshot from Metashape showing the VIL image as a UV texture on the model.

MBI is integrated by the same method described above, using the connection shot indicated by the pink rectangle in Figure 3B. Figure 7A shows the visible-induced infrared luminescence (VIL) image, and Figure 7B shows the VIL bitmap as a UV texture on the object in Metashape. In Figure 7B, which is equivalent to the stage represented by the images in Figure 5, the white patches indicate the presence of the calcium copper silicate (cuprorivaite) pigment known as “Egyptian Blue.” This pigment is common on many objects and is known to emit infrared radiation when irradiated with visible light, a phenomenon that becomes clearly discernible through VIL imaging.16

After the 3D model is built, the underlying geometric mesh and its associated photorealistic UV texture can be exported from Metashape using such standard file formats as COLLAborative Design Activity (COLLADA) or Wavefront (more commonly known as OBJ). These formats have the advantage of maintaining the relationship between the model and associated UV textures. Any UV textures that are made using alternate files, as described above, can also be exported. This export is easily accomplished in Metashape. As a result of the workflow described above, it is straightforward to assemble a set of UV textures that represent the output of varied auxiliary imaging techniques and that are also accurately mapped to a particular 3D model. This set of digital resources can then be imported into 3D-capable software for further rendering. The next section describes the use of the open-source software Blender to achieve this goal.
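A scripted equivalent of this export step might look like the following, again assuming the Metashape Python API; in many versions the output format is inferred from the file extension, and parameter names differ between releases.

import Metashape

doc = Metashape.Document()
doc.open("sarcophagus_fragment.psx")

# Export the mesh together with the currently built UV texture; use a .dae
# extension for COLLADA or .obj for Wavefront OBJ.
doc.chunk.exportModel(path="export/fragment_rti_normals.obj", save_texture=True)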

Rendering in Blender

Blender is a powerful tool for creating and viewing 3D models, one that has the distinct advantage of being freely available and supported by an active user community. In scientific applications, Blender can serve as a flexible virtual environment for aggregating data and then rendering visualizations that support documentation and analysis.17 The visualizations Blender creates can incorporate simple blending of UV textures collected according to the workflow we have described. These UV textures can be combined with non-photorealistic rendering methods to highlight particular aspects of an object. All such visualizations can be incorporated into an animated presentation (Fig. 2), which can facilitate effective communication. Creating animations of this sort is a particular strength of Blender.

Figure 8. 3D model rendered with three blended UV textures: RTI "Normals Visualization", MBI, and photorealistic. Artificial lighting is cast over the 3D model to emphasize exposed chisel marks and the partially extant plaster surfaces.

Figure 8 shows one possible output from the integration of multiple sources of imagery with a 3D model that can be achieved in Blender. The shadow that highlights surface variation is the result of casting artificial light over the model in Blender. The overall purple hue seen here comes from mixing in the “Normals Visualization” exported from the RTIViewer, as seen previously in Figures 4D, 5C, and 6C. The prominent white patches show the presence of Egyptian Blue pigment; this is the same MBI spectral data shown in a different form in Figure 7. The photorealistic colors of the surface are lightly mixed in and are most clearly seen in Figure 8 in the area of the shoulder, the upper edge of the extended wing, and the profile eye. This is essentially the same color data seen in Figure 4A. Figure 8 is therefore an artificially rendered image that takes advantage of all aspects of the integrated digital output of the current workflow: the casting of virtual shadow on a 3D model and the integration of three sources of color data via aligned UV textures.
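The artificial raking light used in Figure 8 (and again in Figure 9) can be reproduced with a few lines of Blender's Python API. The sketch below, written against Blender 2.93, adds a low-angle sun lamp and renders a still; the angle and strength values are illustrative only.

import bpy
from math import radians

# Add a sun lamp at a low, oblique angle to cast raking light across the model.
light_data = bpy.data.lights.new(name="raking_sun", type='SUN')
light_data.energy = 3.0
light_obj = bpy.data.objects.new(name="raking_sun", object_data=light_data)
bpy.context.collection.objects.link(light_obj)
light_obj.rotation_euler = (radians(75), 0.0, radians(30))

# Render a single still with Cycles.
scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.render.filepath = "//render/fragment_raking.png"
bpy.ops.render.render(write_still=True)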

Figure 9. Detail of 3D model rendered with two blended UV textures: RTI "Normals Visualization" and photorealistic colors. Artificial lighting is cast over the tilted 3D model.

It is important to note again that Figure 8 is only one possible rendering of the integration of photogrammetry, RTI, and MBI. Figure 9 is a detail towards the left edge of the object where Isis' body is best preserved. It drops out the VIL UV texture and combines photorealistic colors, the RTIViewer “Normals Visualization,” and artificial lighting in Blender so as to create virtual shadows on the artificially colored model in a tilted perspective. This highlights the three-dimensional aspect of the painted dots that appear white in Figure 9. The greater amount of shadow that they produce indicates their higher relief in comparison to the lines that delineate the body and decorative features of the depicted clothing.

Figure 10. Screenshot of Blender’s interface showing three different renderings of the same object. Top: 3D model using Blender’s default MATCAP rendering. Bottom left: RTIViewer “Normals Visualization” UV texture used as color data. Bottom right: rendered photorealistic UV texture with artificial raking light.

Figures 8 and 9 are the result of manipulation of digital resources in Blender. Some of the stages of using that software are now illustrated so that they can be adopted by anyone familiar with it or with other 3D-capable software suites. Figure 10 shows three renderings of the object as seen in a screenshot from Blender. The "Normals Visualization" exported from the RTIViewer is used as a UV texture on the bottom left. The photorealistic UV texture is used for rendering on the bottom right. At the top is the 3D model tilted and rendered using Blender’s default MATCAP mode to indicate surface variation. Within Blender, multiple UV textures and multiple views of an object can be seen at once and manipulated independently.

Figure 11. Screenshot of Blender’s interface, with the top pane showing the arrangement of Blender nodes that produced the rendering in Figure 8.

Figure 11 displays the arrangement of Blender material nodes, as used by Blender's Cycles rendering engine, that produced the rendering in Figure 8. The four vertically aligned nodes at the left show the “Image Texture” nodes that bring in the individual UV textures generated by Metashape. Adjusting the degree to which each of these contributes to the final render is the mechanism by which both visible and non-visible features of the object are explored. There are many approaches to achieving this goal, so Figures 10 and 11 are only examples of using those aspects of the Blender interface that especially allowed creative and communicative rendering.
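As a concrete, if simplified, counterpart to Figure 11, the sketch below (Blender 2.93, Cycles) builds a material that mixes two of the exported UV textures, the photorealistic color and the RTI "Normals Visualization," and assigns it to the imported mesh. File paths, the mix factor, and the node name "texture_mix" are placeholders introduced for this example.

import bpy

bpy.context.scene.render.engine = 'CYCLES'
obj = bpy.context.active_object              # the imported sarcophagus-fragment mesh

mat = bpy.data.materials.new("integrated_imagery")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links
principled = nodes["Principled BSDF"]        # created automatically with use_nodes

# Two "Image Texture" nodes bring in UV textures exported from Metashape.
color_tex = nodes.new("ShaderNodeTexImage")
color_tex.image = bpy.data.images.load("export/fragment_color.png")
normals_tex = nodes.new("ShaderNodeTexImage")
normals_tex.image = bpy.data.images.load("export/fragment_rti_normals.png")

# A MixRGB node controls how much each texture contributes to the final render.
mix = nodes.new("ShaderNodeMixRGB")
mix.name = "texture_mix"
mix.inputs["Fac"].default_value = 0.5

links.new(color_tex.outputs["Color"], mix.inputs["Color1"])
links.new(normals_tex.outputs["Color"], mix.inputs["Color2"])
links.new(mix.outputs["Color"], principled.inputs["Base Color"])

obj.data.materials.append(mat)

Additional Image Texture nodes, for the VIL UV texture for example, can be chained through further MixRGB nodes in the same way.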

Blender has the potential for an essentially infinite variety of node structures and other techniques that can be used to visualize, combine, and enhance the data obtained from each imaging technique. When considering the uses of the workflow described here, it has been useful that rendering 3D models in Blender is interactive and that the rendered model can respond to many user-initiated changes to the output in near real time, even with relatively inexpensive hardware. The display of individual UV textures on a 3D model from different sources of imagery is essentially instant. Alternatively, UV textures can be mixed together, and this also happens quickly. Mixing of textures is particularly useful when combining color and non-color data. For example, we have successfully experimented with using overall color data from the integrated imagery sources while configuring Blender to integrate the RTI normal map into the rendering of shadows. This approach is a standard technique in computer animation. All UV textures can also be combined with non-color data derived from the 3D model itself. This technique can usefully involve using nodes such as Blender's implementation of Fresnel or "Pointiness" — the latter a colloquial term for degree of convexity. Again, these are well-known techniques in computer animation that can be adapted to scientific analysis and visualization. In many cases the processing is speedy. All the individual images shown in this article can be rendered by Blender to high quality (by which we mean that the results effectively communicate aspects of surface variation) in just a few seconds each on a 2015 Macintosh iMac with a 4 GHz Intel Core i7 processor and 32 GB of RAM. This is not an exceptionally powerful computer by current standards, which we believe is a factor that again makes this workflow accessible and usable in many circumstances. Rendering times are of course variable, depending on software settings and hardware capability, so that individual experience will vary.
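Two of the techniques just mentioned can be sketched in the same way, extending the hypothetical "integrated_imagery" material from the previous example; whether an RTI normals export is suitable as a tangent-space normal map depends on how it was produced, so this is illustrative rather than prescriptive.

import bpy

mat = bpy.data.materials["integrated_imagery"]
nodes = mat.node_tree.nodes
links = mat.node_tree.links
principled = nodes["Principled BSDF"]

# (a) Route the RTI normals image through a Normal Map node into the shader's
# Normal input so that it contributes to artificial shading and shadows.
rti_normals = nodes.new("ShaderNodeTexImage")
rti_normals.image = bpy.data.images.load("export/fragment_rti_normals.png")
normal_map = nodes.new("ShaderNodeNormalMap")
links.new(rti_normals.outputs["Color"], normal_map.inputs["Color"])
links.new(normal_map.outputs["Normal"], principled.inputs["Normal"])

# (b) Expose the mesh's "Pointiness" (degree of convexity, Cycles only) through
# a color ramp; its output can then drive a mix factor to darken concavities
# such as tool marks or to accentuate raised paint dots.
geometry = nodes.new("ShaderNodeNewGeometry")
ramp = nodes.new("ShaderNodeValToRGB")
links.new(geometry.outputs["Pointiness"], ramp.inputs["Fac"])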

Figure 12. Screen captures from animation. A) Opening frame of animation using normal light as texture on 3D model; B) (at 0:18 [minutes:seconds]): RTI normals rendered on 3D model and artificial shadows to bring out surface variation; C) (0:30): Tilted detail with same rendering as "B)"; D) (0:55): VIL as texture with shadow showing surface variation still visible; E) (1:17): Tilted rendering mixing VIL, RTI normals, natural color, and artificial shadows on 3D model, with animation controls showing in order to represent the possibility of interaction.

Because this article cannot directly incorporate either open-ended, real-time interactivity or all the permutations of possible renderings, we instead embed an animation of the model that explores these approaches (see Fig. 2 above). Within the field of cultural heritage, rendering of 3D models and computer animation have been recognized as an effective means of communication.18 Figure 12 highlights individual frames with brief explanations of the imagery that is being shown; we believe they do constitute an effective representation of the object, though readers are encouraged to use the animation's own on-screen controls to interact with it themselves. There is also a higher-resolution (and therefore much larger) file archived in New York University's institutional repository.19 Presentation in animated form merges the concepts of visualization, documentation, and analysis. It highlights discrete aspects of the object that allow for the further exploration of the individual steps that went into its creation. The broad tool marks that result from the preparation of the wood surface stand out and offer an avenue for placing the production of this piece in a broader technological and artistic context, one that accounts for much more than its final surface appearance.20 The thickness of the extant plaster layer to which paint was applied is readily seen. Furthermore, and as discussed above, the shape and profiles of the painted lines and dots that form the image of Isis’ body, clothes, and wings emerge through specific renderings. The ending composition of the animation uses a virtual plane to emphasize the variability of the surface. In doing so, it shows that renderings that have little to do with any real-world interaction with an object do, nonetheless, encourage close consideration of its materiality. Overall, the animation reveals techniques of manufacture and communicates the skill of the craftspeople who actually applied those techniques to a degree otherwise difficult to illustrate. Accordingly, it suggests analytical approaches that can be applied in many other contexts. This flexibility has been highlighted by other researchers as a goal of three-dimensional models and renderings of archaeological materials, and we hope that our workflow can contribute to such discussions.21
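The transitions in such an animation can be produced by keyframing the factors of the mixing nodes over time. A minimal sketch, assuming the "texture_mix" node from the material example above, is given here; frame numbers, output path, and encoding settings are illustrative.

import bpy

mat = bpy.data.materials["integrated_imagery"]
fac = mat.node_tree.nodes["texture_mix"].inputs["Fac"]

# Cross-fade from the photorealistic texture to the RTI normals texture
# over the first 120 frames.
fac.default_value = 0.0
fac.keyframe_insert(data_path="default_value", frame=1)
fac.default_value = 1.0
fac.keyframe_insert(data_path="default_value", frame=120)

scene = bpy.context.scene
scene.frame_start, scene.frame_end = 1, 120
scene.render.image_settings.file_format = 'FFMPEG'
scene.render.filepath = "//render/fragment_animation.mp4"
bpy.ops.render.render(animation=True)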

Conclusion

Beyond introducing the workflow described above, we stress in conclusion that it is adaptable to many purposes and circumstances, ranging from conservation treatment to materials science to art history and archaeology, and beyond. The work is very extensible in that multiple sets of auxiliary imagery can be integrated, provided that camera positions are maintained. Blender supports many rendering and animation techniques, which allows for a very flexible approach to investigating an object and communicating results. It is also likely that other software packages could be substituted at any stage for the ones we chose. Put most simply, the workflow described here relies on substituting individual image files within an aligned set of photographs. That is a generic operation that can be put to many uses.22

Acknowledgements

Special thanks to Margaret Holben Ellis, Hannelore Roemich, Stacey Mandelbaum, Marica and Jan Vilcek, the National Endowment for the Humanities (NEH), the Andrew W. Mellon Foundation, the Selz Foundation, the Hagop Kevorkian Foundation, the Conservation Center of the Institute of Fine Arts at NYU, and the Institute for the Study of the Ancient World at NYU for support throughout this project.

Notes

1 The object, with unknown provenance, has been in the study collection of the Conservation Center, Institute of Fine Arts, New York University, since the early 1970s. It is not precisely dated, though it is clearly ancient. It was selected because it was available for study and because its high degree of surface variation, with partial preservation of the varied stages of production, made it a suitable test case for the development of the workflow described here. The workflow discussed here grew out of a collaboration between the authors that began in 2016. Prior public presentation of the work has been in the form of co-authored talks, including “Blending Computational Imaging Techniques: Experiments in Combining Reflectance Transformation Imaging (RTI) and Photogrammetry in Blender” at Illumination of Material Culture: A Symposium on Computational Photography and Reflectance Transformation Imaging (RTI), The Metropolitan Museum of Art, New York, 8 March 2017; at the Archaeological Institute of America (AIA) and Society for Classical Studies (SCS) Joint Annual Meeting, Boston, 5 January 2018 (Stein et al. 2018); at the American Institute for Conservation (AIC)’s 46th Annual Meeting, Houston, 1 June 2018; and “Integrating Multispectral Imaging, Reflectance Transformation Imaging (RTI), and Photogrammetry for Archaeological Objects: An Update” at Illumination of Material Culture Symposium (II), Digital Heritage 2018, 26 October 2018 (video available for download on Zenodo.org via <https://doi.org/10.5281/zenodo.1729569>). These talks have included discussion of a worked-stone table support excavated at Sardis in Turkey for which similar imaging was undertaken, and the authors are grateful to Nicholas Cahill, Director of the Harvard-Cornell Sardis Excavations, for permission to work with that object.

2 Rivero et al. 2019; Solem and Nau 2020.

3 Cultural Heritage Advanced Research Infrastructures: Synergy for a Multidisciplinary Approach to Conservation/Restoration. At the time of publication the former website for this project was inactive.

4 The latest version of the CHARISMA software can be downloaded at https://github.com/jcupitt/bm-workspaces. We have confirmed that the process we describe works when using Blender 2.93 LTS, the most recent version available at the time of publication. The latest version of Blender can be downloaded from <https://blender.org>.

5 The video available at <https://doi.org/10.5281/zenodo.1729569> (see note 1 above), while also not a full tutorial, may be useful for anyone pursuing their own application of the workflow described here.

6 Cosentino et al. 2015; Mudge et al. 2010; Mytum and Peterson 2018; Olson et al. 2013.

7 Miles et al. 2015.

8 Elfaragy et al. 2013.

9 Cultural Heritage Imaging, Photogrammetry, (n.d.), <http://culturalheritageimaging.org/Technologies/Photogrammetry/>.

10 Malzbender et al. 2001.

11 Cultural Heritage Imaging 2011; Cultural Heritage Imaging 2013a; Cultural Heritage Imaging 2013b.

12 Dyer et al. 2013.

13 Frank 2014; Frank and Castriota 2021.

14 Heath 2015; Heath 2021.

15 Mudge 2017.

16 Verri 2009.

17 Kent 2015.

18 Dellepiane et al. 2011; Gilboa et al. 2013; Carrero-Pazos and Espinosa-Espinosa 2018.

19 The original .mov file and two converted versions are available at <http://hdl.handle.net/2451/62240>.

20 Banducci et al. 2018; Eschenbrenner-Diemer 2013; Murphy and Poblome 2012.

21 Bentkowska-Kafel et al. 2017; Di Giuseppantonio Di Franco et al. 2018; Garstki 2017; Morris et al. 2018; Rabinowitz 2015.

22 A final point to address is that of data availability. The various steps and applications involved in the workflow mean that many files are created, across different directory structures, and - as a practical matter - on different machines. The total filesize is many gigabytes. The authors did not consider it practical to compile a single digital supplement that captures all steps in such a way that they are easily repeatable. We are, however, very willing to be in dialog with any readers who are implementing this workflow using their own data.

Works Cited

Banducci, L.M., Opitz, R. and Mogetta, M. (2018). “Measuring Usewear on Black Gloss Pottery from Rome through 3D Surface Analysis,” Internet Archaeology 50. <https://doi.org/10.11141/ia.50.12>.

Bentkowska-Kafel, A., Moitinho De Almeida, V., MacDonald, L., Del Hoyo-Meléndez, J., and Mathys, A. (2017). Beyond Photography: An Interdisciplinary, Exploratory Case Study in the Recording and Examination of Roman Silver Coins. In A. Bentkowska-Kafel and L. MacDonald (Eds.), Digital Techniques for Documenting and Preserving Cultural Heritage (35-65). Amsterdam: ARC Humanities Press.

Carrero-Pazos, M. and D. Espinosa-Espinosa. (2018). “Back to basics: a non-photorealistic rendering method for the analysis of texts from 3D Roman inscriptions,” Antiquity 92(364): e7. <https://doi.org/10.15184/aqy.2018.146>.

Cosentino, A., Sgarlata, M., Scandurra, C., Stout, S., Galizia, M., and Santagati, C. (2015). Multidisciplinary Investigations on the Byzantine Oratory of the Catacombs of Saint Lucia in Syracuse. In G. Guidi, R. Scopigno, and J. Barceló (Eds.), International Congress on Digital Heritage - Theme 3 - Analysis and Interpretation (137-140). IEEE. <https://doi.org/10.1109/DigitalHeritage.2015.7419471>.

Cultural Heritage Imaging. (2011). Reflectance Transformation Imaging: Guide to Highlight Image Processing. <http://culturalheritageimaging.org/What_We_Offer/Downloads/rtibuilder/RTI_hlt_Processing_Guide_v14_beta.pdf>.

Cultural Heritage Imaging. (2013a). Reflectance Transformation Imaging: Guide to Highlight Capture. <http://culturalheritageimaging.org/What_We_Offer/Downloads/RTI_Hlt_Capture_Guide_v2_0.pdf>.

Cultural Heritage Imaging. (2013b). Reflectance Transformation Imaging: Guide to RTIViewer. <http://culturalheritageimaging.org/What_We_Offer/Downloads/rtiviewer/RTIViewer_Guide_v1_1.pdf>.

Dellepiane, M., Callieri, M., Corsini, M., and Scopigno, R. (2011). Using Digital 3D Models for Study and Restoration of Cultural Heritage Artifacts. In F. Stanco, S. Battiato, and G. Gallo (Eds.), Digital Imaging for Cultural Heritage Preservation: Analysis, Restoration, and Reconstruction of Ancient Artworks (37-67). London: CRC Press.

Di Giuseppantonio Di Franco, P., Galeazzi, F., and Vassallo, V. (Eds.). (2018). Authenticity and Cultural Heritage in the Age of 3D Digital Reproductions. Cambridge: McDonald Institute for Archaeological Research.

Dyer, J., Verri, G., and Cupitt, J. (2013). Multispectral Imaging in Reflectance and Photo-Induced Luminescence Modes: A User Manual. The British Museum, Charisma Project. <https://www.britishmuseum.org/pdf/charisma-multispectral-imaging-manual-2013.pdf>.

Elfaragy, M., Rizq, A., and Rashwan, M. (2013). “3D Surface Reconstruction Using Polynomial Texture Mapping.” In G. Bebis et al. (Eds.), Advances in Visual Computing. ISVC 2013 (353-362). Berlin: Springer.

Eschenbrenner-Diemer, G. (2013). Les «modèles» égyptiens en bois. Matériau, fabrication, diffusion, de la fin de l’Ancien à la fin du Moyen Empire (c. 2350-1630 BC) (Unpublished doctoral dissertation). University of Lyon, Lyon.

Frank, E. (2014). Documenting Archaeological Textiles with Reflectance Transformation Imaging (RTI). Archaeological Textiles Review (56): 3-13.

Frank, E. and B. Castriota. (2021). Fifty-plus years of on-site metals conservation at Sardis: Correlating treatment efficacy and implementing new approaches. Transcending Boundaries: Integrated Approaches to Conservation. Preprints of the ICOM-CC 19th Triennial Conference. Beijing, China (virtual). In Press.

Garstki, K. (2017). Virtual Representation: the Production of 3D Digital Artifacts. Journal of Archaeological Method and Theory, 24(3): 726-750. <https://doi.org/10.1007/s10816-016-9285-z>.

Gilboa, A., A. Tal, I. Shimshoni, and M. Kolomenkin. (2013). Computer-based, automatic recording and illustration of complex archaeological artifacts. Journal of Archaeological Science 40(2): 1329-1339.

Heath, S. (2015). Closing Gaps with Low-Cost 3D. In B. Olson and W. Caraher (Eds.), Visions of Substance: 3D Imaging in Mediterranean Archaeology (53-62). Grand Forks: University of North Dakota Digital Press.

Heath, S. (2021). Virtual Context for Roman Sculpture. In P. De Staebler and A. Hrychuk Kontokosta (Eds.), Roman Sculpture in Context (257-273). Boston: Archaeological Institute of America.

Hermon, S., Pilides, D., Iannone, G., Georgiou, R., Amico, N., and Ronzino, P. (2012). Ancient Vase 3D Reconstruction and 3D Visualization. In M. Zhou, I. Romanowska, Z. Wu, P. Xu and P. Verhagen (Eds.), Revive the Past. Computer Applications and Quantitative Methods in Archaeology (CAA). Proceedings of the 39th International Conference, Beijing, April 12-16 (59-64). Amsterdam: Pallas Publications.

Kent, B. (2015). 3D Scientific Visualization with Blender. San Rafael: Morgan and Claypool.

Malzbender, T., Gelb, D., and Wolters, H. (2001). Polynomial Texture Maps. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques (519–28). New York: ACM Press. <http://dl.acm.org/citation.cfm?id=383320>.

Miles, J., Pitts, M., Pagi, H., and Earl, G. (2015). Photogrammetry and RTI Survey of Hoa Hakananai’a Easter Island Statue. In A. Traviglia (Ed.), Across Space and Time: Papers from the 41st Conference on Computer Applications and Quantitative Methods in Archaeology, Perth, 25-28 March 2013 (144-155). Amsterdam: Amsterdam University Press.

Morris, C., Peatfield, A., and O’Neill, B. (2018). ‘Figures in 3D:’ Digital Perspectives on Cretan Bronze Age Figurines. Open Archaeology, 4(1). <https://doi.org/10.1515/opar-2018-0003>.

Mudge, M. (2017). Advantages of Integrated RTI and Photogrammetric Acquisition. Presentation at Illumination of Material Culture: A Symposium on Computational Photography and Reflectance Transformation Imaging (RTI), New York, NY, March 7-8, 2017.

Mudge, M., Schroer, C., Earl, G., Martinez, K., Pagi, H., Toler-Franklin, C., Rusinkiewicz, S., Palma, G., Wachowiak, M., and Ashley, M. (2010). Principles and Practices of Robust, Photography-Based Digital Imaging Techniques for Museums. In 11th VAST International Symposium on Virtual Reality, Archaeology and Cultural Heritage, France, 21-24 September 2010. University of Southampton, 27 pages. <http://eprints.ecs.soton.ac.uk/21658/>.

Murphy, E. and Poblome, J. (2012). Technical and Social Considerations of Tools from Roman-period Ceramic Workshops at Sagalassos (Southwest Turkey): Not Just Tools of the Trade? Journal of Mediterranean Archaeology, 25(2): 197-217.

Mytum, H. and H. Peterson. (2018). The Application of Reflectance Transformation Imaging in Historical Archaeology. Historical Archaeology 52(2): 489-503.

Olson, B., Placchetti, R., Quartermaine, J., and Killebrew, A. (2013) The Tel Akko Total Archaeology Project (Akko, Israel): Assessing the suitability of multi-scale 3D field recording in archaeology. Journal of Field Archaeology, 38(3): 244-262.

Rabinowitz, A. (2015). The work of archaeology in the age of digital surrogacy. In B. Olson and W. Caraher (Eds.), Visions of Substance: 3D Imaging in Mediterranean Archaeology (27-42). Grand Forks: University of North Dakota Digital Press.

Rivero, O., J. Ruiz-López, I. Intxaurbe, S. Salazar, and D. Garate. (2019). On the limits of 3D capture: A new method to approach the photogrammetric recording of palaeolithic thin incised engravings in Atxurra Cave (northern Spain). Digital Applications in Archaeology and Cultural Heritage 14: e00106. <https://doi.org/10.1016/j.daach.2019.e00106>.

Solem, D.-O. and E. Nau. (2020). Two New Ways of Documenting Miniature Incisions Using a Combination of Image-Based Modelling and Reflectance Transformation Imaging. Remote Sensing 12(1626). <https://doi.org/10.3390/rs12101626>.

Stein, C., E. Frank, and S. Heath. (2018). Integrating Multispectral Imaging, Reflectance Transformation Imaging (RTI) and Photogrammetry for Archaeological Objects. In 119th Annual Meeting Abstracts (52-53). Boston: Archaeological Institute of America.

Verri, G. (2009). The Spatially Resolved Characterization of Egyptian Blue, Han Blue and Han Purple by Photo-Induced Luminescence Digital Imaging. Analytical and Bioanalytical Chemistry 394(4): 1011-1021. <https://doi.org/10.1007/s00216-009-2693-0>.