Abstract
Wood anatomy is one of the most important methods for timber identification. However, training wood anatomy experts is time-consuming, while at the same time the number of senior wood anatomists with broad taxonomic expertise is declining. Therefore, we want to explore how a more automated, computer-assisted approach can support accurate wood identification based on microscopic wood anatomy. For our exploratory research, we used an available image dataset that has been applied in several computer vision studies, consisting of 112 — mainly neotropical — tree species, each represented by 20 images of transverse sections. Our study aims to review existing computer vision methods and compare the success of species identification based on (1) several image classifiers using manually adjusted texture features, and (2) a state-of-the-art approach for image classification based on deep learning, more specifically Convolutional Neural Networks (CNNs). In line with previous studies, deep learning considerably increases correct identification, leading to an accuracy of up to 95.6%. This remarkably high success rate highlights the fundamental potential of wood anatomy in species identification and motivates us to expand the existing database to an extensive, worldwide reference database with transverse and tangential microscopic images from the most traded timber species and their look-alikes. This global reference database could serve as a valuable future tool for stakeholders involved in combatting illegal logging and would boost the societal value of wood anatomy along with its collections and experts.
Introduction
Progress in computer and internet technology is advancing next-generation phenomics, with innovations in computer science augmenting expert-based description, analysis, and comparison of phenotypes (Houle et al. 2010; MacLeod et al. 2010; Jordan & Mitchell 2015; Havlíček et al. 2019). During recent years, computer vision technologies have boosted the development of tools for the automated description and identification of a wide array of biological objects, including amongst others the shape of leaves (Kumar et al. 2012; Wilf et al. 2016; Barré et al. 2017), herbarium specimens (Unger et al. 2016), wings of insects (Favret & Sieracki 2016), flying birds (Atanbori et al. 2016), shark fins (Hughes & Burghardt 2016) and animals caught in camera-traps (Gomez Villa et al. 2017). In this paper, we focus on a computer-assisted approach for the identification of wood samples from trees and suggest that this identification tool could help protect forests in the future.
Forests cover 30% of the land area on Earth, representing ca. four billion hectares and three trillion trees (Crowther et al. 2015; FAO 2015). The net forest loss during the last 15 years — an area comparable to France, Spain and the UK combined — has aroused great concern due to the potential loss of a range of ecosystem services, amongst others carbon storage and associated climatic feedbacks, conservation of biodiversity, public recreation, and medicinal products (Bonan 2008; FAO 2015). A detailed assessment of global forest change based on high-resolution satellite imaging shows that deforestation is occurring at an even more disturbing speed, especially in the tropics (Hansen et al. 2013; Finer et al. 2018). Unsustainable agriculture, mining, and illegal logging contribute to these alarming deforestation rates, representing a massive threat to global biodiversity. It is estimated that more than 100 million m³ of timber are harvested illegally each year, worth between US $30 and $100 billion per year (Nellemann 2012; UNODC Committee 2016), but prosecution of illegal logging crimes is hampered by the limited availability of forensic timber identification tools (Dormontt et al. 2015). This is especially true for DNA-based methods, because wood mainly consists of dead fibres and hollow conduits that have lost their living cell contents during cell maturation, and various wood-based treatments like heating, drying and ageing break down the DNA content of the remaining living wood cells (Jiao et al. 2012, 2015). Nevertheless, it remains possible to extract DNA from the sapwood of trees to sequence DNA barcodes for species identification (Jiao et al. 2014, 2018; Nithaniyal et al. 2014), assess geographic provenance (Jolivet & Degen 2012; Vlam et al. 2018), and to reconstruct DNA fingerprints verifying the intact chain of custody for routine trade (Lowe et al. 2010).
There is a considerable body of literature on non-genetic methods which have been successfully applied to identify wood samples: wood anatomy (InsideWood 2004–onwards; Hermanson & Wiedenhoeft 2011; Koch et al. 2011; Sarmiento et al. 2011; Helmling et al. 2018), chemical profiling based on mass spectrometry (Espinoza et al. 2015; McClure et al. 2015; Evans et al. 2017), near-infrared spectroscopy (Russ et al. 2009; Braga et al. 2011; Pastore et al. 2011; Bergo et al. 2016), and detector dogs (Braun 2013). Of these non-genetic tools, wood anatomy remains the most frequently used method for taxonomic identification, as highlighted in the recent UN report on the Best Practice Guide on Forensic Timber Identification (UNODC Committee 2016). However, years of wood anatomical training are required to become an expert, and the number of senior wood anatomists with vast taxonomic knowledge is declining. Moreover, wood anatomy generally allows identification of woods at the genus — not species — level (Gasson 2011), while CITES regulations often require species identification. For instance, only the Malagasy species of the ebony wood genus Diospyros are CITES protected, but wood anatomists are not able to distinguish the protected from the non-protected ebony woods. In an attempt to solve these problems, wood anatomists and computer scientists have been joining forces during the last decade to find ways to strengthen the power of identification through wood anatomy to combat illegal logging (Hermanson & Wiedenhoeft 2011). Some computer-assisted identification tools based on wood anatomy — such as InsideWood, macroHolzdata, CITESwoodID, Pl@ntWood and a new softwoods identification system — are already available (InsideWood 2004–onwards; Hermanson & Wiedenhoeft 2011; Koch et al. 2011; Sarmiento et al. 
2011; Richter & Dallwitz 2016), and the first portable applications based on macroscopic imaging have been developed to assist customs officers in the field, such as xylotron (https://www.fs.fed.us/research/highlights/highlights_display.php?in_high_id=585), xylorix (https://www.xylorix.com/), and MyWood-ID (http://mywoodid.frim.gov.my/). However, more sophisticated systems, based on morphometric analyses of cell shapes and mathematical analysis of texture patterns drawing on an extensive worldwide wood image reference dataset, must be further developed in order to allow global species identification in ways that have so far been unavailable.
There have been several studies focusing on computer-assisted wood identification based on transverse macroscopic wood images from a broad taxonomic set of timber species, using an array of classical computer vision algorithms. For instance, Khalid et al. (2008) achieved a high identification accuracy (>95%) for 20 different Malaysian forest species, and Paula Filho et al. (2014) found a recognition rate of up to 88% based on a single texture descriptor using 41 neotropical timber species. A more recent study applied a Convolutional Neural Network approach to macroscopic wood images of 10 species belonging to six genera of the Meliaceae family, with an image recognition success rate ranging from 87% (at species level) to 97% (at genus level; Ravindran et al. 2018).
For our research, we have taken advantage of the most extensive, freely available dataset of light microscopic images of transverse wood sections with a sufficient number of replicates per species. This dataset consists of 112 — mainly neotropical — tree species, each represented by 20 microscopic images of transverse sections (Martins et al. 2013). We prefer microscopic over macroscopic images because the former include much more of the detail that is required to fully explore the value of wood anatomy as an identification tool. We compare different image classifiers based on manually adjusted texture features with a more advanced method based on Convolutional Neural Networks (CNNs) to evaluate computer-assisted microscopic wood anatomy for species identification, and discuss its potential in the fight against illegal logging. Our results are compared with those of previous computer vision studies that used the same Martins database to classify all 2240 images.
Materials and methods
Martins et al. (2013) database
We have used the Martins et al. (2013) database, based on pictures of wood anatomical slides generated in the Laboratory of Wood Anatomy at the Federal University of Paraná (UFPR) in Curitiba, Brazil. The wood sections were made following the standardized sliding microtome technique, stained with a solution of acridine red, chrysoidine and astra blue, mounted between glass slides, and subsequently photographed with an Olympus Cx40 microscope equipped with a digital camera, generating pictures with a resolution of 1024 × 768 pixels. The database includes 20 transverse microscopic pictures each from 37 gymnosperm species (softwoods) and 75 angiosperm species (hardwoods), in total amounting to 2240 pictures for 112 species. The 20 images per species come from different samples, but whether each sample is derived from a different individual, covering the entire distribution range of the species, could not be traced (Luiz Eduardo S. Oliveira, personal communication). It is therefore possible that the intraspecific variation captured by the 20 images per species is underestimated for some species. The taxonomically cleaned species list is shown in Appendix A, with reference to the phylogenetic position of each species and its inclusion in CITES and the top 100 most traded timbers (UNODC 2016, Annex 11).
Since the original images of the Martins database are not of very high quality, we decided to include a small side project to provisionally assess the impact of better-quality images on the performance of the texture features. We therefore made 162 original, higher-quality images from transverse sections of 20 tree species in the Martins dataset for which a minimum number of slides per species was available in the wood slide collection of Naturalis Biodiversity Center (see species marked with an asterisk in Appendix A). Most Naturalis images were acquired with a 10× lens (NA 0.75), resulting in 24-bit colour images of 2592 × 1944 pixels representing a field of view of 1.3 × 1 mm of wood tissue. This additional Naturalis set was used only to test the classifiers built on the original dataset, in order to investigate how well higher-quality images would perform in the classifiers.
Wood identification based on five classifiers and four sets of features
In the jargon of machine learning, classifiers are tools based on supervised statistical methods that can be used to segregate a set of images into two or more categories, here species. These species are represented by images of their wood anatomy, and a classifier categorises them in such a way that features of different images within each species have similar values, while features of images belonging to different species have contrasting values. To enable classifying the images into the different species, a classifier needs to be trained. This is done by developing a function that optimises the segregation of the images based on features extracted from them. Once the classifier is trained, it achieves high precision, meaning accurate recognition of species, when new images are submitted to it. This works under the assumption that these new images belong to the same universe of images as those used in the training dataset. It is therefore essential for a good species recognition classifier that the training dataset embodies the natural intraspecific variation of mature wood, which is covered here by 20 images per species.
Systematic test of the performance of four different texture features for five different classifiers on all 112 species.
Citation: IAWA Journal 41, 4 (2020) ; 10.1163/22941932-bja10029
To develop an idea about the performance of species classification based on the Martins et al. (2013) wood dataset, we tested five common classifier strategies together with four manually selected types of features (Table 1). The five classifiers employ different strategies to build the segregating function based on the extracted features: (1) Support Vector Machines identify the function that maximises the distance between dissimilar images; (2) Nearest Neighbour classifiers identify the function that minimises the distance between similar images; (3) Random Forests construct multiple classifiers (decision trees, which together form the "forest") using different subsets of the available features extracted from images; the obtained classifiers are then applied to a new image, and a final decision about its classification is reached by aggregating the results of the different classifiers, for example by selecting the classification produced by the largest number of classifiers; (4) Multi-layer Perceptrons employ a stacked collection of linear segregators, known as perceptrons, in order to simulate possibly non-linear segregators; and (5) Logistic Regression classifiers estimate the likelihood of an image belonging to a category based on sigmoid non-linear functions. For each classifier, we extracted four types of features, each corresponding to different image properties: (1) Local Binary Patterns (LBP) correspond to colour and grey-level fluctuations found in a predefined window, i.e. neighbourhood, within a given image (Fig. 1C–D); (2) Grey-Level Co-occurrence Matrices (GLCM) correspond to grey-level fluctuations found in an image as a whole, thus characterising textures; and (3) Gabor features (Fig. 1E–F) and (4) Histograms of Oriented Gradients (HOG; Fig. 1G–H) characterise the gradient of variation in grey-level and colour in image pixels belonging to delimited regions within an image (pictures of Fig. 1 and additional images can also be consulted via http://bio-imaging.liacs.nl/galleries/IAWA-WoodClassification/).
(A) Cross-section of Grevillea robusta (angiosperm) and (B) Taxodium distichum (gymnosperm), two species included in the Martins database, with visualization of some of the computer-extracted features used to classify the timbers. All images were generated using the MATLAB Image Processing Toolbox. (A–B) Original cross-sections. (C–D) Local Binary Pattern (LBP) texture feature maps of A–B; the window size for the LBP computation was 3 × 3 pixels. (E–F) Gabor filter with 4 orientations and a scale of 4; the 4 orientations are assembled in one 4 × 4 panel (0°, 45°, 90° and 135° orientation, respectively). (G–H) The Histogram of Oriented Gradients feature for A–B; a window size of 8 × 8 pixels was used to assemble the orientation of the gradients.
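To illustrate the simplest of these texture descriptors, the basic 3 × 3 Local Binary Pattern (Fig. 1C–D) can be sketched in a few lines of Python with NumPy. This is a minimal sketch, not the exact implementation used in our experiments; in practice the window size and sampling parameters are tuned to the image resolution.

```python
import numpy as np

def lbp_map(img):
    """Basic 3x3 Local Binary Pattern: each pixel is encoded as an
    8-bit code by comparing its 8 neighbours against the centre value."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    # top-left corner offsets of the 8 neighbours, clockwise from top-left
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2),
               (2, 2), (2, 1), (2, 0), (1, 0)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[dy:dy + h - 2, dx:dx + w - 2]
        # set this bit where the neighbour is at least as bright as the centre
        out |= (neigh >= centre).astype(np.uint8) << bit
    return out

def lbp_histogram(img):
    """Normalised 256-bin histogram of LBP codes: the texture feature
    vector that is fed to a classifier such as an SVM."""
    codes = lbp_map(img)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

The resulting 256-dimensional histogram summarises the local grey-level fluctuations of a wood section irrespective of where they occur in the image.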
The aim of a classifier is to predict the species identity of an "unseen" sample. The classifier therefore needs to be validated in order to prevent overfitting or selection bias. We generated random training sets with 10-fold cross-validation for all classifiers. In other words, the image dataset was randomly split into 10 subsets, and for each classifier the following procedure was used to optimise its performance: (1) one subset was selected, (2) the classifier was trained to segregate the target classes (i.e. species) using the images in the remaining 9 subsets, (3) the performance of the optimised classifier was tested on the selected subset, (4) steps (1)–(3) were repeated for all 10 subsets, in order to obtain an average performance of the classifier when applied to subsets that were not used for training, and (5) the calibration of the classifier was incrementally adjusted to reach optimal average performance.
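The cross-validation procedure above can be sketched schematically in Python. This is an illustrative skeleton, not our actual pipeline: `train_fn` stands for any training routine that returns a trained prediction function.

```python
import random

def kfold_indices(n_items, k=10, seed=0):
    """Randomly partition item indices into k disjoint folds."""
    idx = list(range(n_items))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(items, labels, train_fn, k=10, seed=0):
    """Steps (1)-(4) above: hold out each fold once, train on the
    remaining k-1 folds, and average the held-out accuracy."""
    folds = kfold_indices(len(items), k, seed)
    accuracies = []
    for held_out in folds:
        train_idx = [i for f in folds if f is not held_out for i in f]
        model = train_fn([items[i] for i in train_idx],
                         [labels[i] for i in train_idx])
        correct = sum(model(items[i]) == labels[i] for i in held_out)
        accuracies.append(correct / len(held_out))
    return sum(accuracies) / k
```

Step (5), the incremental recalibration, would wrap this function in an outer loop over classifier parameters, keeping the parameter set with the best average accuracy.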
Average values for accuracy from this cross-validation are reported. The accuracy rates resulting from our classifiers (Table 1) can be compared with the rates in Table 2, which summarises the results of previously published papers based on the same Martins dataset, enabling us to evaluate the accuracy of comparable classifiers applying manually adjusted texture features. Details of the computational classifiers used and their extracted features are available upon request.
Summary of the results based on computer vision publications that used the Martins et al. (2013) dataset of 112 — mainly neotropical — timber species.
Wood identification based on Convolutional Neural Networks
Recent developments in pattern recognition have driven a new evolution of neural networks in the form of Convolutional Neural Networks (CNNs). CNNs advance on multi-layer perceptrons in two ways: (1) they employ a large collection of layers of perceptrons, such that the connections between perceptrons differ from layer to layer, and (2) information is transferred between layers based on the mathematical notion of convolution, which combines the output of selected layers to build the input for a following layer. With this organisation of perceptrons, based on structured connections within layers and convolutions to transfer information across layers, it is possible to combine supervised and unsupervised classification: images are first aggregated into classes (unsupervised classification), and these classes are then segregated according to observed features (supervised classification). Thus, a CNN can consist of different layers, and each of these layers can be connected in different ways. The layers and their connections are the make-up of the "net", and this structure is referred to as the architecture. Different architectures can therefore be used to organise perceptrons in CNNs, and each architecture has specific characteristics that work well only for specific datasets and require the data to be prepared in specific ways.
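To illustrate the notion of convolution underlying these architectures, the following minimal NumPy sketch shows how a single filter combines a local window of pixels into one output value. Most CNN frameworks implement this operation as cross-correlation; the hand-crafted edge kernel below is only a stand-in, since in a CNN the kernel weights are learned during training.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (strictly, cross-correlation, as in most
    CNN frameworks): each output pixel is a weighted sum of a local
    window of the input image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A vertical-edge kernel reacts to grey-level transitions, much like the
# hand-crafted gradient features (HOG); a CNN learns such weights itself.
edge_kernel = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])
```

Stacking many such filters in successive layers, interleaved with non-linearities, is what allows a CNN to build increasingly abstract representations of the wood anatomy.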
The preparation of the data before classification is an important step in CNN-based classification. The notion of transfer learning (Pan & Yang 2010), which corresponds to bootstrapping the training steps using images from a different domain — i.e. not related to wood anatomy but featuring similar structure compared with the anatomical images — has greatly enhanced the learning outcome of training classifiers with CNNs in our experiments. Using this technique, we pre-trained the CNNs in our experiments with the much larger dataset available in ImageNet (Russakovsky et al. 2013), a publicly available resource of labelled images that is often used for benchmarking of algorithms (http://www.image-net.org/). This helps to avoid training the CNN from scratch with a dataset that is relatively small. In addition, there are strategies in CNNs to prevent overfitting by tuning the weights in the layers of the CNN. These strategies structurally enhance the learning outcome of the CNN, and performance can increase further when more labelled data are added. Details of the deep learning analyses are provided in Appendix B.
The results obtained with CNNs have been outstanding; however, deploying a CNN is not an out-of-the-box exercise. We have investigated four popular CNN architectures, namely GoogLeNet, AlexNet, VGG16 and ResNet101, and explored their accuracy on the Martins database. For AlexNet, the images were resampled to 256 × 256 pixels, whereas the original images were used for the other architectures. The results of our experiments for the four CNN architectures are listed in Table 3.
Systematic test of the performance of Convolutional Neural Networks based on four different architectures (GoogLeNet, AlexNet, VGG16, ResNet101).
Results and discussion
We have evaluated the performance of computer vision classification of trees at the species level based on the largest available database of microscopic images of transverse sections, covering 20 images for each of 112 timber species (Martins et al. 2013). Using a combination of five different classifiers, each extracting four types of features, we conclude that the Support Vector Machine (SVM) using the Local Binary Pattern feature (LBP; Fig. 1C–D) performs best in identifying the wood images, reaching 89.3% accuracy (Table 1). This surprisingly high recognition rate based on only transverse sections outperforms the LBP analysis in the initial Martins et al. (2012) experiments (89.3 vs 80.7%; Tables 1 and 2). This improvement results from experimentation with the parameters of the LBP method: for a given image resolution, these parameters can be tuned to an optimum at which the feature performs best.
In the past, combinations of additional features have been extracted from the Martins dataset to further increase the species recognition rate (Table 2). For instance, the combination of Local Binary Patterns and Local Phase Quantization features initially yielded a recognition rate of 86.5% (Martins et al. 2012). Martins et al. (2015) applied a large collection of texture-based features to explore local patterns found in images, namely Gabor filters and Local Phase Quantization, and point-based features to explore individual pixel values — namely the scale-invariant feature transform (SIFT) and speeded-up robust features (SURF) — resulting in a recognition score of 93%. Another approach, the quadtree approach, which recursively subdivides the image into square regions (Samet 1984), was applied by Cavalin et al. (2013); computing an ensemble of features in these subdivisions boosted the recognition rate to 93.2%. Kapp et al. (2012) also used a quadtree-based approach to assess multiple sets of features on the same dataset and reported recognition scores of 95%. Finally, Hafemann et al. (2014) compared textural descriptors with classical classifiers to the performance of a Convolutional Neural Network (CNN), which yielded an accuracy of 95% (Table 2).
Based on a CNN with the ResNet101 architecture, we could further improve this high species recognition rate up to 96.4%, while at the gymnosperm versus angiosperm level a recognition rate of over 99% could be accomplished for all three CNN architectures (Table 3). Misidentification at the species level may have been caused by the striking resemblance between transverse sections of different species, the poor quality of some of the sections, limits to image quality in the dataset, or taxonomic synonymy issues. For example, the original Martins dataset included Cephalotaxus drupacea and C. harringtonia as separate species, while these are now considered synonyms (Appendix A). This implies that the species recognition rates in Tables 1–3 should be even higher, because some of the mismatches (false negatives) are actually classified correctly (true positives). The few misidentifications at the higher gymnosperm vs angiosperm level are due to the inclusion of Ephedra, one of the few vessel-bearing gymnosperms, whose transverse sections resemble vessel-bearing angiosperm wood more closely than the typical tracheid-based gymnosperm wood. Further insight into classification performance requires images of excellent quality, which is not the case for the original images in the Martins database. Our preliminary analysis based on the original, higher-resolution Naturalis images showed that the texture features studied resulted in higher performance scores across the complete spectrum of features. However, we could not generate a sufficient number of our own images to train a classifier on them, which is why the Naturalis images were only used for testing. Further experiments are required to define how many images per species, at a given image quality, need to be added to obtain a stable and accurate prediction from the trained classifier.
In future work, we will also combine our results with ongoing research about the combination of knowledge-based reasoning and machine learning to improve the performance of the classifiers, enabling us to account explicitly for expert knowledge together with knowledge synthesis obtained through machine learning.
The high species recognition scores presented in Tables 1–3 support the validity of wood anatomy for species identification and encourage us to further expand the existing Martins dataset with additional microscopic images. As a first step, it would be interesting to evaluate how much the species recognition rates would improve when a complementary dataset of tangential wood images (20 per species, preferably from 20 mature individuals covering the entire range of the species) is added. As a second step, a more complete reference database, also including transverse and tangential images from the top 100 traded timber species (Annex 11 in UNODC Committee 2016) and all CITES-listed timbers and their look-alikes, is desired to further fine-tune existing computer-assisted tools for more challenging classification problems. Adding radial sections is not desirable, as the length of the rays in radial sections depends heavily on the exact radial orientation plane and the waviness of the rays, thereby jeopardizing the accuracy of the classifier.
In terms of the workflow for the extension of the dataset, it is of vital importance that a wood anatomist makes a first assessment of the representativeness of a wood sample or an available glass slide. The online information provided by scientific wood collections around the world, summarised in Index Xylariorum (Stern 1988) and continuously updated under the guidance of IAWA (http://www.iawa-website.org/en/Wood_collection/Overview_of_Wood_Collection.shtml) and GTTN (https://globaltimbertrackingnetwork.org/products/iawa-index-xylariorum/), is helpful in this regard. Once collected and approved, the wood sample can be further processed (sections and images) and subsequently added to the dataset. In order to capture texture features well, the resolving power of the image should be properly addressed during acquisition. For example, the underperformance of some of the features (Tables 1 and 2) can be partially explained by the lack of resolution — and thus detail — of some of the images. Therefore, the process of selecting wood samples and making high-quality images should be carried out in close collaboration between a wood anatomist and a computational scientist with thorough knowledge of imaging and machine learning. In this manner, a high-quality dataset can be produced for the training of the classifier.
If these global wood databases were matched with complementary databases (anatomy, chemical profiling, DNA) representing the same species, wood scientists could develop a robust tool that (1) facilitates efficient and accurate identification of traded woods, (2) enables fine-scale assessment of the geographic provenance of imported logs, and (3) allows DNA fingerprinting. These complementary tools would boost forensic timber research and permit courts to prosecute wood trading companies for smuggling illegal timber. Substantial fines have already been imposed by courts on wood trading companies (e.g., Lumber Liquidators, Gibson Guitars), proving that enforcement of timber laws (e.g., EUTR, Lacey Act) can be successful. To make the anatomy tool generally accessible, the trained computer algorithms can be implemented in a cloud-based computer vision tool enabling everyone to identify illegally traded timbers. However, this can only work when front-line officers and other stakeholders in the field know how to prepare (unstained) wood sections of sufficient quality and upload pictures to the wood identification web server. Here too, wood anatomists can play an important role in training stakeholders to use proper basic tools (cooking pot, sliding microtome, sharp disposable blades, standard light microscope with camera). Evidently, before deciding to make wood sections, stakeholders can do a preliminary screening using one of the available portable devices that identify woods based on a cut surface (e.g. xylotron, xylorix), and then proceed to making wood sections of the suspicious samples.
Conclusions
The use of traditional wood anatomical descriptions following standardized features (IAWA Committee 1989, 2004) has been successfully applied to characterize and compare angiosperm and gymnosperm genera, but we are facing its limits with respect to timber identification at the species level. Computer-assisted classification based on texture features takes into account the standardized anatomical features and at the same time integrates other features, such as colour variation and acuteness of edges. We therefore believe that computer vision techniques present a powerful tool to boost the accuracy of timber recognition based on microscopic wood anatomy. In this paper, we found a remarkably high success rate of computer-assisted identification using only transverse wood sections from 112 species (Tables 1–3). However, we must stress that the Martins database is far from complete as a test database to rigorously assess the value of computer vision algorithms in timber identification. We therefore encourage expanding this database by adding wood samples from the top 100 most traded timber species, along with the CITES-listed timber species and their look-alikes. Through close interaction between computational scientists and wood anatomists, our community can compile an extensive, worldwide reference database including a sufficient number of transverse and tangential microscopic images from the species of interest. Whether or not 20 images per species will suffice is a topic for further investigation and will probably also depend on various image-related aspects, such as image resolution, biological and staining variation amongst the images, and section artefacts.
In the short term, it will be important to develop this global reference database of microscopic wood images representing all relevant traded timber species. In addition, the training of classifiers should be benchmarked on a large image dataset in combination with analyses of the mismatches, as only then will the true benefits of the classification procedure emerge. In the longer term, a globally concerted effort between wood anatomists and computer scientists should be able to implement an open-access, cloud-based, computer-vision classification tool enabling a more efficient and more accurate wood identification process. This wood identification tool would benefit scientists, customs officers, and other stakeholders who are confronted with the identification of wood samples, including bona fide wood trading companies that are eager to receive more support in efficient identification protocols for their shipments. Moreover, we envision that the accuracy of this online tool will be robust enough to support court prosecutions of wood trading companies that violate (inter)national regulations; evidently, these online identifications must be validated by wood anatomy experts in a court of law. It would be a great opportunity for IAWA to play a leading role in facilitating both the short- and longer-term goals, and ideally also to act as a coordinator to help build complementary reference datasets based on DNA-based methods and chemical profiling. Only a collaborative effort amongst wood biologists will be able to contribute to the conservation of our forests and their wildlife and will generate increasing awareness of the societal relevance of scientific wood collections and wood anatomical expertise worldwide.
Corresponding author; email: frederic.lens@naturalis.nl
Acknowledgements
We would like to thank the scientists who compiled the Martins dataset and made it publicly available. YG and XT are partially funded by a scholarship from the China Scholarship Council (CSC). MJ is supported by Plant.ID, which received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 765000. Two anonymous reviewers are acknowledged for their valuable suggestions. We also thank Bertie Joan van Heuven for imaging the available slides at Naturalis Biodiversity Center.
References
Bergo MCJ, Pastore TCM, Coradin VTR, Wiedenhoeft AC, Braga JWB. 2016. NIRS identification of Swietenia macrophylla is robust across specimens from 27 countries. IAWA J. 37: 420–430. DOI: 10.1163/22941932-20160144.
Bonan GB. 2008. Forests and climate change: forcings, feedbacks, and the climate benefits of forests. Science 320: 1444–1449. DOI: 10.1126/science.1155121.
Braga J, Pastore T, Coradin V, Camargos J, da Silva A. 2011. The use of near-infrared spectroscopy to identify solid wood specimens of Swietenia macrophylla (CITES Appendix II). IAWA J. 32: 285–296. DOI: 10.1163/22941932-90000058.
Braun B. 2013. Wildlife detector dogs — a guideline on the training of dogs to detect wildlife in trade. WWF, Frankfurt.
Cavalin PR, Kapp MN, Martins J, Oliveira LS. 2013. A multiple feature vector framework for forest species recognition. In: 28th annual ACM symposium on applied computing: 16–20. ACM, New York.
Crowther TW, Glick HB, Covey KR, Bettigole C, Maynard DS, et al. 2015. Mapping tree density at a global scale. Nature 525: 201–205. DOI: 10.1038/nature14967.
Dormontt EE, Boner M, Braun B, Breulmann G, Degen B, et al. 2015. Forensic timber identification: it’s time to integrate disciplines to combat illegal logging. Biol. Cons. 191: 790–798. DOI: 10.1016/j.biocon.2015.06.038.
Espinoza EO, Wiemann MC, Barajas-Morales J, Chavarria GD, McClure PJ. 2015. Forensic analysis of CITES-protected Dalbergia timber from the Americas. IAWA J. 36: 311–325. DOI: 10.1163/22941932-20150102.
Evans PD, Mundo IA, Wiemann MC, Chavarria GD, McClure PJ, Voin D, Espinoza EO. 2017. Identification of selected CITES-protected Araucariaceae using DART TOFMS. IAWA J. 38: 266–281. DOI: 10.1163/22941932-20170171.
FAO. 2015. Global Forest Resources Assessment 2015: how have the world’s forests changed? FAO, Rome.
Favret C, Sieracki JM. 2016. Machine vision automates species identification scaled towards production levels. Syst. Entomol. 41: 133–143. DOI: 10.1111/syen.12146.
Finer M, Novoa S, Weisse MJ, Petersen R, Mascaro J, Souto T, Stearns F, Martinez RG. 2018. Combatting deforestation: from satellite to intervention. Science 360: 1303–1305. DOI: 10.1126/science.aat1203.
Gasson P. 2011. How precise can wood identification be? Wood anatomy’s role in support of the legal timber trade, especially CITES. IAWA J. 32: 137–154. DOI: 10.1163/22941932-90000049.
Gomez Villa A, Salazar A, Vargas F. 2017. Towards automatic wild animal monitoring: identification of animal species in camera-trap images using very deep convolutional neural networks. Ecol. Inform. 41: 24–32. DOI: 10.1016/j.ecoinf.2017.07.004.
Hafemann LG, Oliveira LS, Cavalin PR. 2014. Forest species recognition using deep convolutional neural networks. In: International conference on pattern recognition: 1103–1107.
Hansen MC, Potapov PV, Moore R, Hancher M, Turubanova SA, Tyukavina A, Thau D, Stehman SV, Goetz SJ, Loveland TR, Kommareddy A, Egorov A, Chini L, Justice CO, Townshend JRG. 2013. High-resolution global maps of 21st-century forest cover change. Science 342: 850–853. DOI: 10.1126/science.1244693.
Havlíček V, Córcoles AD, Temme K, Harrow AW, Kandala A, Chow JM, Gambetta JM. 2019. Supervised learning with quantum-enhanced feature spaces. Nature 567: 209–212. DOI: 10.1038/s41586-019-0980-2.
Helmling S, Olbrich A, Heinz I, Koch G. 2018. Atlas of vessel elements. Identification of Asian timbers. IAWA J. 39: 249–352. DOI: 10.1163/22941932-20180202.
Hermanson JC, Wiedenhoeft AC. 2011. A brief review of machine vision in the context of automated wood identification systems. IAWA J. 32: 233–250. DOI: 10.1163/22941932-90000054.
Houle D, Govindaraju DR, Omholt S. 2010. Phenomics: the next challenge. Nature Rev. Gen. 11: 855–866. DOI: 10.1038/nrg2897.
Hughes B, Burghardt T. 2017. Automated visual fin identification of individual great white sharks. Int. J. Comput. Vis. 122: 542–557. DOI: 10.1007/s11263-016-0961-y.
IAWA Committee. 1989. IAWA list of microscopic features for hardwood identification. IAWA Bull. n.s. 10: 219–332. DOI: 10.1002/fedr.19901011106.
IAWA Committee. 2004. IAWA list of microscopic features for softwood identification. IAWA J. 25: 1–70. DOI: 10.1163/22941932-90000349.
InsideWood. 2004–onwards. Available online at http://insidewood.lib.ncsu.edu/search (accessed 11 January 2018).
Jiao L, Yin Y, Xiao F, Sun Q, Song K, Jiang X. 2012. Comparative analysis of two DNA extraction protocols from fresh and dried wood of Cunninghamia lanceolata (Taxodiaceae). IAWA J. 33: 441–456. DOI: 10.1163/22941932-90000106.
Jiao L, Yin Y, Cheng Y, Jiang X. 2014. DNA barcoding for identification of the endangered species Aquilaria sinensis: comparison of data from heated or aged wood samples. Holzforschung 68: 487–494. DOI: 10.1515/hf-2013-0129.
Jiao L, Liu X, Jiang X, Yin Y. 2015. Extraction and amplification of DNA from aged and archaeological Populus euphratica wood for species identification. Holzforschung 69: 925–931. DOI: 10.1515/hf-2014-0224.
Jiao L, Yu M, Wiedenhoeft AC, He T, Li J, Liu B, Jiang X, Yin Y. 2018. DNA barcode authentication and library development for the wood of six commercial Pterocarpus species: the critical role of xylarium specimens. Scientific Reports 8: 1945. DOI: 10.1038/s41598-018-20381-6.
Jolivet C, Degen B. 2012. Use of DNA fingerprints to control the origin of sapelli timber (Entandrophragma cylindricum) at the forest concession level in Cameroon. Forensic Sci. Int. Genet. 6: 487–493. DOI: 10.1016/j.fsigen.2011.11.002.
Jordan MI, Mitchell TM. 2015. Machine learning: trends, perspectives, and prospects. Science 349: 255–260. DOI: 10.1126/science.aaa8415.
Kapp M, Bloot R, Cavalin PR, Oliveira LS. 2012. Automatic forest species recognition based on multiple feature sets. In: International joint conference of neural networks: 1296–1303.
Khalid M, Lee ELY, Yusof R, Nadaraj M. 2008. Design of an intelligent wood species recognition system. IJSSST 9(3): 9–19.
Koch G, Richter HG, Schmitt U. 2011. Design and application of CITESwoodID — computer-aided identification and description of CITES-protected timbers. IAWA J. 32: 213–220. DOI: 10.1163/22941932-90000052.
Kumar N, Belhumeur PN, Biswas A, Jacobs DW, Kress WJ, Lopez IC, Soares JVB. 2012. Leafsnap: a computer vision system for automatic plant species identification. Lect. Notes Comput. Sci. 7573: 502–516.
Lowe A, Wong K, Tiong Y, Iyerh S, Chew F. 2010. A DNA method to verify the integrity of timber supply chains; confirming the legal sourcing of merbau timber from logging concession to sawmill. Silvae Genet. 59: 263–268. DOI: 10.1515/sg-2010-0037.
MacLeod N, Benfield M, Culverhouse P. 2010. Time to automate identification. Nature 467: 154–155. DOI: 10.1038/467154a.
Martins J, Oliveira LS, Sabourin R. 2012. Combining textural descriptors for forest species recognition. In: 38th annual conference on IEEE Industrial Electronics Society: 1483–1488.
Martins J, Oliveira LS, Nisgoski S, Sabourin R. 2013. A database for automatic classification of forest species. Mach. Vis. Appl. 24: 567–578. DOI: 10.1007/s00138-012-0417-5.
Martins J, Oliveira LS, Britto Jr AS, Sabourin R. 2015. Forest species recognition based on dynamic classifier selection and dissimilarity feature vector representation. Mach. Vis. Appl. 26: 279–293. DOI: 10.1007/s00138-015-0659-0.
McClure PJ, Chavarria GD, Espinoza E. 2015. Metabolic chemotypes of CITES protected Dalbergia timbers from Africa, Madagascar, and Asia. Rapid Commun. Mass Spectrom. 29: 783–788. DOI: 10.1002/rcm.7163.
Nellemann C. 2012. Green carbon, black trade: illegal logging, tax fraud and laundering in the world’s tropical forests. A rapid response assessment. United Nations Environment Programme, GRID-Arendal. Birkeland Trykkeri AS, Birkeland.
Nithaniyal S, Newmaster SG, Ragupathy S, Krishnamoorthy D, Vassou SL, Parani M. 2014. DNA barcode authentication of wood samples of threatened and commercial timber trees within the tropical dry evergreen forest of India. PLoS ONE 9: e107669. DOI: 10.1371/journal.pone.0107669.
Pan SJ, Yang Q. 2010. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering 22: 1345–1359. DOI: 10.1109/TKDE.2009.191.
Pastore TCM, Braga JWB, Coradin VTR, Magalhães WLE, Okino EYA, Camargos JAA, de Muñiz GIB, Bressan OA, Davrieux F. 2011. Near-infrared spectroscopy (NIRS) as a potential tool for monitoring trade of similar woods: discrimination of true mahogany, cedar, andiroba, and curupixá. Holzforschung 65: 73–80. DOI: 10.1515/HF.2011.010.
Paula Filho PL, Oliveira LS, Nisgoski S, Britto Jr AS. 2014. Forest species recognition using macroscopic features. Mach. Vis. Appl. 25: 1019–1031. DOI: 10.1007/s00138-014-0592-7.
POWO. 2019. Plants of the world online. Facilitated by the Royal Botanic Gardens, Kew. Published on the Internet; http://www.plantsoftheworldonline.org/ (retrieved 4 October 2019).
Ravindran P, Costa A, Soares R, Wiedenhoeft AC. 2018. Classification of CITES-listed and other neotropical Meliaceae wood images using convolutional neural networks. Plant Meth. 14: 25. DOI: 10.1186/s13007-018-0292-9.
Richter HG, Dallwitz MJ. 2016 onwards. Commercial timbers: descriptions, illustrations, identification, and information retrieval. In English, French, German, Portuguese, and Spanish. Version: February 2016. http://delta-intkey.com.
Russ A, Fišerová M, Gigac J. 2009. Preliminary study of wood species identification by NIR spectroscopy. Wood Research 54: 23–31.
Russakovsky O, Deng J, Huang Z, Berg A, Fei-Fei L. 2013. Detecting avocados to zucchinis: what have we done, and where are we going? In: Proc. IEEE int. conf. comput. vision: 2064–2071.
Samet H. 1984. The quadtree and related hierarchical data structures. ACM Computing Surveys 16: 187–260. DOI: 10.1145/356924.356930.
Sarmiento C, Détienne P, Heinz C, Molino J-F, Grard P, Bonnet P. 2011. Pl@ntWood: a computer-assisted identification tool for 110 species of Amazon trees based on wood anatomical features. IAWA J. 32: 221–232. DOI: 10.1163/22941932-90000053.
Stern WL. 1988. Index Xylariorum. Institutional wood collections of the world. 3. IAWA J. n.s. 9: 203–252. DOI: 10.1163/22941932-90001072.
Unger J, Merhof D, Renner S. 2016. Computer vision applied to herbarium specimens of German trees: testing the future utility of the millions of herbarium specimen images for automated identification. BMC Evol. Biol. 16: 248. DOI: 10.1186/s12862-016-0827-5.
UNODC Committee. 2016. Best practice guide on forensic timber identification. United Nations Office for Drugs and Crime, Vienna.
Vlam M, de Groot GA, Boom A, Copini P, Laros I, Veldhuijzen K, Zakamdi D, Zuidema PA. 2018. Developing forensic tools for an African timber: regional origin is revealed by genetic characteristics, but not by isotope signature. Biol. Conserv. 220: 262–271. DOI: 10.1016/j.biocon.2018.01.031.
Wilf P, Zhang S, Chikkerurd S, Little SA, Wing SL, Serre T. 2016. Computer vision cracks the leaf code. PNAS 113: 3305–3310. DOI: 10.1073/pnas.1524473113.
Footnotes
Edited by Pieter Baas
Overview of the 112 woody species included in the Martins database, with reference to their phylogenetic position and their inclusion in CITES and the top 100 most traded timbers sensu UNODC (2016). Species synonymy has been updated using the Plants of the World Online portal (POWO 2019; http://www.plantsoftheworldonline.org/). Additional images of transverse wood sections from available microscopic slides in the Naturalis collection were added to the original Martins database for the 20 species marked with an asterisk (with the number of Naturalis pictures in parentheses).
Citation: IAWA Journal 41, 4 (2020); DOI: 10.1163/22941932-bja10029
Appendix B. Details of the deep learning analyses
The different nets for the deep learning classification were implemented in Caffe (https://caffe.berkeleyvision.org/). In addition, we used numpy, scikit-learn, and OpenCV-python in the experiments. For the trained nets, we used deploy.prototxt for inference. The deep nets (VGGnets, Residual Networks, and GoogLeNet) were all run with the same training parameters. Both the Martins and the Naturalis-extended datasets were used in the training. The nets were trained on an Ubuntu-based Linux workstation equipped with two NVIDIA TITAN GPUs.
Within the Caffe environment the following settings are used:
Detailed information is available on request. We realize that current deep learning work has largely moved to TensorFlow (https://www.tensorflow.org/) and other environments. However, our original experiments were performed in Caffe, and we therefore report these parameters specifically.
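For readers unfamiliar with Caffe, training parameters of the kind referred to above are declared in a solver definition file. The fragment below is a sketch of such a file; every value shown is an illustrative placeholder in the style of common Caffe defaults, not one of the settings used in our experiments.

```
# solver.prototxt — illustrative placeholders only, not this study's settings
net: "train_val.prototxt"      # network definition with data layers
base_lr: 0.001                 # initial learning rate
lr_policy: "step"              # decrease the learning rate in steps
gamma: 0.1                     # factor applied to lr at each step
stepsize: 10000                # iterations between lr decreases
momentum: 0.9
weight_decay: 0.0005
max_iter: 50000                # total training iterations
snapshot: 5000                 # save intermediate weights every N iterations
snapshot_prefix: "snapshots/wood"
solver_mode: GPU
```

The deploy.prototxt used for inference contains the same layer architecture as the training definition, but with the data and loss layers replaced by a plain input specification.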