Named Entity Recognition and Classification on Historical Documents: A Survey

MAUD EHRMANN, Ecole Polytechnique Fédérale de Lausanne
AHMED HAMDI, University of La Rochelle
ELVYS LINHARES PONTES, University of La Rochelle
MATTEO ROMANELLO, Ecole Polytechnique Fédérale de Lausanne
ANTOINE DOUCET, University of La Rochelle

After decades of massive digitisation, an unprecedented amount of historical documents is available in digital format, along with their machine-readable texts. While this represents a major step forward with respect to preservation and accessibility, it also opens up new opportunities in terms of content mining, and the next fundamental challenge is to develop appropriate technologies to efficiently search, retrieve and explore information from this 'big data of the past'. Among semantic indexing opportunities, the recognition and classification of named entities are in great demand among humanities scholars. Yet, named entity recognition (NER) systems are heavily challenged with diverse, historical and noisy inputs. In this survey, we present the array of challenges posed by historical documents to NER, inventory existing resources, describe the main approaches deployed so far, and identify key priorities for future developments.

CCS Concepts: • Computing methodologies → Information extraction; Machine learning; Language resources; • Information systems → Digital libraries and archives.

Additional Key Words and Phrases: named entity recognition and classification, historical documents, natural language processing, digital humanities

1 INTRODUCTION

For several decades now, digitisation efforts by cultural heritage institutions have been contributing an increasing amount of facsimiles of historical documents. Initiated in the 1980s with small-scale, in-house projects, the 'rise of digitisation' grew further until it reached, already in the early 2000s, a certain maturity with large-scale, industrial-level digitisation campaigns [188]. Billions of images are being acquired and, when it comes to textual documents, their content is transcribed either manually via dedicated interfaces, or automatically via optical character recognition (OCR) or handwritten text recognition (HTR) [31, 129]. As a result, it is nowadays commonplace for memory institutions (e.g. libraries, archives, museums) to provide digital repositories that offer rapid, time- and location-independent access to facsimiles of historical documents as well as, increasingly, full-text search over some of these collections. Beyond this great achievement in terms of preservation and accessibility, the availability of historical records in machine-readable formats bears the potential of new ways to engage with their contents. In this regard, the application of machine reading to historical documents is potentially transformative, and the next fundamental challenge is to adapt and develop appropriate technologies to efficiently search, retrieve and explore information from this 'big data of the past' [98].
Here research is stepping up, and the interdisciplinary efforts of the digital humanities (DH), natural language processing (NLP) and computer vision communities are progressively pushing forward the processing of facsimiles, as well as the extraction, linking and representation of the complex information enclosed in transcriptions of digitised collections. In this endeavor, information extraction techniques, and particularly named entity (NE) processing, can be considered among the first and most crucial processing steps.

Fig. 1. Swiss journal L'Impartial, issue of 31 Dec 1918. Facsimile of the first page (left), zoom on an article (middle), and OCR of this article as provided by the Swiss National Library (completed in the 2010s) (right).

Named entity recognition and classification (NER for short) corresponds to the identification of entities of interest in texts, generally of the types Person, Organisation and Location. Such entities act as referential anchors which underlie the semantics of texts and guide their interpretation. Acknowledged some twenty years ago, NE processing has undergone major evolution since then, from entity recognition and classification to entity disambiguation and linking, and is representative of the evolution of information extraction from a document- to a semantic-centric viewpoint [156]. As for most NLP research areas, recent developments around NE processing are dominated by deep neural networks and the usage of embedded language representations [37, 110].

Since their inception, NE-related tasks have been of ever-increasing importance and lie at the core of virtually any text mining application. From the NLP perspective, NE processing is useful first and foremost in information retrieval, or the activity of retrieving a specific set of documents within a collection given an input query. Guo et al. [78] as well as Lin et al. [118] showed that more than 70% of queries against modern search engines contain a named entity, and it has been suggested that more than 30% of content-bearing words in news text correspond to proper names [69]. Entity-based document indexing is therefore desirable. NEs are also highly beneficial in information extraction, or the activity of finding information within large volumes of unstructured texts. The extraction of salient facts about predefined types of entities in free texts is indeed an essential part of question answering [127], media monitoring [182], and opinion mining [9]. Besides, NER is helpful in machine translation [85], text summarisation [97], and document clustering [62], especially in a multilingual setting [181]. As for historical material (cf. Figure 1), primary needs also revolve around retrieving documents and information, and NE processing is of similar importance [35].
There are fewer query logs over historical collections than for the contemporary web, but several studies demonstrate how prevalent entity names are in humanities users' searches: 80% of search queries on the national library of France's portal Gallica contain a proper name [33], and geographical and person names dominate the searches of various digital libraries, be they of artworks, domain-specific historical documents, historical newspapers, or broadcasts [14, 32, 92]. Along the same line, several user studies emphasise the role of entities in various phases of the information-seeking workflow of historians [47, 71], now also reflected in the 'must-haves' of exploration interfaces, e.g. as search facets over historical newspapers [49, 145] or as automatic suggestions over large-scale cultural heritage records [72]. Besides document indexing, named entity recognition can also benefit downstream processes (e.g. biography reconstruction [64] or event detection [176]), as well as various data analyses and visualisations (e.g. on networks [194]). Finally, and perhaps most importantly, NER is the first step of entity linking, which can support the cross-linking of multilingual and heterogeneous collections based on authority files and knowledge bases.

Overall, entity-based semantic indexing can greatly support the search and exploration of historical documents, and NER is increasingly being applied to such material. Yet, the recognition and classification of NEs in historical texts is not straightforward, and performances are rarely on par with what is usually observed on contemporary, well-edited English news material [50]. In particular, NER on historical documents faces the challenges of domain heterogeneity, input noisiness, dynamics of language, and lack of resources. While some of these issues have already been tackled in isolation in other contexts (e.g. with user-generated text), what makes the task particularly difficult is their combination, as well as their magnitude: texts are severely noisy, domains and time periods are far apart, and there is no (or not yet) historical web to easily crawl in order to capture language models.

In this context of new material, interests and needs, and in times of rapid technological change with deep learning, this paper presents a survey of NER research on historical documents. The objectives are to study the main challenges facing named entity recognition and classification when applied to historical documents, to inventory the strategies deployed to deal with them so far, and to identify key priorities for future developments. Section 2 outlines the objectives, the scope and the methodology of the survey, and Section 3 provides background on NE processing. Next, Section 4 introduces and discusses the challenges of NER on historical documents. In response, Section 5 proposes an inventory of existing resources, while Sections 6 and 7 present the main approaches, in general and in view of specific challenges, respectively. Finally, Section 8 discusses next priorities and concludes.

2 FRAMING OF THE SURVEY

2.1 Objectives

This survey focuses on NE recognition and classification, and does not consider entity linking nor entity relation extraction. With the overall objective of characterising the landscape of NER on historical documents, the survey reviews the history, the development, and the current state of related approaches.
In particular, we attempt to answer the following questions:

Q1 What are the key challenges posed by historical documents to NER?
Q2 Which existing resources can be leveraged in this task, and what is their coverage in terms of historical periods, languages and domains?
Q3 Which strategies were developed and successfully applied in response to the challenges faced by NER on historical documents? Which aspects of NER systems require adaptation in order to obtain satisfying performances on this material?

While investigating the answers to these questions, the survey will also shed light on the variety of domains and usages of NE processing in the context of historical documents.

2.2 Document Scope and Methodology

Cultural heritage covers a wide range of material, and the document scope of this survey, centred on 'historical documents', needed to be clearly delineated. From a document processing perspective, there is no specific definition of what a historical document is, only shared intuitions based on multiple criteria. Time seems an obvious one, but where to draw the line between historical and contemporary documents is a tricky question. Other aspects include the digital origin (digitised or born-digital), the type of writing (handwritten, typeset or printed), the state of the material (heavily degraded or not), and of the language (historical or not). None of these criteria defines a clear set of documents, and any attempt at a definition eventually resorts to subjective decisions. In this survey, we consider as historical document any document of mainly textual nature, produced or published up to 1979, regardless of its topic, genre, style or acquisition method. The year 1979 is not arbitrary and corresponds to one of the most recent 'turning points' acknowledged by historians [26]. This document scope is rather broad, and the question of the too far-reaching 'textual nature' can be raised in relation to documents such as engravings, comics, card boards or even maps, which can also contain text. In practice, however, NER has mainly been applied to printed documents so far, and these represent most of the material of the work reviewed here.

The compilation of the literature was based on the following strategies: scanning of the archives of relevant journals and conference series, search engine-based discovery, and citation chaining. We considered key journals and conference series both in the fields of natural language processing and digital humanities (see Table 1). For searching, we used a combination of keywords over the Google Scholar and Semantic Scholar search engines.1 With a few exceptions, we only considered publications that included a formal evaluation.

Table 1. Publication venues whose archives were scanned as part of this survey (in alphabetical order).

Title | Type | Discipline
Annual Meeting of the Association for Computational Linguistics (ACL) | proceedings | CL/NLP
Digital Humanities conference | proceedings | DH
Digital Scholarship in the Humanities (DSH) | journal | DH
Empirical Methods in Natural Language Processing (EMNLP) | proceedings | NLP
International Conference on Language Resources and Evaluation (LREC) | proceedings | NLP
International Journal on Digital Libraries | journal | DH
Journal of Data Mining and Digital Humanities (JDMDH) | journal | DH
Journal on Computing and Cultural Heritage (JOCCH) | journal | DH
Language Resources and Evaluation (LRE) | journal | NLP
SIGHUM Workshop on Computational Linguistics for Cultural Heritage | proceedings | CL/NLP/DH
2.3 Previous surveys and target audience

Previous surveys on NER focused either on approaches in general, giving an overview of features, algorithms and applications, or on specific domains or languages. In the first group, Nadeau et al. [130] provided the first comprehensive survey after a decade of work on NE processing, reviewing existing machine learning approaches of that time, as well as typologies and evaluation metrics. Their survey remained the main reference until the introduction of neural network-based systems, recently reviewed by Yadav et al. [200] and Li et al. [116]. The latest NER survey to date is the one by Nazar et al. [131], which focuses specifically on generic domains and on relation extraction. In the second group, Leaman et al. [111] and Campos et al. [29] presented surveys of advances in biomedical named entity recognition, while Lei et al. [114] considered the same domain in Chinese. Shaalan focused on general NER in Arabic [175], and surveys exist for Indian languages [142]. Recently, Georgescu et al. [68] focused on NER aspects related to the cybersecurity domain. Turning our attention to digital humanities, Sporleder [177] and Piotrowski [147] provided general overviews of NLP processing for cultural heritage domains, considering institutional, documentary and technical aspects. To the best of our knowledge, this is the first survey on the application of NER to historical documents.

1 E.g. 'named entity recognition', 'nerc', 'named entity processing', 'historical documents', 'old documents' over https://scholar.google.com and https://www.semanticscholar.org/

The primary target audiences are researchers and practitioners in the fields of natural language processing and digital humanities, as well as humanities scholars interested in knowing and applying NER on historical documents. Since the focus is on adapting NER to historical documents and not on NER techniques themselves, this study assumes a basic knowledge of NER principles and techniques; however, it will provide information and guidance as needed. We use the terms 'historical NER' and 'modern NER' to refer to work and applications which focus on, respectively, historical and non-historical (as we define them) materials.

3 BACKGROUND

Before delving into NER for historical documents, this section provides a generic introduction to named entity processing and modern NER (Sections 3.1 and 3.2), to the types of resources required (Section 3.3), and to the main principles underlying NER techniques (Section 3.4).

3.1 NE processing in general

As of today, named entity tasks correspond to text processing steps of increasing levels of complexity, defined as follows:

(1) recognition and classification – the detection of named entities, i.e. elements in texts which act as a rigid designator for a referent, and their categorisation according to a set of predefined semantic categories;
(2) disambiguation/linking – the linking of named entity mentions to a unique reference in a knowledge base; and
(3) relation extraction – the discovery of relations between named entities.
First introduced in 1995 during the 6th Message Understanding Conference [75], the task of NE recognition and classification (task 1 above) quickly broadened and became more complex, with the extension and refinement of typologies,2 the diversification of the languages taken into account, and the expansion of the linguistic scope with, alongside proper names, the consideration of pronouns and nominal phrases as candidate lexical units (especially during the ACE program [45]). Later on, as recognition and classification were reaching satisfying performances, attention shifted to finer-grained processing, with metonymy recognition [123] and fine-grained classification [57, 122], and to the next logical step, namely entity resolution or disambiguation (task 2 above, not covered in this survey). Besides the general domain of clean and well-written news wire texts, NE processing is also applied to specific domains, particularly bio-medical [73, 102], and to noisier inputs such as speech transcriptions [66] and tweets [148, 159]. In recent years, one of the major developments of NE processing is its application to historical material.

Importantly, and although the question of the definition of named entities is not under focus here, we shall specify that we adopt in this regard the position of Nadeau et al. [130], for which "the word 'Named' aims to restrict [Named Entities] to only those entities for which one or many rigid designators, as defined by S. Kripke, stands for the referent". Concretely speaking, named entities correspond to different types of lexical units, mostly proper names and definite descriptions, which, in a given discourse and application context, autonomously refer to a predefined set of entities of interest. There is no strict definition of named entities, but only a set of linguistic and application-related criteria which, eventually, compose a heterogeneous set of units.3

Finally, let us mention two NE-related specific research directions: temporal information processing and geoparsing. This survey does not consider work related to temporal analysis and, when relevant, occasionally mentions some related to geotagging.

2 See e.g. the overviews of Nadeau et al. [130, pp. 3-4] and Ehrmann et al. [51].
3 See Ehrmann [48, pp. 81-188] for an in-depth discussion of NE definition.

3.2 NER in a nutshell

3.2.1 A sequence labelling task. Named entity recognition and classification is defined as a sequence labelling task where, given a sequence of tokens, a system seeks to assign labels (NE classes) to this sequence. The objective for a system is to observe, in a set of labelled examples, the word-label correspondences and their most distinctive features in order to learn identification and classification patterns which can then be used to infer labels for new, unseen sequences of tokens. This excerpt from the CoNLL-03 English test dataset [190] illustrates a training example (or the predictions a system should output):

(1) [LOC Switzerland] stands accused by Senator [PER Alfonse D'Amato], chairman of the powerful [ORG U.S. Senate Banking Committee], of agreeing to give money to [LOC Poland] (...)
Such input is often represented with the IOB tagging scheme, where each token is marked as being at the beginning (B), inside (I) or outside (O) of an entity of a certain class [155]. Table 2 represents the above example in IOB format, from which systems try to extract features to learn NER models.

Table 2. Illustration of the IOB tagging scheme (example 1).

Tokens (X) | NER label (Y) | POS | Chunk
Switzerland | B-LOC | NNP | I-NP
stands | O | VBZ | I-VP
accused | O | VBN | I-VP
by | O | IN | I-PP
Senator | O | NNP | I-NP
Alfonse | B-PER | NNP | I-NP
D'Amato | I-PER | NNP | I-NP
... | ... | ... | ...

3.2.2 Feature space. NER systems' input corresponds to a linear representation of text as a sequence of characters, usually processed as a sequence of words and sentences. This input is enriched with features or 'clues' a system consumes in order to learn (or generalise) a model. Typical NER features may be observed at three levels: word, close context or sentence, and document. At the morphological level, features include e.g. the word itself, its length, whether it is (all) capitalised or not, whether it contains specific word patterns or specific affixes (e.g. the suffixes -vitch or -sson for person names in Russian and Swedish), its base form, its part of speech (POS), and whether it is present in a predefined list. At the contextual level, features reflect the presence or absence of surrounding 'trigger words' (or combinations thereof, e.g. Senator and to preceding a person or location name, Committee ending an organisation name), or of surrounding NE labels. Finally, at the document level, features correspond to e.g. the position of the mention in the document or paragraph, the occurrence of other entities in the document, or the document metadata. These features can be absent or ambiguous, and none of them is systematically reliable; it is therefore necessary to combine them, and this is where statistical models are helpful. Features are observed in positive and negative examples, and are usually also encoded according to the IOB scheme (e.g. the part-of-speech and chunk annotation columns in Table 2). In traditional, feature-based machine learning, features are specified by the developer (feature engineering), while in deep learning they are learned by the system itself (feature learning) and go beyond those specified above.

3.2.3 NER evaluation. Systems are evaluated in terms of precision (P), recall (R) and F-measure (F-score, the harmonic mean of P and R). Over the years, different scoring procedures and measures were defined in order to take into account various phenomena such as partial match or incorrect type but correct mention, or to assign different weights to various entity and/or error types. These fine-grained evaluation metrics allow for a better understanding of a system's performance and for tailoring the evaluation to what is relevant for an application. Examples include the (mostly abandoned) ACE 'entity detection and recognition value' (EDR), the slot error rate (SER) or, increasingly, the exact vs. fuzzy match settings where entity mention boundaries need to correspond exactly vs. to overlap with the reference. We refer the reader to [130, pp. 12-15], [116, pp. 3-4] and [136, chapter 6]. This survey reports systems' performances in terms of P, R and F-score.
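To make the exact-match setting concrete, the following minimal sketch decodes entity spans from IOB label sequences and computes entity-level P, R and F-score; it illustrates the principle only and is not the official CoNLL or HIPE scorer (all function names are ours).

```python
def iob_to_spans(labels):
    """Decode an IOB label sequence into (start, end, type) entity spans."""
    spans, start, etype = [], None, None
    for i, label in enumerate(labels + ["O"]):  # sentinel flushes last span
        if label.startswith("B-") or label == "O" or \
           (label.startswith("I-") and label[2:] != etype):
            if start is not None:
                spans.append((start, i, etype))
                start, etype = None, None
        if label.startswith("B-"):
            start, etype = i, label[2:]
        elif label.startswith("I-") and start is None:  # tolerate I- opening
            start, etype = i, label[2:]
    return spans

def entity_prf(gold, pred):
    """Entity-level precision/recall/F1 with exact boundary and type match."""
    gold_set, pred_set = set(iob_to_spans(gold)), set(iob_to_spans(pred))
    tp = len(gold_set & pred_set)
    p = tp / len(pred_set) if pred_set else 0.0
    r = tp / len(gold_set) if gold_set else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

gold = ["B-LOC", "O", "O", "O", "O", "B-PER", "I-PER"]
pred = ["B-LOC", "O", "O", "O", "B-PER", "I-PER", "I-PER"]
print(entity_prf(gold, pred))  # (0.5, 0.5, 0.5)
```

In this toy example, the PER mention is predicted with a boundary error, so under exact match it counts as both a false positive and a false negative; a fuzzy (overlap-based) setting would instead credit it as correct.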
3.3 NER resource types

Resources are essential when developing NER systems. Four main types of resources may be distinguished, each playing a specific role.

3.3.1 Typologies. Typologies define a semantic framework for the entities under consideration. They correspond to a formalised and structured description of the semantic categories to consider (the objects of the world which are of interest), along with a definition of their scope (their realisation in texts). There exist different typologies, which can be multi-purpose or domain-specific, and with various degrees of hierarchisation. Most of them are defined and published as part of evaluation campaigns, with no tradition of releasing typologies as such outside this context. Typologies form the basis of annotation guidelines, which make explicit the rules to follow when manually annotating a corpus and are crucial for the quality of the resulting material.

3.3.2 Lexicons and knowledge bases. Next, lexicons and knowledge bases provide information about named entities which may be used by systems for the purposes of recognition, classification and disambiguation. This type of resource has evolved significantly over the last decades, as a result of the increased complexity of NE-related tasks and of technological progress made in terms of knowledge representation. Information about named entities can be of a lexical nature, relating to the textual units making up named entities, or of an encyclopædic nature, concerning their referents. The first case corresponds to simple lists named lexica or 'gazetteers',4 which encode entity names, used in look-up procedures, and trigger words, used as features to guess names in texts. The second case corresponds to knowledge bases which encode various non-linguistic information about entities (e.g. date of birth/death, alma mater, title, function), used mainly for entity linking (Wikipedia and DBpedia [113] being amongst the best-known examples). With the advent of neural language models, the fate of explicit lexical information stored in lexica might have seemed sealed; however, gazetteer information still proves useful when incorporated as features concatenated to pre-trained embeddings [37, 89], confirming that NER remains a knowledge-intensive task [157].

3.3.3 Word embeddings and language models. Word embeddings are low-dimensional, dense vectors which represent the meaning of words and are learned from word distribution in running texts. Stemming from the distributional hypothesis, they are part of the representation learning paradigm, whose objective is to equip machine learning algorithms with generic and efficient data representations [16]. Their key advantage is that they can be learned in a self-supervised fashion, i.e. from unlabelled data, enabling the transition from feature engineering to feature learning. The principle of learning and using distributional word representations for different tasks was already present in [13, 37, 193], but it is with the publication of word2vec, a software package which provided an efficient way to learn word embeddings from large corpora [126], that embeddings started to become a standard component of modern NLP systems, including NER.

4 A term initially devoted to toponyms, afterwards extended to any NE type.
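By way of illustration, such static embeddings can be learned from any unlabelled corpus (e.g. OCRed historical newspapers) with an off-the-shelf library such as gensim. The snippet below is a minimal sketch on toy data; the subword-based handling of out-of-vocabulary words it shows at the end is discussed next.

```python
from gensim.models import Word2Vec, FastText

# Tokenised sentences from an unlabelled corpus (placeholder toy data;
# in practice, millions of sentences would be used).
sentences = [
    ["the", "senator", "visited", "bern", "yesterday"],
    ["the", "committee", "met", "in", "geneva"],
]

# Static word-level embeddings: one fixed vector per vocabulary word.
w2v = Word2Vec(sentences, vector_size=100, window=5, min_count=1, epochs=10)
print(w2v.wv["bern"].shape)  # (100,)

# Subword-aware embeddings: fastText composes word vectors from character
# n-grams, so it can produce a vector even for out-of-vocabulary words,
# e.g. OCR-corrupted forms such as 'bcrn'.
ft = FastText(sentences, vector_size=100, window=5, min_count=1, epochs=10)
print(ft.wv["bcrn"].shape)  # (100,) despite the word never being seen
```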
Since then, much effort has been devoted to developing effective means of learning word representations, first moving from words to sub-words and characters, and then from words to words-in-context with neural language models. The first generation of 'traditional' embeddings corresponds to static word embeddings, where a single representation is learned for each word independently of its context (at the type level). Common algorithms for such context-independent word embeddings include Google word2vec [126], Stanford GloVe [143] and SENNA [37]. The main drawbacks of such embeddings are their poor modelling of ambiguous words (embeddings are static) and their inability to handle out-of-vocabulary (OOV) words, i.e. words not present in the training corpus and for which there is no embedding. The usage of character-based word embeddings, i.e. word representations based on a combination of their character representations, can help process OOV words and make better use of morphological information. Such representations can be learned in a word2vec fashion, as with fastText [21], or via CNN- or RNN-based architectures (see Section 3.4 for a presentation of the types of networks). However, even enriched with sub-word information, traditional embeddings remain ignorant of contextual information.

This shortcoming is addressed by a new generation of approaches which take language modelling as their learning objective, i.e. the task of computing the probability distribution of the next word (or character) given the sequence of previous words (or characters) [17]. By taking into account the entire input sequence, such approaches can learn deeper representations which capture many facets of language, including syntax and semantics, and are valid for various linguistic contexts (at the token level). They generate powerful language models (LMs) which can be used for downstream tasks and from which contextual embeddings can be derived. These LMs can be at the word level (e.g. ELMo [144], ULMFiT [88], BERT [43] and GPT [153]), or character-based, such as the contextual string embeddings proposed by Akbik et al. [4] (a.k.a. flair embeddings). Overall, alongside static character-based word and word embeddings, character-level and word-level LM embeddings are pushing the frontiers in NLP and are becoming key elements of NER systems, be it for contemporary or historical material.

3.3.4 Corpora. Finally, a last type of resource essential for developing NER systems is labelled documents and, to some extent, unlabelled textual data. Labelled corpora illustrate an objective and are used either as a learning base or as a point of reference for evaluation purposes. Unlabelled textual material is necessary to acquire embeddings and language models.

3.4 NER methods

Similarly to other NLP tasks, NER systems are developed according to three standard families of algorithms, namely rule-based, feature-based (traditional machine learning) and neural-based (deep learning).

3.4.1 Rule-based approaches. Early NER methods in the mid-1990s were essentially rule-based. Such approaches rely on rules manually crafted by a developer (or linguist) on the basis of regularities observed in the data. Rules manipulate language as a sequence of symbols and interpret associated information. Organised in what makes up a grammar, they often rely on a series of linguistic pre-processing steps (sentence splitting, tokenization, morpho-syntactic tagging), require external resources storing language information (e.g. trigger words in gazetteers), and are executed using transducers. Such systems have the advantages of not requiring training data and of being easily interpretable, but need time and expertise for their design.
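As a toy illustration of this family of approaches (ours, not drawn from any of the systems reviewed here), the following rule combines a small trigger-word list with a capitalisation pattern to spot person mentions:

```python
import re

# Toy gazetteer of trigger words announcing a person name.
PERSON_TRIGGERS = r"(?:Senator|President|Dr\.|Mr\.|Mrs\.)"

# Rule: a trigger word followed by one or two capitalised tokens is tagged
# as a Person mention (the trigger itself is excluded from the span).
person_rule = re.compile(
    PERSON_TRIGGERS + r"\s+((?:[A-Z][\w'\-]+)(?:\s+[A-Z][\w'\-]+)?)"
)

text = "Switzerland stands accused by Senator Alfonse D'Amato."
for match in person_rule.finditer(text):
    print("PER:", match.group(1))  # -> PER: Alfonse D'Amato
```

A real grammar would chain many such rules over pre-tagged input and resolve conflicts between them, which is precisely where the required time and expertise lie.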
3.4.2 Machine-learning based approaches. Very popular until the late 1990s, rule-based approaches were superseded by traditional machine learning approaches when large annotated corpora became available and allowed the learning of statistical models in a supervised, semi-supervised, and later unsupervised fashion. Traditional, feature-based machine learning algorithms learn inductively from data on the basis of manually selected features. In supervised NER, they include support vector machines [94], decision trees [185], as well as probabilistic sequence labelling approaches with generative models such as hidden Markov models [19] and discriminative ones such as maximum entropy models [15] and linear-chain conditional random fields (CRFs) [109]. Thanks to their capacity to take into account the neighbouring tokens, CRFs proved particularly well-suited to NER tagging and became the standard for feature-based NER systems.
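The following is a minimal sketch of a feature-based CRF tagger, here using the sklearn-crfsuite library; the hand-crafted features follow Section 3.2.2, and the hyperparameter values are illustrative only.

```python
import sklearn_crfsuite

def token_features(sent, i):
    """Hand-crafted features for one token (cf. Section 3.2.2)."""
    word = sent[i]
    return {
        "word.lower": word.lower(),
        "word.istitle": word.istitle(),   # capitalisation clue
        "word.isupper": word.isupper(),
        "suffix3": word[-3:],             # morphological clue
        "prev.word": sent[i - 1].lower() if i > 0 else "<BOS>",
        "next.word": sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>",
    }

# Toy training data: sentences as token lists, labels in IOB format.
train_sents = [["Switzerland", "stands", "accused", "by", "Senator",
                "Alfonse", "D'Amato"]]
train_labels = [["B-LOC", "O", "O", "O", "O", "B-PER", "I-PER"]]

X = [[token_features(s, i) for i in range(len(s))] for s in train_sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=100)
crf.fit(X, train_labels)
print(crf.predict(X)[0])
```

In a real system, the feature dictionaries would also include gazetteer membership, POS tags and word shapes, extracted over a full training corpus rather than a single sentence.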
3.4.3 Deep learning approaches. Finally, the latest research on NER is largely (if not exclusively) dominated by deep learning (DL). Deep learning systems correspond to artificial neural networks with multiple processing layers which learn representations of data with multiple levels of abstraction [112]. In a nutshell, (deep) neural networks are composed of computational units, which take a vector of input values, multiply it by a weight vector, add a bias, apply a non-linear activation function, and produce a single output value. Such units are organised in layers which compose a network, where each layer receives its input from the previous one and passes it to the next (forward pass), and where parameters that minimise a loss function are learned with gradient descent (backward pass). The key advantage of neural networks is their capacity to automatically learn input representations instead of relying on manually elaborated features, and very deep networks (with many hidden layers) are extremely powerful in this regard. Deep learning architectures for sequence labelling have undergone rapid change over the last few years. These developments are a function of two decisive aspects for successful deep learning-based NER: at the architecture level, the capacity of a network to efficiently manage context, and, at the input representation level, the capacity to benefit from or learn powerful embeddings or language models. In what follows, we briefly review the main deep learning architectures for modern NER and refer the reader to Li et al. [116] for more details.

Motivated by the desire to avoid task-specific engineering as much as possible, Collobert et al. [37] pioneered the use of neural nets for four standard NLP tasks (including NER) with convolutional neural networks (CNN) that made use of trained type-level word embeddings and were learned in an end-to-end fashion. Their unified architecture SENNA5 reached very competitive results for NER (89.86% F-score on the CoNLL-03 English corpus) and near state-of-the-art results for the other tasks. Following Collobert's work, developments focused on architectures capable of keeping information of the whole sequence throughout hidden layers instead of relying on fixed-length windows. These include recurrent neural networks (RNN), either simple [59] or bi-directional [170] (where input is processed from right to left and from left to right), and their more complex variants, long short-term memory networks (LSTM) [86] and gated recurrent units (GRU) [34], which mitigate the loss of distant information often observed in RNNs. Huang et al. [89] were among the first to apply a bidirectional LSTM (BiLSTM) network with a CRF decoder to sequence labelling, obtaining 90.1% F-score on the CoNLL-03 English NER dataset. Soon, BiLSTM networks became the de facto standard for context-dependent sequence labelling, giving rise to a body of work including Lample et al. [110], Chiu et al. [110], and Ma et al. [121] (to name but a few). Besides making use of bidirectional variants of RNNs, these works also experiment with various input representations, in most cases combining learned character-based representations with pre-trained word embeddings. Character information has proven useful for inferring information for unseen words and for learning morphological patterns, as demonstrated by the 91.2% F-score of Ma et al. [121] on CoNLL-03, and the systematically better results of Lample et al. [110] on the same dataset when using character information. A more recent study by Taillé et al. [186] confirms the role of sub-word representations for unseen entities.

The latest far-reaching innovation in the DL architecture menagerie corresponds to self-attention networks, or transformers [196], a new type of simple network which eliminates recurrence and convolutions and is based solely on the attention mechanism. Transformers allow for keeping a kind of global memory of the previous hidden states from which the model can choose what to retrieve (attention), and therefore use relevant information from large contexts. They are mostly trained with a language modelling objective and are typically organised in transformer blocks, which can be stacked and used as encoders and decoders. Major pre-trained transformer architectures include the Generative Pre-trained Transformer (GPT, a left-to-right architecture) [153] and the Bidirectional Encoder Representations from Transformers (BERT, a bidirectional architecture) [43], which achieves 92.8% NER F-score on CoNLL-03. More recently, Yamada et al. [201] proposed an entity-aware self-attention architecture which achieved 94.3% F-score on the same dataset. Transformer-based architectures are the focus of extensive research and many model variants were proposed, of which Tay et al. [187] propose an overview.

Overall, two points should be noted. First, beyond the race for the leader board (based on the fairly clean English CoNLL-03 dataset), pre-trained embeddings and language models play a crucial role and are becoming a new paradigm in neural NLP and NER (the 'NLP's ImageNet moment' [167]). Second, powerful language models are also paving the way for transfer learning, a method particularly useful with low-resource languages and out-of-domain contexts, as is the case with challenging, historical texts.

5 'Semantic/syntactic Extraction using a Neural Network Architecture'.
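As a concrete illustration of this fine-tuning paradigm, the sketch below feeds one toy labelled sentence through a pre-trained transformer for token classification, using the Hugging Face transformers library; the model name, label set and data are illustrative placeholders, and a real run would iterate over a full corpus with an optimiser.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC", "B-ORG", "I-ORG"]
model_name = "bert-base-cased"  # any pre-trained encoder would do

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name, num_labels=len(labels))

# One toy training example; real fine-tuning loops over a labelled corpus.
words = ["Switzerland", "stands", "accused", "by", "Senator",
         "Alfonse", "D'Amato"]
word_labels = [3, 0, 0, 0, 0, 1, 2]  # indices into `labels`

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# Align word-level labels with subword tokens; special tokens get -100
# (ignored by the loss). Repeating a word's label on all of its subwords
# is a common simplification.
aligned = [-100 if wid is None else word_labels[wid]
           for wid in enc.word_ids(0)]
enc["labels"] = torch.tensor([aligned])

outputs = model(**enc)       # forward pass returns loss and logits
outputs.loss.backward()      # one gradient step of fine-tuning
print(outputs.logits.shape)  # (1, sequence_length, num_labels)
```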
4 CHALLENGES

Named entity recognition on historical documents faces four main challenges for which systems developed on contemporary datasets are often ill-equipped. Those challenges are intrinsic to the historical setting, like time evolution and types of documents, and endemic to the text acquisition process, like OCR noise. This translates into a variable and sparse feature space, a situation compounded by the lack of resources. This section successively considers the challenges of document type and domain variety, noisy input, dynamics of language, and lack of resources.

4.1 The (historical) variety space

First, NER on historical texts corresponds to a wide variety of settings, with documents of different types (e.g. administrative documents, media archives, literary works, documentation of archival sites or art collections, correspondences, secondary literature), of different natures (e.g. articles, letters, declarations, memoirs, wires, reports), and in different languages, which, moreover, span different time periods and encompass various domains and countless topics. The objective here is not to inventory all historical document types, domains and topics, but to underline the sheer variety of settings which, borrowing an expression from B. Plank [149], compose the 'variety space' NLP is confronted with, intensified in the present case by the time dimension.6

Two comments should be made in connection with this variety. First, domain shift is a well-known issue for NLP systems in general and for modern NER in particular. While B. Plank [149] and J. Eisenstein [56] investigated what to do about bad and non-standard (or non-canonical) language with NLP in general, Augenstein et al. [8] studied the ability of modern NER systems to generalise over a variety of genres, and Taillé et al. [186] over unseen mentions. Both studies demonstrated a NER transfer gap between different text sources and domains, confirming earlier findings of Vilain et al. [197]. While no studies have (yet) been conducted on the generalisation capacities of NER systems within the realm of historical documents, there are strong grounds to believe that systems are equally impacted when switching domain and/or document type.

Second, this (historical) variety space is all the more challenging as the scope of needs and applications in humanities research is much broader than the one usually addressed in modern NLP. Admittedly, the variety space does not differ much between today's and yesterday's documents (i.e. if we were NLP developers living in the 18C, we would be confronted with more or less the same 'amount' of variety as today); the difference lies, rather, in the interest for all or part of this variety: while NLP developments tend to focus on some well-identified and stable domains/sub-domains (sometimes motivated by commercial opportunities), the (digital) humanities and social sciences research communities are likely interested in the whole spectrum of document types and domains. In brief, if the magnitude of the variety space is more or less similar for contemporary and historical documents, the range of interests and applications in humanities and cultural heritage requires—almost by design—the consideration of an expansive array of domains and document types.

6 Considering that there is no common ground on what constitutes a domain and that the term is overloaded, Plank proposes the concept of 'variety space', defined as an "unknown high-dimensional space, whose dimensions contain (fuzzy) aspects such as language (or dialect), topic or genre, and social factors (age, gender, personality, etc.), amongst others. A domain forms a region in this space, with some members more prototypical than others" [149].
4.2 Noisy input

Next, historical NER faces the challenge of noisy input derived from automatic text acquisition over document facsimiles. Text is acquired via two processes: 1) optical character recognition (OCR) and handwritten text recognition (HTR), which recognise text characters from images of printed and handwritten documents respectively, and 2) optical layout recognition (OLR), which identifies, orders and classifies text regions (e.g. paragraph, column, header). We consider both successively.

4.2.1 Character recognition. The OCR transcription of the newspaper article on the right-hand side of Figure 1 illustrates a typical, mid-level noise, with words perfectly readable (la Belgique), others illegible (pu. s >s « _jnces), and tokenization problems (n'à'pas, le'Conseiller). While this does not really affect human understanding when reading, the same is not true for machines, which face numerous OOV words. Be it by means of OCR or HTR, text acquisition performances can be impacted by several factors, including: a) the quality of the material itself, affected by the poor preservation and/or original state of documents, with e.g. ink bleed-through, stains, faint text, and paper deterioration; b) the quality of the scanning process, with e.g. an inadequate resolution or imaging process leading to frame or border noise, skew, blur and orientation problems; or c) as per printed documents and in the absence of standardisation, the diversity of typographic conventions through time, including e.g. varying fonts, mixed alphabets, but also diverse shorthand, accents and punctuation. These difficulties naturally challenge character recognition algorithms which are, what is more, evolving from one OCR campaign to another, usually conducted at different times by libraries and archives. As a result, not only is the transcription quality below expectations, but the type of noise present in historical machine-readable corpora is also very heterogeneous.

Several studies investigated the impact of OCR noise on downstream NLP tasks. While Lopresti [120] demonstrated the detrimental effect of OCR noise propagation through a typical NLP pipeline on contemporary texts, Van Strien et al. [195] focused on historical material and found a consistent impact of OCR noise on the six NLP tasks they evaluated. If sentence segmentation and dependency parsing bear the brunt of low OCR quality, NER is also affected, with a significant drop of F-score between good and poor OCR (from 87% to 63% for person entities). Focusing specifically on entity processing, Hamdi et al. [79, 80] confronted a BiLSTM-based NER model with OCR outputs of the same text but of different qualities and observed a 30 percentage point loss in F-score when the character error rate increased from 7% to 20%. Finally, in order to assess the impact of noisy entities on NER during the CLEF-HIPE-2020 NE evaluation campaign on historical newspapers (HIPE-2020 for short),7 Ehrmann et al. [53] evaluated systems' performances on various entity noise levels, defined as the length-normalised Levenshtein distance between the OCR surface form of an entity and its manual transcription. They found remarkable performance differences between noisy and non-noisy mentions, and that as little noise as 0.1 already severely hurts systems' ability to predict an entity and may halve their performances.

7 https://impresso.github.io/CLEF-HIPE-2020/
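For concreteness, both the character error rate and the noise level just described derive from the Levenshtein edit distance; the sketch below shows one possible length-normalised formulation (the exact normalisation used by the HIPE scorer may differ).

```python
def levenshtein(a, b):
    """Minimum number of character insertions, deletions and substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def noise_level(ocr_form, gold_form):
    """Length-normalised Levenshtein distance between an OCRed entity
    mention and its manual transcription (0 = clean, 1 = fully garbled)."""
    if not gold_form:
        return 0.0
    return levenshtein(ocr_form, gold_form) / max(len(ocr_form),
                                                  len(gold_form))

print(noise_level("le'Conseiller", "le Conseiller"))  # ≈ 0.08
```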
To sum up, whether focused on a single OCR version of text(s) [195], on different artificially-generated ones [79], or on the noise present in entities themselves [53], these studies clearly demonstrate how challenging OCR noise is for NER systems.

4.2.2 Layout recognition. Besides incorrect character recognition, textual input quality can also be affected by faulty layout recognition. Two problems surface here. The first relates to incorrect page region segmentation, which mixes up text segments and produces, even with correct OCR, totally unsuitable input (e.g. a text line reading across several columns). Progress in OLR algorithms makes this problem rarer, but it is still present in collections processed more than a decade ago. The second has to do with the unusual text segmentation resulting from correct OLR of column-based documents, with very short line segments resulting in numerous hyphenated words (cf. Figure 1). The absence of proper sentence segmentation and word tokenization also affects performances, as demonstrated in HIPE-2020, in particular by Boros et al. [25], Ortiz Suárez et al. [137] and Todorov et al. [191] (see Section 6.3).

Overall, OCR and OLR noise leads to a sparser feature space which greatly affects NER performances. What makes this 'noisiness' particularly challenging is its wide diversity and range: an input can be noisy in many different ways, and be anywhere from slightly to very noisy. Compared to social media, for which Baldwin et al. [10] demonstrated that noise is similar from one medium to another (blog, Twitter, etc.) and is mostly 'NLP-tractable', OCR and OLR noise in historical documents appears as a real moving target.

4.3 Dynamics of language

Another challenge relates to the effects of time and the dynamics of language. As a matter of fact, historical languages exhibit a number of differences with modern ones, which have an impact on the performances of NLP tools in general, and of NER in particular [147].

4.3.1 Historical spelling variations. The first source of difficulty relates to spelling variations across time, due either to the normal course of language evolution or to more prescriptive orthographic reforms. For instance, the 1740 edition of the dictionary of the French Academy (which had 8 editions between 1694 and 1935) introduced changes in the spelling of about one third of the French vocabulary and, in Swedish 19C literary texts, certain letters were systematically used in place of their modern Swedish counterparts [23]. NER can therefore be affected by poor morpho-syntactic tagging over such morphological variety, and by spelling variation of trigger words and of proper names themselves. While the latter are less affected by orthographic reforms, they do vary through time [23].

4.3.2 Naming conventions. Changes in naming conventions, particularly for person names, can also be challenging. Leaving aside the numerous aristocratic and military titles that were used in people's addresses, it was, until recently, quite common to refer to a spouse using the name of her husband (which affects linking more than recognition), and to use now-outdated forms of address, e.g. the French expression sieur. These changes have been studied by Rosset et al. [165], who compared the structure of entity names in historical newspapers vs. in contemporary broadcast news.
Differences 7https://impresso.github.io/CLEF-HIPE-2020/Named Entity Recognition and Classification on Historical Documents: A Survey 13 include the prevalence of the structure title + last name vs.first + last name forPerson in historical newspapers and contemporary broadcast news respectively, and of single-component names vs. multiple-component names for Organisation (idem). Testing several classifiers, the authors also showed that it is possible to predict the period of a document from the structure of its entities, thus confirming the evolution of names over time. For their part, Lin et al. [ 117] studied the generalisation capacities of a state-of-the-art neural NER system on entities with weak name regularity in a modern corpus and concluded that name regularity is critical for supervised NER models to generalise over unseen mentions. 4.3.3 Entity and context drifts. Finally, a further complication comes from the historicity of entities, also known as entity drift, with places, professions, and types of major entities fading and emerging over time. For instance, a large part of profession names, which can be used as clues to recognise persons, has changed from the 19C to the 21C.8This dynamism is still valid today (NEs are an open class) and its characteristics as well as its impact on performances is particularly well documented for social media: Fromreide et al. showed a loss of 10 F-score percentage points between two Twitter corpora sampled two years apart [ 65], and Derczynski et al. systematised the analysis with the W-NUT2017 shared task on novel and emerging entities where, on training and test sets with very little entity overlaps, the maximum F-score was only 40%[42]. Besides confirming some degree of ‘artificiality’ of classical NE corpora where the overlap between mentions in the train and the test sets do not reflect real-life settings, these studies illustrate the poor generalisation capacities of NER systems to unseen mentions due to time evolution. How big and how quick is entity drift in historical corpora? We could not find any quantitative study on this, but a high variability of the global referential frame through time is more than likely. Overall, the dynamics of language represent a multi-faceted challenge where the disturbing factor is not anymore an artificially introduced noise like with OCR and OLR, but the naturally occurring alteration of the signal by the effects of time. Both phenomena result in a sparser feature space, but the dynamics of language appear less elusive and volatile than OCR. Compared to OCR noise, its impact on NER performances is however relatively under-studied, and only a few diachronic evaluations were conducted on historical documents so far. Worth of mention is the evaluation of several NER systems on historical newspaper corpora spanning ca. 200 years, first with the study of Ehrmann et al. [ 50], second on the occasion of the HIPE-2020 shared task [ 53]. Testing the hypothesis of the older the document, the lower the performance, both studies reveal a contrasted picture with non-linear F-score variations over time. If a clear trend of increasing recall over time can be observed in [ 50], further research is needed to distinguish and assess the impact of each of the aforementioned time-related variations. 4.4 Lack of resources Finally, the three previous challenges are compounded by a fourth one, namely a severe lack of resources. 
As mentioned in Section 3.3, the development of NER systems relies on four types of resources—typologies, lexicons, embeddings and corpora—which are of particular importance for the adaptation of NER systems to historical documents.

With respect to typologies, the issue at stake is, not surprisingly, their dependence on time and domain. While mainstream typologies with a few 'universal' classes (e.g. Person, Organisation, Location, and a few others) can for sure be re-used for historical documents, this obviously does not mean that they are perfectly suited to the content or application needs of any particular historical collection. Just as universal entity types cannot be used in all contemporary application contexts, neither can they be systematically applied to all historical documents: only a small part can be reused, and they require adaptation. An example is warships, often mentioned in 19C documents, for which none of the mainstream typologies has an adequate class. To say that typologies need to be adapted is almost a truism, but it is worth mentioning, for it implies that the application of off-the-shelf NER tools—as is often done—is unlikely to capture all entities of interest in a specific collection and, therefore, is likely to penalise subsequent studies.

Besides the (partial) inadequacy of typologies, the lack of annotated corpora severely impedes the development of NER systems for historical documents, for both training and evaluation purposes. While unsupervised domain adaptation approaches are gaining interest [154], most methods still depend on labelled data to train their models. Little training data usually results in inferior performances, as demonstrated—if proof were needed—by Augenstein et al. for NER on contemporary data [8, p. 71], and by Ehrmann et al. on historical newspapers [53, Section 7]. NE-annotated historical corpora exist, but are still rare and scattered over time and domains (cf. Section 5). This paucity also affects systems' evaluation and comparison which, besides the lack of gold standards, is also characterised by fragmented and non-standardised evaluation approaches. The recently organised CLEF-HIPE-2020 shared task on NE processing in multilingual and historical newspapers is a first step towards alleviating this situation [53].

Last but not least, if large quantities of textual data are being produced via digitisation, several factors slow down their dissemination and usage as base material to acquire embeddings and language models. First, textual data is acquired via a myriad of OCR software packages which, despite the definition of standards by libraries and archives, supply quite disparate and heavy-to-process output formats [52, 164]. Second, even when digitised, historical collections are not systematically openly accessible due to copyright restrictions. Despite recent efforts and the growing awareness among cultural institutions of the value of such assets for machine learning purposes [139], these factors still hamper the learning of language representations from large amounts of historical texts.

Far from being unique to historical NER, the lack of resources is a well-known problem in modern NER [51], and more generally in NLP [96].
In the case at hand, the lack of resources is exacerbated by the relative youth of the research field and the comparatively low attention paid to the creation of resources compared to other domains. Moreover, considering how wide the spectrum of domains, languages, document types and time periods to cover is, it is likely that a certain resource sparsity will always remain. Finding ways to mitigate the impact of the lack of resources on system development and performances is thus essential.

Conclusion on challenges. NER on historical documents faces four main challenges, namely the historical variety space, noisy input, dynamics of language, and lack of resources. If none of these challenges is new per se—which does not lessen their difficulty—what makes the situation particularly challenging is their combination, in what could be described as an 'explosive cocktail'. This set of challenges has two main characteristics: first, the prevalence of the time dimension, which not only affects language and OCR quality but also causes domain and entity drifts; and, second, the intensity of the present difficulties, with OCR noise being a real moving target, and domains and (historical) languages being highly heterogeneous. As a result, with feature sparsity adding up to multiple confounding factors, systems' learning capacities are severely affected. NER on historical documents can therefore be cast as a domain and time adaptation problem, where approaches should be robust to non-standard, historical inputs, what is more in a low-resource setting. A first step towards addressing these challenges is to rely on appropriate resources, discussed in the next section.

5 RESOURCES FOR HISTORICAL NER

This section surveys existing resources for historical NER, considering typologies and annotation guidelines, annotated corpora, and language representations (see Section 3.3 for a presentation of NER resource types). Special attention is devoted to how these resources are distributed over languages, domains and time periods, in order to highlight gaps that future efforts should attempt to fill.

5.1 Typologies and annotation guidelines

Typologies and annotation guidelines for modern NER cover primarily the general and bio-medical domains, and the most used ones, such as MUC [76], CoNLL [190], and ACE [45], consist mainly of a few high-level classes with the 'universal' triad Person, Organisation and Location [51]. Although they are used in various contexts, they do not necessarily cover the needs of historical documents. To the best of our knowledge, very few typologies and guidelines designed for historical material have been publicly released so far. Exceptions include the Quaero [165, 166], SoNAR [125] and impresso (used in HIPE-2020) [54] typologies and guidelines, adapted or developed for historical newspapers in French, German, and English. Designing guidelines and effectively annotating NEs in historical documents is not as easy as it sounds, and the peculiarities of historical texts must be taken into account. These include for example OCRed text, with the question of how to determine the boundaries of mentions in gibberish strings, and historical entities, with the existence of various historical statuses of entities through time (e.g. Germany has 8 Wikidata IDs over the 19C and 20C [55, pp. 9-10]).
5.2 Annotated corpora

Annotated corpora correspond to sets of documents manually or semi-automatically tagged with NEs according to a given typology, and are essential for the development and evaluation of NER systems (see Section 3.3). This section inventories NE-annotated historical corpora documented in publications and released under an open license (inventory as of June 2021; the Voices of the Great War corpus [27] is not included, as it was not released under an open license). Their presentation is organised into three broad groups (‘news’, ‘literature(s)’ and ‘other’), within which they appear in alphabetical order. Unless otherwise noted, all corpora consist of OCRed documents.

Let us start with some observations on the general picture. We could inventory 17 corpora, whose salient characteristics are summarised in Table 3. It is worth noting that collecting information about released corpora is far from easy and that our descriptions are therefore not homogeneous. In terms of language coverage, the majority of corpora are monolingual, and less than a third include documents written in two or more languages. Overall, these corpora provide support for eleven currently spoken languages and two dead languages (Coptic and Latin). With respect to corpus size, the number of entities appears as the main proxy, and we distinguish between small (< 10k), medium (10-30k), large (30-100k) and very large corpora (> 100k); for comparison, the CoNLL-03 dataset contains ca. 70k mentions for English and 20k for German [190], while OntoNotes v5.0 contains 194k mentions for English, 130k for Chinese and 34k for Arabic [151]. In the present inventory, very large corpora are rather exceptional; roughly one third of the corpora are small-sized, while the remainder are medium- or large-sized. Next, and not surprisingly, a wide spectrum of domains is represented, from news to literature. This tendency towards domain specialisation is also reflected in typologies with, alongside the ubiquitous triad of Person, Location and Organisation types, a long tail of specific types reflecting the information or application needs of particular domains. Finally, in terms of time periods covered, we observe a high concentration of corpora in the 19C, directly followed by the 20C and 21C, while corpora for previous centuries are either scarce or absent.

Corpus | Doc. type | Time period | Tag set | Lang. | # NEs | Size | License
Quaero Old Press [165] | newspapers | 19C | Quaero | fr | 147,682 | xl | elra
Europeana [132] | newspapers | 19C | per, loc, org | fr, de, nl | 40,801 | l | cc0
De Gasperi [180] | various types | 20C | per, gpe | it | 35,491 | l | cc by-nc-sa
Latin NER [60] | literary texts | 1C bce-2C | per, geo, grp | la | 7,175 | s | gpl v3.0
HIMERA [189] | medical lit. | 19C-21C | custom | en | 8,400 | s | cc by
Venetian references [36] | publications | 19C-21C | custom | multi | 12,879 | m | cc by
Finnish NER [169] | newspapers | 19C-20C | per, loc, org | fi | 26,588 | m | n/a
DROC [106] | novels | 17C-20C | custom | de | 6,013 | s | cc by
Travel writings [178] | travelogues | 19C-20C | loc | en | 2,228 | s | n/a
Czech Hist. NE Corpus [90] | newspapers | 19C | custom | cz | 4,017 | s | cc by-nc-sa
LitBank [12] | novels | 19C-20C | ace (w/o wea) | en | 14,000 | m | cc by-sa
BIOfid [2] | publications | 18C-20C | extended GermEval | de | 33,545 | l | gpl v3.0
HIPE [55] | newspapers | 18C-21C | impresso | de, en, fr | 19,848 | m | cc by-nc-sa
BDCamões [74] | literary texts | 16C-21C | custom | pt | 144,600 | xl | cc by-nc-nd
Coptic Scriptorium corpora | literary texts | 3C-5C | custom | cop | 88,068 | l | cc by
GeoNER [104] | literary texts | 16C-17C | geo | fr | 264 | s | lgpl-lr
NewsEye [81] | newspapers | 19C-20C | impresso-comp. | de, fr, fi, sv | 30,580 | l | cc by

Table 3. Overview of reviewed NE-annotated historical corpora (ordered by publication year).
5.2.1 News. The first group brings together corpora built from historical newspaper collections. With corpora in seven languages (Czech, Dutch, English, Finnish, French, German and Swedish), news emerges as the best-equipped domain in terms of labelled data availability.

The Czech Historical NE Corpus [91] is a small corpus produced from the year 1872 of the Czech title Posel od Čerchova. Articles are annotated according to six entity types—persons, institutions, artifacts & objects, geographical names, time expressions and ambiguous entities—which, despite being custom, bear substantial similarities with major typologies. The corpus was manually annotated by two annotators with an inter-annotator agreement (IAA) of 0.86 (Cohen’s kappa).

The Europeana NER corpora [132] (https://github.com/EuropeanaNewspapers/ner-corpora) form a large-sized collection of NE-annotated historical newspaper articles in Dutch, French and German, containing primarily 19C materials. These corpora were sampled from the Europeana newspaper collection [133] by randomly selecting 100 pages from all titles for each language, considering only pages with a minimum word-level accuracy of 80%. Three entity types were considered (person, location, organisation), yet no IAA for the annotations is reported. Instead, the quality and usefulness of these annotated corpora were assessed by training and evaluating the Stanford CRF NER classifier (see Section 3.4.2).

The Finnish NER corpus [169] (https://digi.kansalliskirjasto.fi/opendata/submit, Digitalia (2017-2019) package) is composed of a selection of pages from journals and newspapers published between 1836 and 1918 and digitised by the National Library of Finland. The OCR of this medium-sized corpus was manually corrected by librarians, and NE annotations were made manually for half of the corpus and semi-automatically for the other half (via the manual correction of the output of a Stanford NER system trained on the manually corrected subset). Overall, the annotations show a good IAA of 0.8 (Cohen’s kappa).

The HIPE corpus [55] (version 1.3, https://github.com/impresso/CLEF-HIPE-2020/tree/master/data) is a medium-sized, historical news corpus in French, German and English, created as part of HIPE-2020. It consists of newspaper articles sampled from Swiss, Luxembourgish and American newspaper collections covering a time span of ca. 200 years (1798-2018). The OCR quality of the corpus corresponds to a real-life setting and varies depending on the digitisation time and preservation state of the original documents. The corpus was annotated following the impresso guidelines [54], which are based on and retro-compatible with the Quaero guidelines [166]. The annotation tag set comprises 5 coarse-grained and 23 fine-grained entity types, and includes entity components as well as nested entities. Wrongly OCRed entity surface forms are manually corrected and entities are linked towards Wikidata. NERC and EL annotations reached an average IAA across languages of 0.8 (Krippendorff’s alpha).
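Agreement figures such as those above are chance-corrected coefficients computed over annotators’ parallel label sequences. As a purely illustrative sketch (assuming token-level BIO labels and scikit-learn; the surveyed corpora may compute agreement over different units, such as mentions), Cohen’s kappa can be obtained as follows:

```python
# Minimal sketch: inter-annotator agreement (Cohen's kappa) over
# token-level NE labels from two annotators, using scikit-learn.
from sklearn.metrics import cohen_kappa_score

# Hypothetical token-level annotations of the same text span.
annotator_a = ["B-PER", "I-PER", "O", "B-LOC", "O", "O", "B-ORG"]
annotator_b = ["B-PER", "I-PER", "O", "B-LOC", "O", "B-ORG", "B-ORG"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")
```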
The NewsEye dataset [81] (version 1.0, https://doi.org/10.5281/zenodo.4573313) is a large-sized corpus composed of articles extracted from newspapers published between the mid 19C and the mid 20C in French, German, Finnish, and Swedish. Four entity types were considered (person, location, organisation and human product) and annotated according to guidelines (https://zenodo.org/record/4574199) similar to the impresso ones; entities are linked towards Wikidata and articles are further annotated with authors’ stances. The annotation reaches high IAAs, exceeding 0.8 for Finnish and 0.9 for German, French and Swedish (Cohen’s kappa).

The Quaero Old Press Extended NE corpus [165] (http://catalog.elra.info/en-us/repository/browse/ELRA-W0073/) is a very large annotated corpus composed of 295 pages sampled from French newspapers of December 1890. The OCR quality is rather good, with character and word error rates of 5% and 36.5%, respectively. Annotators were asked to transcribe wrongly OCRed entity surface forms—similarly to what was done for the HIPE corpus—which makes both corpora suitable for checking the robustness of NER systems to OCR noise. The inter-annotator agreement on this corpus reaches 0.82 (Cohen’s kappa).

5.2.2 Literature(s). The second group of corpora relates to literature and is more heterogeneous in terms of domains and document types, ranging from literary texts to scholarly publications.

To begin with, two resources consist of ancient literary texts. First, the Latin NER corpus [60] (https://github.com/alexerdmann/Herodotos-Project-Latin-NER-Tagger-Annotation) comprises ancient literary material sampled from three texts representative of different literary genres (prose, letters and elegiac poetry) and spanning three centuries. The annotation tag set covers persons, geographical place names and group names (e.g. ‘Haeduos’, a Gallic tribe). Next, the Coptic Scriptorium corpus (https://github.com/copticscriptorium/corpora) is a large-sized collection of literary works written in Coptic, the language of Hellenistic-era Egypt (3C-5C CE), and belonging to multiple genres (hagiographic texts, letters, sermons, martyrdoms and the Bible). Besides lemma and POS tags, this corpus also contains (named and non-named) entity annotations, with links towards Wikipedia. In addition to persons, places and organisations, the entity types include abstract entities (e.g. ‘humility’), animals, events, objects (e.g. ‘bottles’), substances (e.g. ‘water’) and time expressions. Entity annotations were produced automatically (resulting in 11k named entities and 6k linked entities), a subset of which was manually corrected (2.4k named entities and 1.5k linked entities).

Then, several corpora were designed to support computational literary analysis. This is the case of the BDCamões Collection of Portuguese Literary Documents [74] (https://portulanclarin.net/), a very large annotated corpus composed of 208 OCRed texts (4 million words) representative of 14 literary genres and covering five centuries of Portuguese literature (16C-21C). Due to the large time span covered, texts adhere to different orthographic conventions. Named entity annotations correspond to locations, organisations, works, events and miscellaneous entities, and were automatically produced (silver annotations). They constitute only one of the many layers of linguistic annotations of this corpus, alongside POS tags, syntactic analysis and semantic roles. Next, the LitBank dataset [12] (https://github.com/dbamman/litbank) is a medium-sized corpus composed of 100 English literary texts published between the mid 19C and the beginning of the 20C.
Entities were annotated following the ACE guidelines—with the sole exception of weapons, which are rarely attested—and include noun phrases as well as nested entities. Finally, the Deutsches ROman Corpus (DROC) [106] is a set of 90 richly annotated fragments of German novels published between 1650 and 1950. The DROC corpus is enriched with character mentions, character co-references, and direct speech occurrences. It features more than 50,000 character mentions, of which only 12% (6,013) contain proper names and thus correspond to traditional person entity mentions (the others correspond to pronouns or appellatives).

Next, two of the surveyed corpora in this group focus specifically on place names. First, Travel writings [178] (https://github.com/dhfbk/Detection-of-place-names-in-historical-travel-writings) is a small corpus of 38 English travelogues printed between 1850 and 1940. Its tag set consists of a single type (Location), which encompasses geographical, political and functional locations, thus corresponding to ACE’s gpe, loc and fac entity types altogether. Second, the GeoNER corpus [104] (https://github.com/PhilippeGambette/GeoNER-corpus) is a very small corpus consisting of three 16C-17C French literary texts by Racine, Molière and Marguerite de Valois. Each annotated text is available in its original version, as well as with automatic and manual historical spelling normalisation. Despite its limited size, this corpus can be a valuable resource for researchers investigating the effects of historical normalisation on NER.

Finally, moving from literature to scholarly literature, three corpora should be mentioned. First, BIOfid [2] (https://github.com/FID-Biodiversity/BIOfid) is a large NE-annotated corpus composed of ca. 1,000 articles sampled from German books and scholarly journals in the domain of biodiversity, published between the 18C and the 20C. The annotation guidelines used for this corpus build upon those used for the GermEval dataset [18], with the addition of time expressions and taxonomies (Taxon), i.e. systematic classifications of organisms by their characteristics (e.g. “northern giant mouse lemur”). Second, the HIstory of Medicine CoRpus Annotation (HIMERA) [189] (http://www.nactem.ac.uk/himera/) is a small-sized corpus in the domain of medical history, consisting of journal articles and medical reports published between 1840 and 2013. This corpus is annotated with NEs according to a custom typology comprising, for example, medical conditions, symptoms, or biological entities. While all annotations were performed on manually corrected OCR output, the annotation of certain types was carried out in a semi-automatic fashion. Globally, the annotation reaches good IAAs of 0.8 and 0.86 for exact and relaxed match respectively (F-score). Third, the Venetian References corpus [36] (https://github.com/dhlab-epfl/LinkedBooksReferenceParsing) contains about 40,000 annotated bibliographic references from a corpus of books and journal articles on the history of Venice (19C-21C), in Italian, English, French, German, Spanish and Latin. Components of references (e.g. author, title, publication date, etc.) are annotated according to a custom tag set of 26 tags, and references themselves are classified according to the type of work they refer to (e.g. primary vs. secondary sources).

5.2.3 Other. We found one corpus in the domain of political writings.
The De Gasperi corpus [192] (https://github.com/StefanoMenini/De-Gasperi-s-Corpus) consists of the complete collection of public documents by Alcide De Gasperi, Italy’s Prime Minister in office from 1945 to 1953 and one of the founding fathers of the European Union. This large corpus includes 2,762 documents published between 1901 and 1954 and belonging to a wide variety of genres. It was automatically annotated with parts of speech, lemmas, and person and place names (by means of TextPro [146]). This corpus consists of clean texts extracted from the electronic versions of previously published volumes.

5.3 Language representations

As distributional representations, embeddings and language models need to be trained on large textual corpora in order to be effective. There exist several large-scale, diachronic collections of historical documents, such as the Europeana Newspaper collection [133], the Trove Newspaper corpus [30], the Digi corpus [99], and the impresso public corpus [52] (to mention but a few), which are now used to acquire historical language representations. Given their usefulness in many NLP tasks, embeddings and language models are increasingly shared by researchers, thus constituting a growing and quickly evolving pool of resources that can be used in historical NER. This section inventories existing historical language representations, an overview of which is given in Table 4.

Publication | Type(s) | Model(s) | Language(s) | Training corpus
Hamilton et al. [82] | classic word embeddings | PPMI, SVD, word2vec | de, fr, en, cn | Google Books + COHA
Hengchen et al. [83] | classic word embeddings | word2vec | en, nl, fi, se | newspapers and periodicals
Hengchen et al. [84] | char.-based word & word embeddings | fastText, word2vec | sv | Kubhist 2
Sprugnoli et al. [179] | char.-based word & word embeddings | dependency-based, fastText, GloVe | en | CHAE
Doughman et al. [46] | classic word embeddings | word2vec | ar | Lebanese News Archives
Ehrmann et al. [52, 55] | char.-based word & char.-level LM embeddings | fastText, flair | de, fr, en | impresso corpus
Hosseini et al. [87] | all types | word2vec, fastText, flair, BERT | en | Microsoft British Library corpus
Schweter et al. [172] | character-level LM embeddings | BERT, ELECTRA | de, fr | Europeana Newspaper corpus
Bamman et al. [11] | word-level LM embeddings | BERT | la | various Latin corpora

Table 4. Overview of available word embeddings and LMs trained on historical corpora.

5.3.1 Static embeddings. As for traditional word embeddings, we could inventory two main resources. Sprugnoli et al. [179] have released a collection of pre-trained word and sub-word English embeddings learned from a subset of the Corpus of Historical American English [40], considering 37k texts published between 1860 and 1939 and amounting to about 198 million words. These 300-dimensional embeddings are available for three types of word representations: embeddings based on linear bag-of-words contexts (GloVe [143]), on dependency parse-trees (Levy et al. [115]), and on bags of character n-grams (fastText [21]); links to the published embeddings are given at https://github.com/dhfbk/Histo. Doughman et al. [46] have created Arabic word embeddings from three Lebanese news archives, with materials published between 1933 and 2011 (models as well as evaluation details can be found at https://doi.org/10.5281/zenodo.3538880). Archive-level as well as decade-level embeddings were trained using word2vec with a continuous bag-of-words model. Given the imperfect OCR, hyper-parameter tuning was used to maximise accuracy on a set of analogy tasks.

Another set of traditional word embeddings consists of diachronic or dynamic embeddings, i.e. static embeddings trained on different time bins of a corpus and thereafter aligned according to different strategies (post-hoc alignment after training on the different time bins, or incremental training). Such resources provide a view of words over time and are usually used in diachronic studies such as culturomics and semantic change, but they can also be used to feed neural architectures for other tasks. Some of the pioneers in releasing such material were Hamilton et al. [82], who published a collection of diachronic word embeddings for English, French, German and Chinese (https://nlp.stanford.edu/projects/histwords/), covering roughly the 19C-20C. These were computed from many different corpora using word2vec skip-gram with negative sampling.
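To make the incremental-training strategy concrete, the following sketch trains a skip-gram word2vec model on the earliest time bin and then continues training on later bins, so that the successive vector spaces remain aligned. It assumes gensim and pre-tokenised, one-sentence-per-line files whose names are hypothetical placeholders:

```python
# Minimal sketch of incrementally trained diachronic embeddings:
# word2vec (skip-gram, negative sampling) trained on the first time
# bin, then updated on each later bin and saved per decade.
from gensim.models import Word2Vec

def read_bin(path):
    # Yield pre-tokenised sentences for one time bin (placeholder).
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield line.lower().split()

bins = ["news_1850s.txt", "news_1860s.txt", "news_1870s.txt"]

sentences = list(read_bin(bins[0]))
model = Word2Vec(sentences, vector_size=300, sg=1, negative=5, min_count=10)
model.save("embeddings_1850s.model")

for path in bins[1:]:
    sentences = list(read_bin(path))
    model.build_vocab(sentences, update=True)   # extend the vocabulary
    model.train(sentences, total_examples=len(sentences), epochs=5)
    decade = path.split("_")[1].split(".")[0]
    model.save(f"embeddings_{decade}.model")
```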
Later on, Hengchen et al. [83] released a set of diachronic embeddings of the same type in English, Dutch, Finnish and Swedish, trained on large corpora of 19C-20C newspapers (https://zenodo.org/record/3270648). More recently, Hengchen et al. [84] pursued these efforts with the publication of diachronic word2vec and fastText models trained on a large corpus of Swedish OCRed newspapers (1645-1926) (the Kubhist 2 corpus, 5.5 billion tokens). Thanks to its ability to capture sub-word information, their fastText model allows for retrieving OCR misspellings and spelling variations, and is thus a useful resource for post-OCR correction and historical normalisation.

5.3.2 Contextualised embeddings. Historical character-level LM embeddings are currently available for German, French, and English. For historical German, Schweter et al. [172] have trained contextualised string embeddings (flair) on articles from two titles of the Europeana newspaper collection, the Hamburger Anzeiger (about 741 million tokens, 1888-1945) and the Wiener Zeitung (some 801 million tokens, 1703-1875). The resulting embeddings are part of the Flair library, with the IDs de-historic-ha-X (HHA) and de-historic-wz-X (WZ) respectively. Next, in the context of the HIPE-2020 shared task, fastText word embeddings and flair contextualised string embeddings were made available as auxiliary resources for participants (at files.ifi.uzh.ch/impresso/clef-hipe-2020/ and on the Zenodo platform under DOI 10.5281/zenodo.3706808, under a CC BY-NC 4.0 license; the flair embeddings were also integrated into the Flair framework: https://github.com/flairNLP/flair). They were trained on newspaper materials in French, German and English, and cover roughly the 18C-21C (full details in [55] and [52]). Similarly, Hosseini et al. [87] published a collection of static (word2vec, fastText) and contextualised embeddings (flair) trained on the Microsoft British Library (MBL) corpus. MBL is a large-scale corpus composed of 47,685 OCRed books in English (1760-1900) which cover a wide range of subject areas including philosophy, history, poetry and literature, for a total of approximately 5.1 billion tokens. For each architecture, the authors released models trained either on the whole corpus or on books published before 1850.
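As an illustration of how such released resources can be consumed downstream, the following sketch loads one of the historical Flair embeddings mentioned above and embeds a made-up OCRed German sentence; it assumes the flair package:

```python
# Minimal sketch: embedding historical text with pre-trained historical
# contextualised string embeddings from the Flair library.
from flair.data import Sentence
from flair.embeddings import FlairEmbeddings

# Character-level LM embeddings trained on the Hamburger Anzeiger.
embedding = FlairEmbeddings("de-historic-ha-forward")

sentence = Sentence("Die Sitzung des Stadtraths fand gestern statt .")
embedding.embed(sentence)
for token in sentence:
    print(token.text, token.embedding.shape)
```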
Word-level LM embeddings trained on historical data are available for Latin, French, German and English. Latin BERT (https://github.com/dbamman/latin-bert) is a LM for Latin trained on 640 million tokens spanning 22 centuries. In order to reach a sufficiently large volume of training material, a wide variety of datasets was employed, including the Perseus Digital Library, the Latin Wikipedia (Vicipaedia), and Latin texts of the Internet Archive. Extrinsic evaluation of the model was performed on POS tagging and word sense disambiguation, for which Latin BERT demonstrated state-of-the-art results. For historical German and French, Schweter [171] published BERT and ELECTRA models trained on two subsets of the Europeana newspapers corpus, consisting of 8 and 11 billion tokens for German and French respectively. The German models were evaluated on two historical NE datasets, on which the ELECTRA models outperformed the BERT ones, leading to an overall improvement over the state-of-the-art results reported by Schweter and Baiter [172]. Finally, for 19C English, BERT-based language models trained on the MBL corpus are available in the histLM model collection [87]. One model was trained on the entire corpus, and additional models were created for different time slices, by fine-tuning an existing contemporary model (BERT base uncased), so as to enable the study of linguistic and cultural changes over the 19C.
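The time-slice models just mentioned follow a now-common recipe: continuing the masked-language-modelling pre-training of a contemporary model on historical text. A minimal sketch of this recipe, assuming the Hugging Face transformers and datasets libraries and a hypothetical plain-text file of OCRed material, could look as follows:

```python
# Minimal sketch: domain-adaptive MLM pre-training of a contemporary
# BERT on historical text (file path is a hypothetical placeholder).
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

dataset = load_dataset("text", data_files={"train": "ocr_1800s.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
args = TrainingArguments("bert-historical", num_train_epochs=1,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args, train_dataset=tokenized,
        data_collator=collator).train()
```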
Conclusion on Resources. Resources for historical NER are not numerous but do exist. A few typologies and guidelines adapted to historical OCRed texts have been published. More and more annotated corpora are being released, but the 17 that we could inventory here are far from the 121 inventoried in [51] for modern NE processing. They are to a large extent built from historical newspaper collections, a type of document massively digitised during the last years. If historical newspaper contents lend themselves particularly well to NER, this preponderance could also be taken as an early warning of the risk of reproducing the news bias already observed for contemporary NLP [149]. Besides, NE-annotated historical corpora show a modest degree of multilingualism, and most of them are published under open licenses. As for language representations, historical embeddings and language models are not numerous but are multiplying rapidly.

6 APPROACHES TO HISTORICAL NER

This section provides an overview of existing work on NER for historical documents, organised by type of approach: rule-based, traditional machine learning and deep learning. The emphasis here is more on the implementation and settings of historical NER methods, while strategies to deal with specific challenges—regardless of the method—are presented in Section 7. Since research was almost exclusively done in the context of individual projects, and since there was no shared gold standard until recently, system performances are often not comparable. We therefore report results only when computed on sufficiently large data, and explicitly state when results are comparable. All works deal with OCRed material unless mentioned otherwise. In the absence of obvious thematic or technical grouping criteria, they are presented in order of publication (oldest to newest). Table 5 presents a synthetic view of the reviewed literature.

6.1 Rule-based approaches

As for modern NER, the first NER works dealing with historical documents were mainly symbolic. Rule-based systems do not require training data and are easily interpretable, but need time and expertise for designing the rules. Numerous rule-based systems have been developed for modern NER, and they usually obtain good results on well-formed texts (see Section 3.4.1). Early work performed NER over historical collections using the GATE language technology environment [38], which supports the manual creation of rules and gazetteers. These works do not include formal evaluations but are worth mentioning as early exploration efforts, e.g. the adaptation of rules and gazetteers by Bontcheva et al. [22] to recognise Person, Location, Occupation and Status entity types in 18C English court trials. Among other difficulties, the authors mention historical occupation names not present in gazetteers, orthographic variation (punctuation, spelling, capitalisation), and person name abbreviations.

Thereafter, most systems relied on custom rule sets and made substantial use of gazetteers, with the objective of addressing the domain and language peculiarities of historical documents. Jones et al. [95] designed a rule-based system to extract named entities from the Civil War years (1861-1865) of the American newspaper the Richmond Times Dispatch (on manually segmented and transcribed issues). They focus on 10 entity types, some of them specific to the period and the material at hand, such as warships, military units and regiments. Their system consists of three main phases: gazetteer lookup to extract easily identifiable entities; application of high-precision rules to guess new names; and learning of frequency-based rules (e.g. how often Washington appears as a person rather than a place, and in which context). Best results are obtained for Location and Date, while the identification of Person, Organisation and Newspaper titles is lower. Based on a thorough error analysis, the authors conclude that shorter but historically relevant gazetteers may be better than long ones, and make a plea for the development of comprehensive domain-specific knowledge resources.
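For illustration, the first two phases of such a pipeline (gazetteer lookup, then a high-precision contextual rule) can be sketched in a few lines; the gazetteer entries and the military-title rule below are invented for the example and are not taken from Jones et al.’s actual system:

```python
# Minimal sketch of a rule-based NER pipeline: gazetteer lookup
# followed by a high-precision contextual rule.
import re

GAZETTEER = {"Richmond": "LOC", "Washington": "LOC", "Merrimac": "SHIP"}

# High-precision rule: a capitalised word preceded by a military title
# is tagged as a person (e.g. "Gen. Lee", "Col. Mosby").
TITLE_RULE = re.compile(r"\b(Gen|Col|Capt|Lieut)\.\s+([A-Z][a-z]+)")

def tag(text):
    entities = []
    for token in re.findall(r"[A-Z][a-z]+", text):
        if token in GAZETTEER:
            entities.append((token, GAZETTEER[token]))
    for match in TITLE_RULE.finditer(text):
        entities.append((match.group(2), "PER"))
    return entities

print(tag("Gen. Lee left Richmond; the Merrimac was sighted."))
# [('Richmond', 'LOC'), ('Merrimac', 'SHIP'), ('Lee', 'PER')]
```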
Working on Swedish literary classics from the 19C, Borin et al. [23] designed a system made of multiple modules: a gazetteer lookup and finite-state grammar module to recognise entities, a name similarity module to address lexical variation, and a document-centred module to propagate labels based on documents’ global evidence. They focused on 8 entity types and evaluated the modules’ performances on an incremental basis. On all types together, the best F-measure reaches 89%, and recall is systematically lower than precision in all evaluation iterations (the evaluation setting is partial match). The main sources of error are spelling variations, unknown names, and noisy word segmentation due to hyphenation in the original documents.

Grover et al. [77] focused on two subsets of the Journal of the House of Lords of Great Britain, one from the late 17C and the other from the early 19C, OCRed with different systems and at different times. The OCR quality is erratic, and suffers from numerous spurious quotation marks as well as from the presence of marginalia and of text portions in Latin. An in-house rule-based system, consisting of a set of rules applied incrementally with access to a variety of lexica, is applied to recognise person and place names. Before NE tagging, the system attempts to identify marginalia words and noisy characters in order to ignore them during parsing. The overall performance is evaluated against test sets from each period, which comprise significantly more person than location names. Results are comparable for person names on both the 17C and 19C sets (ca. 75% F-score), but the earliest period shows significantly worse performance for locations (24.1% vs 66.5%). In most configurations, precision is slightly above recall (the evaluation setting is not specified, most likely exact match). An error analysis revealed that character misspellings and segmentation errors (broken NEs) were the main factors impacting performances.

The experiments conducted by Broux et al. [28] are part of an initiative aiming at improving access to texts from the ancient world. Working with a large collection of documentary texts produced between 800 BCE and 800 CE, including all languages and scripts written on any surface (mainly papyrological and epigraphical resources), one of the objectives is to develop and curate onomastic lists and prosopographies of non-royal individuals attested as living during this period (onomastics relates to the study of the history and origin of proper names, and prosopography to the collection and study of information about persons). The authors apply a rule-based system benefiting from a huge onomastic gazetteer covering names, name variants and morphological variants in several ancient languages and scripts. Rules encode various sets of onomastic patterns specific to Greek, Latin and Egyptian (Greek names are ‘simpler’ than the often multiple Roman names, e.g. Gaius Iulius Caesar) and are specifically designed to capture genealogical information. This system is used to speed up the manual NE annotation of texts, which in turn is used for network analysis in order to assist the creation of prosopographies. No formal evaluation is provided.

Fast-forwarding to contemporary times, Kettunen et al. [100] experimented with NER on a collection of Finnish historical newspapers from the late 19C and early 20C. The authors insist on the overall poor quality of the OCR (word-level correctness around 70%-75%), as well as on the fact that they use an existing rule-based system designed for modern Finnish with no adaptation. Not surprisingly, this combination leads to rather low results, with F-scores ranging from 30% to 45% for the 8 targeted entity types (the evaluation setting is exact match). The main sources of errors are bad OCR and multi-word entities.

A recent work by Platas et al. [150] focuses on a set of manually transcribed Medieval Spanish texts (12C-15C) covering various genres such as legal documents, epic poetry, narrative, or drama. Based on the needs of literary scholars and historians, the authors defined a custom entity typology of 8 main types (plus sub-types). It covers traditional types but also more specific ones for the identification of name parts, especially relevant for Medieval Spanish person names, which feature many attributes and complex syntactic structures (Don Alfonso por la gracia de Dios rey de Castiella de Toledo de Leon de Gallizia de Seuilla de Cordoua de Murcia e de Jaen). The system is composed of several modules dedicated to recognising names using rules and/or gazetteers, increasing the coverage using variant generation and matching, and recognising person attributes using dependency parsing. Evaluated on a manually annotated corpus representative of the time periods and genres of the collection, the system reached satisfactory results, with an overall F-score of 77%, ranging from 74% to 87% depending on the entity type (the evaluation setting is exact match).
As usual, recall is lower than precision, but the differences are not high. Although these numbers are lower than what neural-based systems can achieve, this demonstrates the capacities and suitability of a carefully designed rule-based system.

Finally, it is also worth mentioning a series of works on the geoparsing of historical and literary texts. With the aim of analysing the interplay between geographical and fictional landscapes, Moncla et al. [128] experimented with a rule-based system relying on extensive gazetteers to recognise names of streets, houses, bridges, etc. in French Parisian novels from the 19C. With spatial entities featuring a high degree of regularity, the system reached very good results on a relatively small test set (the evaluation settings are not entirely clear). Adapting the existing Edinburgh Geoparser (derived from the system of Grover et al. [77] above) to historical texts, Alex et al. [5] carried out experiments to recognise place names in different types of 19C British historical documents. Besides the impact of OCR errors, the main observations are that it is essential to perform place and person name recognition in tandem in order to better handle homonyms—even when dealing with place names only—and that gazetteers need substantial adaptation, with careful switching on and off of standard vs domain-specific lexica. This system was also applied to a set of historical Edinburgh-specific documents, this time targeting fine-grained location names and considering three types of material: OCRed documents from 19C British novels, manually crowd-corrected OCRed texts from the Project Gutenberg collection, and contemporary (born-digital) texts from Scottish authors [7]. Not surprisingly, place name recognition performs best on contemporary texts (but remains low, with an F-score of 75%), worst on historical OCRed text (F-score 68%), and roughly in-between on crowd-corrected OCRed documents (F-score 72%). Precision scores are similar across the three collections, but recall scores vary considerably. Much research has been done on the geoparsing of cultural heritage material, but it is not further surveyed here.

Conclusion on rule-based approaches. Symbolic approaches were applied to a large variety of document types, domains and time periods (see Table 5 for an overview of their characteristics). In general, rule-based systems are modular and almost systematically include gazetteer lookup, incremental rule application, and variant matching. They have difficulties dealing with noisy and historical input, for which they require normalisation rules and additional linguistic knowledge. The number of works we could inventory, from the beginning of the 2000s until today, confirms the long-standing need for NER on historical documents, as well as the suitability of symbolic approaches, which can be better handled by non-experts. Research nevertheless moved away from such systems in favour of machine learning ones.

6.2 Traditional Machine Learning Approaches

Machine learning algorithms inductively learn statistical models from annotated data on the basis of manually selected features (see Section 3.4.2).
Heavily researched and applied in the 2000s, machine learning-based approaches contributed strong baselines for mainstream NER and were rapidly adopted for NER on historical documents. In this section we review the usage of such traditional, pre-neural machine learning approaches on historical material, first considering works which apply already existing models, and second works which train new ones.

6.2.1 Applying existing models. Early achievements adopted the ‘off-the-shelf’ strategy, with the application of pre-trained NER systems or web services to various historical documents, mainly with the objective of assessing baselines and/or comparing system performances. This is the case of Rodriquez et al. [163], who compared the performances of four NER systems (Stanford CRF classifier, OpenNLP, AlchemyAPI, and OpenCalais) on two English datasets related to WWII: individual Holocaust survivor testimonies from the Wiener Library of London and letters of soldiers from the King’s College archive. Evaluated on a small dataset, the recognition of Person, Location and Organization reached an F-score between 47% and 54% for the testimonies (Stanford CRF being the most accurate), and between 32% and 36% for the letters (OpenCalais performing best). Surprisingly, running the same evaluation on manually corrected OCR did not improve results significantly. Major sources of errors were different ways of naming and metonymy phenomena (e.g. warships named after people), and lack of background knowledge, especially for organisations.

Along the same line, Ehrmann et al. [50] conducted experiments on French historical newspapers on a diachronic basis (covering 200 years) for the types Person and Location, with the objective of investigating whether NER performance degrades when going back in time. Their study includes four systems representative of the major approaches to NER: a rule-based system, a supervised machine learning one (MaxEnt classifier), and two proprietary web services offering NER functionalities (AlchemyAPI and DandelionAPI). They showed that, compared to a baseline on contemporary news, all systems feature degraded performances, both in absolute terms and over time (a maximum of 67.6% F-score for person names for the best system, with exact match). As for time-based observations, precision is quite irregular, with several ups and downs for all systems and both entity types, but recall shows less variability and a slight but regular increase for Person, suggesting that person names are less stable than location names and are therefore better recognised when more recent.

Focusing on the impact of historical language normalisation (in this respect see also Section 7.2), Kogkitsidou et al. [104] also used and benchmarked several systems (rule-based and machine learning) for the recognition of Location names in French literary texts from the 16C and 17C. When applied without any adaptation, the systems feature very diverse performances, from very low (36%) to reasonable (70%) F-scores, with rule-based ones being better at precision, and machine learning ones at recall. Ritze et al. [160] worked on historical records of the English High Court of Admiralty of the 17C and used the Stanford CRF classifier with its default English model to recognise the Person and Location types (others were considered but not evaluated). Given the very specific domain of this corpus, the obtained results were reasonable, with a precision in the 77% range for both types (recall was not reported).

Finally, some works adopt the approach of ensembling systems, i.e. of considering NE predictions not from one but from several recognisers, combined according to various voting strategies. Packer et al. [138] applied three algorithms (dictionary-based, regular-expression-based, and HMM-based) in isolation and in combination for the recognition of person names in various types of English OCRed documents. They observed increased performances (particularly a better P/R balance) with majority-vote ensembling. Won et al. [198] worked on British personal archives from the 16C and 17C and applied five different systems to recognise place names. They too observed that the combination of multiple systems through a majority vote (with a minimum of two and a maximum of three votes) was able to consistently outperform the individual NER systems.
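A minimal sketch of such majority-vote ensembling over aligned token-level predictions (the data is illustrative; the two-vote threshold mirrors the strategy reported by Won et al.):

```python
# Minimal sketch: majority-vote ensembling of token-level NE predictions
# from several recognisers; ties below the threshold fall back to "O".
from collections import Counter

def majority_vote(predictions, min_votes=2):
    # predictions: one label sequence per system, aligned on tokens.
    ensembled = []
    for labels in zip(*predictions):
        label, votes = Counter(labels).most_common(1)[0]
        ensembled.append(label if votes >= min_votes else "O")
    return ensembled

system_a = ["B-LOC", "O", "B-PER"]
system_b = ["B-LOC", "O", "O"]
system_c = ["O", "O", "B-PER"]
print(majority_vote([system_a, system_b, system_c]))
# ['B-LOC', 'O', 'B-PER']
```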
Mere applications of existing systems, these works illustrate the inadequacy of already-trained NER models for historical texts. The performances (and settings) of these baseline studies are extremely diverse, but the following constants are observed: recall is always the most affected, and the Location type is usually the most robust.

Publication | Domain | Document type | Time period | Language(s) | System | Comp.
Rule-based
Bontcheva et al. [22] | legal | court trials | 18C | en-GB | rule-based |
Jones et al. [95] | news | newspapers | mid 19C | en-US | rule-based |
Borin et al. [23] | literature | literary classics | 19C | sv | rule-based |
Grover et al. [77] | state | parliamentary proc. | 17C & 19C | en-GB | rule-based |
Broux and Depauw [28] | state | papyri | 4C-1C bce | egy, el, la | lookup |
Kettunen et al. [100] | news | newspapers | 19C-20C | fi | rule-based |
Alex et al. [5] | state/literature | parl. proc./classics | var | en-scotland | lookup |
Alex et al. [7] | literature | novels | 19C | en-scotland | lookup |
Moncla et al. [128] | literature | novels | 19C | fr | lookup |
Platas et al. [150] | literature | poetry, drama | 12C-15C | es | rule-based |
Traditional machine learning
Nissim et al. [135] | admin | parish registers | 18C-19C | en-scotland | MaxEnt |
Packer et al. [138] | mix | various | - | en | ensemble |
Rodriquez et al. [163] | egodocs | letters & testimonies | WWII | en-GB | several |
Galibert et al. [67] | news | newspapers | 19C | fr | several |
Dinarelli et al. [44] | news | newspapers | 19C | fr | CRF+PCFG |
Ritze et al. [160] | state | admiralty court rec. | 17C | en-GB | CRF |
Neudecker et al. [134] | news | newspapers | 19C-20C | de, fr, nl | CRF |
Passaro et al. [141] | state | war bulletins | 20C | it | CRF |
Kim et al. [103] | news | newspapers | - | en | CRF |
Ehrmann et al. [50] | news | newspapers | 19C-20C | fr | several |
Aguilar et al. [1] | news | medieval charters | 10C-13C | la | CRF |
Erdmann et al. [60] | literature | classical texts | 1C bce-2C | la | CRF |
Ruokolainen et al. [169] | news | newspapers | 19C-20C | fi | CRF+gaz |
Won et al. [198] | egodocs | letters | 17-18C | en-GB | ensemble |
El Vaigh et al. [58] | news | newspapers (hipe) | 19C-20C | de, en, fr | CRF |
Kogkitsidou et al. [104] | literature | theatre and memoirs | 16C-17C | fr | several |
Deep learning
Riedl et al. [158] | news | newspapers | 19C-20C | de | BiLSTM-CRF | ♢
Rodrigues A. et al. [162] | bibliometry | journals & monographs | 19C-20C | multi | BiLSTM-CRF |
Sprugnoli [178] | literature | travel writing | 19C-20C | en-US | BiLSTM-CRF |
Ahmed et al. [2] | biodiversity | scholarly pub. | 19C-20C | de | BiLSTM-CRF |
Kew et al. [101] | literature | alpine texts | 19C-20C | multi | BiLSTM-CRF |
Schweter et al. [172] | news | newspapers | 19C-20C | de | BiLSTM-CRF | ♢
Labusch et al. [108] | news | newspapers | 19C-20C | de | BERT | ♢
Dekhili and Sadat [41] | news | newspapers (hipe) | 19C-20C | fr | BiLSTM-CRF | ♦
Ortiz S. et al. [137] | news | newspapers (hipe) | 19C-20C | fr, de | BiLSTM-CRF | ♦
Kristanti et al. [105] | news | newspapers (hipe) | 19C-20C | en, fr | BiLSTM-CRF | ♦
Provatorova et al. [152] | news | newspapers (hipe) | 19C-20C | de, en, fr | BiLSTM-CRF | ♦
Todorov et al. [191] | news | newspapers (hipe) | 19C-20C | de, en, fr | BiLSTM-CRF | ♦
Schweter et al. [173] | news | newspapers (hipe) | 19C-20C | de | BiLSTM-CRF | ♦
Labusch et al. [107] | news | newspapers (hipe) | 19C-20C | de, en, fr | BERT | ♦
Ghannay et al. [70] | news | newspapers (hipe) | 19C-20C | fr | | ♦
Boros et al. [25] | news | newspapers (hipe) | 19C-20C | de, en, fr | BERT | ♦
Swaileh et al. [184] | economy | financial yearbooks | 20C | de, fr | BiLSTM-CRF |
Yu et al. [203] | history | state official books | 1 bce-17C | zh | BERT |
Hubková et al. [91] | news | newspapers | 19C-20C | cz | BiLSTM |

Table 5. Historical NER literature overview. Papers are grouped by family of approaches and ordered by publication year. ‘Comp.’ stands for comparable and denotes works whose results are obtained on the same test sets.

6.2.2 Training models. Other works trained NER systems anew on custom material. Early attempts include the experiments of Nissim et al. [135] on the Location entity type in manually transcribed Scottish parish registers of the late 18C and early 19C. They trained a maximum entropy tagger with its in-built standard features on a dataset of ca. 6,000 location mentions and obtained very satisfying performance (94.2% F-score), which they explained by the custom training data and the binary classification task (location vs non-location).

Subsequently, the most frequently used system is the Stanford CRF classifier [63] (https://nlp.stanford.edu/software/CRF-NER.html), particularly on historical newspapers. Working with the press collection of the National Library of Australia, Kim et al. [103] evaluated two Stanford CRF models, the default English one trained on CoNLL-03 English data, and a custom one trained on 600 articles of the Trove collection (the time period of the sample is not specified). Interestingly, the model trained on in-domain data did not outperform the default one, and both yielded F-scores around 75% for Person and Location, with a drop below 50% for Organisation. Neudecker et al. [134] focused on newspaper material in French, German and Dutch from the Europeana collection [132], on which they trained a Stanford CRF model with additional gazetteers. The 4-fold cross-evaluation yielded F-scores in the range of 70-80% for Dutch and French, while no results were reported for German. For both languages, recall was significantly lower than precision. Working on Finnish historical newspapers, Ruokolainen et al. [169] considered Person and Location and trained the Stanford CRF classifier on manually corrected OCRed material, with large gazetteers covering inflected forms. The model gave satisfying performances, with F-scores of 87% (location) and 80% (person) on a test set taken from the same manually corrected data, and of 78% and 71% on non-corrected OCR texts (with recall being lower than precision). This time on French, and taking advantage of the Quaero Old Press corpus, Galibert et al. [67] organised a small evaluation campaign in which three anonymous systems participated. Stochastic systems performed best (especially on noisy entities), with an F-score of 65.2% across all types (person, location and organisation).
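These CRF-based experiments all follow the same template: each token is turned into a set of handcrafted features and a sequence model is learned over them. A minimal sketch with the sklearn-crfsuite package (standing in for the Stanford toolkit; the features and the toy training pair are illustrative, not those of any surveyed system):

```python
# Minimal sketch: training a CRF NE tagger on handcrafted token features.
import sklearn_crfsuite

def token_features(sent, i):
    word = sent[i]
    return {
        "lower": word.lower(),
        "is_title": word.istitle(),
        "suffix3": word[-3:],
        "prev": sent[i - 1].lower() if i > 0 else "<BOS>",
    }

# Toy training data: one tokenised sentence with its gold BIO labels.
X_train = [[token_features(s, i) for i in range(len(s))]
           for s in [["Herr", "Müller", "reiste", "nach", "Wien"]]]
y_train = [["O", "B-PER", "O", "O", "B-LOC"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=100)
crf.fit(X_train, y_train)
print(crf.predict(X_train))
```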
Also working on French newspapers, in the context of HIPE-2020, El Vaigh et al. [58] slightly fine-tuned the CRF baseline provided by the organisers and reached 66% on all types (exact match), two points more than the baseline. Going back in time, Aguilar et al. [1] experimented with NER on manually transcribed Latin medieval charters from the 10C to the 13C. Focusing on person and place names, they used dedicated pre-processing and trained a CRF classifier using the Wapiti toolkit (https://wapiti.limsi.fr/). Results are remarkable, on average in the 90% range for both types, certainly due to the regularity of the documents in terms of names, naming conventions, context and overall structure. Finally, Passaro et al. [141] attempted to extract entities from WWI and WWII Italian official war bulletins. They focused on the traditional entity types, plus Military organisations, Ships and Airplanes. The Stanford system was trained (without gazetteers) on semi-automatically annotated data from the two periods as well as on contemporary Italian news, and various experiments mixing in- vs. out-of-time data were carried out. Results showed that performances are highest when the model is trained on data close in time, that entities of type Location are systematically better recognised, and that custom types (ships, military organisations, etc.) are poorly recognised.

Conclusion on traditional machine learning approaches. Overall, the availability of machine learning-based NER systems that could either be applied as such or be trained on new material greatly fostered a second wave of experiments on historical documents. Settings are quite diverse, and so are the performances, but F-scores are usually in the order of 60-70%, which is significantly lower than those usually obtained on contemporary material (frequently in the 90% range). The Stanford CRF classifier is by far the most commonly used system, as is CRF in general. Not surprisingly, performances are higher when systems are trained on in-domain material.

6.3 Deep Learning Approaches

The latest developments in historical NER are dominated by deep learning techniques, which have recently shown state-of-the-art results for modern NER. Deep learning-based sequence labelling approaches rely on word and character distributed representations and learn sentence or sequence features during end-to-end training. Most models are based on BiLSTM architectures or self-attention networks, and use a CRF layer as tag decoder to capture dependencies between labels (see Section 3.4.3). Building on these results, much work attempts to apply and/or adapt deep learning approaches to historical documents, under different settings and following different strategies.

6.3.1 Preliminary comments. Let us begin with some observations on the main lines of research. In a feature learning context, the crucial point is, by definition, the capacity of the model to learn or reuse appropriate knowledge for the task at hand. Given a situation of time and domain shifts and of resource scarcity, what is at stake for neural-based historical NER approaches is to capture historical language idiosyncrasies (including OCR noise) and to adequately leverage previously learned knowledge — a process made increasingly possible by the usage of pre-trained language models in a transfer learning context. Transfer learning (TL) refers to a set of methods which aim at leveraging knowledge from a source setting and adapting it to a target setting [140].
TL is not new in NLP but has recently been given considerable momentum, in particular sequential transfer learning, where the source task (e.g. language modelling) differs from the target task (e.g. NER). In this supervised TL setting, a widely used process is to first learn representations on a large unlabelled corpus (source), before adapting them to a specific task using labelled data (target). The previously learned model can be adapted to the target task in different ways, the most frequent being weight adaptation, where pre-trained weights are either kept unchanged (‘frozen’) and used as features in the downstream model (feature extraction), or fine-tuned to the target task and used as initialisation of the downstream model (fine-tuning) [168].
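The two weight-adaptation regimes differ in a single switch, namely whether gradients flow into the pre-trained encoder. A minimal sketch with a Hugging Face token-classification model (the label count is arbitrary):

```python
# Minimal sketch: feature extraction vs. fine-tuning of a pre-trained
# encoder under a token-classification (NER) head.
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=9)  # 9 BIO labels, arbitrary here

# Feature extraction ('frozen'): exclude the encoder from training so
# that only the classification head receives gradient updates.
for param in model.bert.parameters():
    param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters (feature extraction): {trainable:,}")

# Fine-tuning: leave (or set) all weights trainable, so the pre-trained
# encoder is updated together with the head during training.
for param in model.bert.parameters():
    param.requires_grad = True
```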
To date, most DL approaches to historical NER have primarily focused on experimenting with a) different input representations, that is to say embeddings of different granularities (character, sub-word, word), learned at the type or token level (static vs. contextualised) and derived from domain data or not (in- vs. out-of-domain), and b) different transfer learning strategies. These aspects are often intermingled in the various experiments reported in the literature, which does not easily lend itself to a clear-cut narrative outline. The discussion which follows is organised according to the demarcation line ‘words vs. words-in-context’, complemented with observations on TL settings and types of networks. However imperfect this line is, it reflects the recent evolution towards incorporating more context and testing all-round language models in historical settings. As a complement, and in order to further frame the discussion, we identified a set of key research questions from the types of experiments reported in publications, summarised in Table 6.

Research question | Experiment | Publication(s)
Input representation: which type of embedding is best? | Test different static embedding algorithms | [178]
 | Test different static embedding granularities | [162]
 | Use modern static embeddings (word2vec, fastText) | [91]
 | Use modern char-level LM embeddings (flair) | [184]
 | Use modern word-level LM embeddings (BERT, ELMo) | [70, 152, 203]
 | Use stacks of modern embeddings | [105, 137, 162]
Transfer learning: how well do modern embeddings transfer to historical texts? What is the impact of in-domain embeddings? Is more task-specific labelled data more helpful than big or in-domain LMs? | Test modern vs. historical static embeddings | [158]
 | Test modern vs. historical char-level LM embeddings | [41, 101, 137, 172, 173]
 | Test modern vs. historical word-level LM embeddings | [2, 108, 172]
 | Test stacks of embeddings | [2, 25, 108, 172, 173, 191]
 | Test feature extraction (frozen) vs. fine-tuning | [91, 152, 162]
 | Test different training corpus sizes | [2, 105, 158]
 | Test cross-corpus model application | [25, 105, 108, 158, 191]
 | Test cross-corpus model training | [158]
Neural architecture: how do neural approaches compare to traditional CRFs? What is the best neural architecture, with which decoder? | Compare BiLSTM and traditional CRF | [137, 158, 162, 178]
 | Compare CRF decoder vs. softmax decoder | [162]
 | Compare BiLSTM and LSTM | [91]
 | Test single- vs. multi-task learning | [162, 191]
 | Compare transformers and BiLSTM | [25]

Table 6. Synthetic view of DL experiments mapped with research questions.

6.3.2 Approaches based on static embeddings. First attempts are based on the state-of-the-art BiLSTM-CRF architecture and investigate the transferability of various types of pre-trained static embeddings to historical material. They all use traditional CRFs as baselines.

Focusing on location names in 19-20C English travelogues (the corpus presented in Section 5.2.2), Sprugnoli [178] compares two classifiers, Stanford CRF and BiLSTM-CRF, and experiments with different word embeddings: GloVe embeddings, based on linear bag-of-words contexts and trained on Common Crawl data [143]; Levy and Goldberg embeddings, produced from the English Wikipedia with a dependency-based approach [115]; and fastText embeddings, also trained on the English Wikipedia but using sub-word information [21]. In addition to these pre-trained vectors, Sprugnoli trains each embedding type afresh on historical data (a subset of the Corpus of Historical American English), ending up with 3x2 input options for the neural model. Both classifiers are trained on a relatively small labelled corpus. Results show that the neural approach performs systematically and remarkably better than the CRF, with a difference ranging from 11 to 14 F-score percentage points depending on the word vectors used (the best F-score is 87.4%). If in-domain supervised training improves the F-score of the Stanford CRF module, it is worth noting that the gain is mainly due to recall, the precision of the English default model remaining higher. In this regard, the neural approach shows a better P/R balance across all settings. With respect to embeddings, linear bag-of-words contexts (GloVe) prove to be more appropriate (at least in this context), with the historical embeddings yielding the highest scores across all metrics (fastText following immediately after). A detailed examination of the results reveals an uneven impact of in-domain embeddings, leading either to higher precision but lower recall (Levy and GloVe), or higher recall but lower precision (fastText and GloVe). Overall, this work shows the positive impact of in-domain training data: the BiLSTM-CRF approach, combined with an in-domain training set and in-domain historical embeddings, systematically outperforms the linear CRF classifier.

In the context of reference mining in the arts and humanities, Rodrigues Alves et al. [162] also investigate the benefit of BiLSTM over traditional CRFs, and of multiple input representations. Their experiments focus on three architectural components: the input layer (word and character-level word embeddings), the prediction layer (softmax and CRF), and the learning setting (multi-task and single-task). The authors consider a domain-specific tag set of 27 entity types covering reference components (e.g. author, title, archive, publisher) and work with 19-21C scholarly books and journals featuring a wide variety of referencing styles and sources (the corpus presented in Section 5.2.2). While character-level word embeddings, likely to help with OCR noise and rare words, are learned either via a CNN or a BiLSTM, word embeddings are based on word2vec and are tested under various settings: present or not, pre-trained on the in-domain raw corpus or randomly initialised, and frozen or fine-tuned on the labelled corpus during training. Among these settings, the one including in-domain word embeddings further fine-tuned during training and a CRF prediction layer yields the best results (89.7% F-score).
Character-level embeddings provide a minor yet positive contribution, and are better learned via a BiLSTM than with a CNN. The BiLSTM architecture outperforms the CRF baseline by a large margin (+7%), except for very infrequent tags. Overall, this work confirms the importance of word information (rather in-domain, though results with generic embeddings were not reported) and the remarkable capacity of a BiLSTM network to learn features, better decoded by a CRF classifier than by a softmax function.

Working with Czech historical newspapers (the corpus presented in Section 5.2.1), Hubková et al. [91] target the recognition of five generic entity types. The authors experiment with two neural architectures, LSTM and BiLSTM, followed by a softmax layer. Both are trained on a relatively small labelled corpus (4k entities) and fed with modern fastText embeddings (as released by the fastText library) under three scenarios: randomly initialised, frozen, and fine-tuned. Character-level word embeddings are not used. Results show that the BiLSTM model based on pre-trained embeddings with no further fine-tuning performs best (73% F-score). The authors do not comment on the performance degradation resulting from fine-tuning, but one reason might be the small size of the training data.

Rather than aiming at calibrating a system to a specific historical setting, Riedl et al. [158] adopt a more generic stance and investigate the possibility of building a German NER system that performs at the state of the art on both contemporary and historical texts. The underlying question—whether one type of model can be optimised to perform well across settings—naturally resonates with the needs of cultural heritage practitioners (see also Schweter et al. [172] and Labusch et al. [108] hereafter). The experimental settings consist of: two sets of German labelled corpora, with large contemporary datasets (CoNLL-03 and GermEval) and small historical ones (from the Friedrich Teßmann and Austrian National libraries); two types of classifiers, CRFs (Stanford and GermaNER) and BiLSTM-CRF; and finally, for the neural system, the usage of fastText embeddings derived from generic (Wikipedia) and in-domain (Europeana corpus) data. On this basis, the authors perform three experiments. The first investigates the performances of the two types of systems on the contemporary datasets. On both GermEval and CoNLL, the BiLSTM-CRF models outperform the traditional CRF ones, with Wikipedia-based embeddings yielding better results than the Europeana-based ones. It is noteworthy that the GermaNER CRF model performs better than the LSTM of Lample et al. [110] on CoNLL-03, but suffers from low recall compared to the BiLSTM. The second experiment focuses on all-corpora crossing, with each system being trained and evaluated on all possible combinations of contemporary and historical corpus pairs. Unsurprisingly, the best results are obtained when models are trained and evaluated on the same material. Interestingly, CRFs perform better than BiLSTM in the historical setting (i.e. train and test sets from historical corpora) by quite a margin, suggesting that, although not optimised for historical texts, CRFs are more robust than BiLSTM when faced with small training datasets. The type of embeddings (Wikipedia vs. Europeana) plays a minor role in the BiLSTM performance in the historical setting.
Finally, the third experiment explores how to overcome the neural network's dependence on large training data through domain-adaptation transfer learning: the model is first trained on a contemporary corpus until convergence, and then further trained on a historical one for a few more epochs. Results show consistent benefits for BiLSTM on historical datasets (ca. +4 F-score percentage points). In general, the main difficulties relate to OCR mistakes and to words wrongly hyphenated at line breaks, as well as to the Organisation type. Overall, this work shows that BiLSTM and CRF achieve similar performances in a small-data historical setting, but that BiLSTM-CRF outperforms CRF when supplied with enough data or in a transfer learning setting.

This first set of work confirms the suitability of the state-of-the-art BiLSTM-CRF approach for historical documents, with the major advantage of not requiring feature engineering. Provided that there is enough in-domain training data, this architecture obtains better performances than traditional CRFs (the latter performing on par or better otherwise). In-domain pre-training of static word embeddings seems to contribute positively, although to various degrees depending on the experimental settings and embedding types. Sub-word information (either character embeddings or character-based word embeddings) also appears to have a positive effect.

6.3.3 Approaches based on character-level LM embeddings. The approaches described above rely on static, token-level word representations which fail to capture context information. This drawback can be overcome by context-dependent representations derived from the task of modelling language, either as a distribution over characters, such as the Flair contextual string embeddings [3], or over words, such as BERT [43] and ELMo [144] (see Section 3.3.3). Such representations have boosted the performance of modern NER and are also used in the context of historical texts. This section considers work based on character-based contextualised embeddings (flair).

In the context of the CLEF-HIPE-2020 shared task [53], Dekhili et al. [41] proposed different variations of a BiLSTM-CRF network, with and without the in-domain HIPE flair embeddings and/or an attention layer. The gains of adding one, the other, or both are not easy to interpret, with uneven performances of the model variants across NE types. Their overall F-scores range from 62% to 65% under the strict evaluation regime. For some entity types the CRF baseline is better than the neural models, and the benefit of in-domain embeddings is overall more evident than that of the attention layer (which proved more useful in handling metonymic entities).

Kew et al. [101] address the recognition of toponyms in an alpine heritage corpus consisting of over 150 years of mountaineering articles in five languages (mainly from the Swiss and British Alpine Clubs). Focusing on fine-grained entity types (city, mountain, glacier, valley, lake, and cabin), the authors compare three approaches. The first is a traditional gazetteer-based approach complemented with a few heuristics (illustrated in the sketch below), which achieves high precision across types (88% P, 73% F-score), and even very high precision (>95%) for infrequent categories with regular patterns. Suitable for reliable location-based search but suffering from low recall, this approach is then compared with a BiLSTM-CRF architecture.
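As a toy illustration of such a gazetteer-based approach (the entries and the suffix heuristic mentioned in the comment are invented for the example, not taken from Kew et al.):

```python
# Hypothetical gazetteer mapping surface forms to fine-grained toponym types.
GAZETTEER = {
    "Matterhorn": "mountain",
    "Aletschgletscher": "glacier",
    "Zermatt": "city",
}

def tag_toponyms(tokens):
    """Greedy single-token gazetteer matching; real systems add heuristics
    (multi-word spans, suffix rules such as '-gletscher' -> glacier, etc.)."""
    return [(tok, GAZETTEER.get(tok, "O")) for tok in tokens]

print(tag_toponyms("Von Zermatt zum Matterhorn".split()))
# [('Von', 'O'), ('Zermatt', 'city'), ('zum', 'O'), ('Matterhorn', 'mountain')]
```

Such a lookup is precise whenever a surface form is unambiguous and correctly OCRed, which explains the high precision and low recall reported above.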
The neural system is fed with stacked embeddings, composed of in-domain contextual string embeddings pre-trained on the alpine corpus concatenated with general-purpose fastText word embeddings pre-trained on web data, and is trained on a silver training set containing 28k annotations obtained via the application of the gazetteer-based approach. The model leads to an increase in recall for the most frequent categories, without degrading precision scores (76% F-score). This shows the generalisation capacity of the neural approach in combination with context-sensitive string embeddings and given sufficient training data. Finally, the authors experiment with crowd-corrected annotations and observe that even a small number of corrections on the silver data has a positive impact (+3 F-score percentage points).

Swaileh et al. [184] target even more specific entity types in French and German financial yearbooks from the first half of the 20C. They apply a BiLSTM-CRF network trained on custom data and fed with modern flair embeddings. Results are very good (between 85% and 95% F-score depending on the book section), with the CRF baseline and the BiLSTM model performing on par for the French books, and BiLSTM outperforming CRF for the German one, which has lower OCR quality. Overall, these performances can be explained by the regularity of the structure and language as well as the quality of the considered material, resulting in stable contexts and non-noisy entities.

6.3.4 Approaches based on word-level LM embeddings. The release of pre-trained contextualised language model-based word embeddings such as BERT (based on transformers) and ELMo (based on LSTMs) pushed the upper bound of modern NER performances further. They show promising results either as replacements for or in combination with other embedding types, and offer the possibility of further fine-tuning [116]. If they are becoming a new paradigm of modern NER, the same seems to be true of historical NER.

Using pre-trained modern embeddings. We first consider work based on pre-trained modern LM-based word embeddings (BERT or ELMo) without extensive comparison experiments. These systems make use of BiLSTM or transformer architectures. Working on the “Chinese Twenty-Four Histories”, a set of Chinese official history books covering a period from 3000 BCE to the 17C, Yu et al. [203] face both the complexity of classical Chinese and the absence of appropriate training data in their attempt to recognise Person and Location. Their BiLSTM-CRF model is trained on an NE-annotated modern Chinese corpus and makes use of modern Chinese BERT embeddings in a feature extraction setting (frozen; see the sketch below). Evaluated on a (small) dataset representative of the time span of the target corpus, the model achieves relatively good performances (from 72% to 82% F-score depending on the book), with a fairly good P/R balance, better results for Location than for Person, and better results on recent books. Given the completely 'modern' setting of embeddings and labelled training data, these results show the benefit of large LM-based embeddings—keeping in mind the small size of the test set and perhaps the regularity of entity occurrences in the material, which is not detailed in the paper.
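A minimal sketch of this frozen, feature-extraction usage of BERT, written with the Huggingface transformers library; the model name is the standard modern Chinese BERT checkpoint, and the downstream BiLSTM-CRF that would consume the features is omitted.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Frozen ('feature extraction') use of modern BERT embeddings: the LM
# provides contextual vectors and is never updated during NER training.
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
bert = AutoModel.from_pretrained("bert-base-chinese").eval()
for param in bert.parameters():
    param.requires_grad = False  # keep the language model frozen

def embed(sentence):
    """Return contextual vectors for one sentence, to be fed to a
    separately trained tagger (e.g. a BiLSTM-CRF)."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = bert(**enc)
    return out.last_hidden_state  # (1, seq_len, 768)

features = embed("十二月，曹操東征。")
```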
Also based on the bare usage of state-of-the-art LM-based representations is a set of work from the HIPE-2020 evaluation campaign. These works tackle the recognition of five entity types in about 200 years of historical newspapers in French, English, and German (corpus presented in Section 5.2.1). The task included various NER settings; however, only coarse literal NE recognition is considered here.

Ortiz Suárez et al. [137] focused on French and German. They first pre-process the newspaper line-based format (or column segments) into sentence-split segments before training a BiLSTM-CRF model using a combination of modern static fastText and contextualised ELMo embeddings as input representations. They favoured ELMo over BERT because of its capacity to handle long sequences and its dynamic vocabulary, thanks to its CNN character embedding layer. In-domain fastText embeddings provided by the organisers were tested but performed worse. Their models ranked third on both languages during the shared task, with strict F-scores of 79% and 65% for French and German respectively. The considerably lower performance of their improved CRF baseline illustrates the advantage of neural models based on contextual embeddings. Ablation experiments on sentence splitting showed an improvement of 3.5 F-score percentage points on French data (except for Location), confirming the importance of proper context for neural NER tagging.

Running for French and English, Kristanti et al. [105] also make use of a BiLSTM-CRF relying on modern fastText and ELMo embeddings. In the absence of a training set for English, the authors use the CoNLL-2012 corpus, while for French the training data is further augmented with another NE-annotated journalistic corpus from 1990, which proved to have a positive impact. They scored 63% and 52% in terms of strict F-score for French and English respectively. Compared to the French results of Ortiz Suárez et al., Kristanti et al. use the same French embeddings but a different implementation framework and different hyper-parameters, and do not apply sentence segmentation.

Finally, still within the HIPE-2020 context, two teams tested pre-trained LM embeddings with transformer-based architectures. Provatorova et al. [152] proposed an approach based on the fine-tuning of BERT models using Huggingface's transformers framework for the shared task's three languages, using the cased multilingual BERT base model for French and German and the cased monolingual BERT base model for English. They used the CoNLL-03 data for training their English model and the HIPE data for the others, and additionally set up a majority-vote ensemble of five fine-tuned model instances per language in order to improve the robustness of the approach (see the sketch below). Their models achieved F-scores of 68%, 52% and 47% for French, German and English respectively. Ghannay et al. [70] used CamemBERT, a multi-layer bidirectional transformer similar to RoBERTa [119, 124], initialised with a pre-trained modern French CamemBERT model and completed with a CRF tag decoder. This model obtained the second-best results for French, with 81% strict F-score.

Even when learned from modern data, pre-trained LM-based word embeddings encode rich prior knowledge that effectively supports neural models trained on (usually) small historical training sets. As for HIPE-related systems, it should be noted that word-level LM embeddings systematically lead to slightly higher recall than precision, demonstrating their powerful generalisation capacities, even on noisy texts.
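A minimal sketch of such a majority-vote ensemble with the Huggingface transformers library; the checkpoint paths are hypothetical, and the alignment of subword tokens back to words, which a real HIPE submission must handle, is omitted here.

```python
import torch
from collections import Counter
from transformers import AutoModelForTokenClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
# Five instances of the same model fine-tuned with different random seeds
# (paths are assumed for the example; all share one label inventory).
models = [AutoModelForTokenClassification.from_pretrained(f"hipe_ner_seed{i}").eval()
          for i in range(5)]

def predict_ensemble(sentence):
    """Label a sentence with each model and keep the majority label
    per (subword) token position."""
    enc = tokenizer(sentence, return_tensors="pt")
    votes = []
    with torch.no_grad():
        for model in models:
            logits = model(**enc).logits               # (1, seq_len, num_labels)
            votes.append(logits.argmax(dim=-1).squeeze(0).tolist())
    # Transpose to one list of five votes per token, then take the majority.
    return [Counter(token_votes).most_common(1)[0][0]
            for token_votes in zip(*votes)]
```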
Using modern and historical pre-trained embeddings. As with static embeddings, it is logical to expect higher performances from LM embeddings pre-trained on historical data, whether in combination with modern ones or not. The set of work reviewed here explores this perspective.

Ahmed et al. [2] work on the recognition of universal and domain-specific entities in German historical biodiversity literature (corpus presented in Section 5.2.2). They experiment with two BiLSTM-CRF implementations (their own and the Flair framework), which both use modern token-level German word embeddings and are trained on the BIOfid corpus. The experiments consist of adding richer representations (modern flair embeddings, additionally completed by newly trained ELMo embeddings or BERT base multilingual cased embeddings) or adding more task-specific training data (GermEval, CoNLL-03 and BIOfid). The models perform more or less equally, and the authors explain the low gain of in-domain ELMo embeddings by the small size of the training data (100k sentences). Higher gains come with larger labelled data; however, the absence of ablation tests hinders a complete understanding of the contribution of the historical part of this labelled data, and the use of two implementation frameworks does not warrant full comparability of results.

Both Schweter et al. [172] and Labusch et al. [108] build on the work of Riedl et al. [158] and try to improve NER performances on the same historical German evaluation datasets, thereby constituting (with HIPE-2020) one of the few sets of comparable experiments. Schweter et al. seek to offset the lack of training data by using only unlabelled data via pre-trained embeddings and language models. They use the Flair framework to train and combine (“stack”) their language models, and to train a BiLSTM-CRF model. Their first experiment consists of testing various static word representations: character embeddings learned during training, fastText embeddings pre-trained on Wikipedia or Common Crawl (with no sub-word information), and the combination of all of these. While Riedl et al. experimented with similar settings (character embeddings and pre-trained modern and historical fastText embeddings), it appears that combining Wikipedia and Common Crawl embeddings leads to better performances, even higher than the transfer learning setting of Riedl et al., which uses more labelled data. As a second experiment, Schweter et al. use pre-trained LM embeddings: flair embeddings newly trained on two historical corpora having temporal overlaps with the test data (see the sketch below), and two modern pre-trained BERT models (multilingual and German). On both historical test sets, in-domain LMs yield the best results (outperforming those of Riedl et al.), all the more so when the temporal overlap between the LM training data and the task-specific training data is large. This demonstrates that the selection of the language model corpus plays an important role, and that unlabelled data close in time might have more impact than more (and difficult to obtain) labelled data.
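Training such historical character LMs is supported by the Flair framework itself; below is a minimal sketch, assuming a folder of historical German plain text split into train/valid/test parts as Flair expects (paths and hyper-parameters are illustrative, not Schweter et al.'s exact configuration).

```python
from flair.data import Dictionary
from flair.models import LanguageModel
from flair.trainers.language_model_trainer import LanguageModelTrainer, TextCorpus

# Train a forward character-level LM on historical German plain text.
dictionary = Dictionary.load("chars")  # Flair's default character dictionary
corpus = TextCorpus("data/historical_de", dictionary,
                    True,                  # forward (not backward) LM
                    character_level=True)

language_model = LanguageModel(dictionary, is_forward_lm=True,
                               hidden_size=1024, nlayers=1)
trainer = LanguageModelTrainer(language_model, corpus)
trainer.train("models/hist-de-forward", sequence_length=250,
              mini_batch_size=100, max_epochs=10)
```

The resulting checkpoint can then be loaded as FlairEmbeddings('models/hist-de-forward/best-lm.pt') and combined with other representations via StackedEmbeddings, which is essentially how the stacking described above is realised in the Flair framework.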
With the objective of developing a versatile approach that performs decently on texts of different epochs without intense adaptation, Labusch et al. [108] experiment with BERT under different pre-training and fine-tuning settings. In a nutshell, they apply a model based on multilingual BERT embeddings, which is further pre-trained on large OCRed historical German unlabelled data (the Digital Collection of the Berlin State Library) and subsequently fine-tuned on several NE-labelled datasets (CoNLL-03, GermEval, and the German part of the Europeana NER corpora). Tested across different contemporary/historical dataset pairs (similar to the all-corpora crossing of Riedl et al. [158]), it appears that additional in-domain pre-training is beneficial most of the time for historical pairs, while performances worsen on contemporary ones. The combination of several task-specific training datasets has a positive yet smaller impact than BERT pre-training, as already observed by Schweter et al. [172]. Overall, this work shows that an appropriately pre-trained BERT model delivers decent recognition performances in a variety of settings. In order to further improve them, the authors propose using the BERT large instead of the BERT base model, building more historical labelled training data, and improving the OCR quality of the collections.

The same spirit of combinatorial optimisation drove the work of Todorov et al. [191] and Schweter et al. [173] in the context of HIPE-2020. Todorov et al. build on the bidirectional LSTM-CRF architecture of Lample et al. and introduce a multi-task approach by splitting the top layers for each entity type. Their general embedding layer combines a multitude of embeddings at the level of characters, sub-words and words, some newly trained by the authors, as well as pre-trained BERT and HIPE's in-domain fastText embeddings. They also vary the segmentation of the input: line segmentation, document segmentation, as well as sub-document segmentation for long documents. No additional NER training material was used for German and French, while for English the Groningen Meaning Bank was adapted for training. Results suggest that splitting the top layers for each entity type is not beneficial. However, the addition of various embeddings improves performance, as shown in the very detailed ablation test report. In this regard, character-level and BERT embeddings are particularly important, while in-domain embeddings contribute mainly to recall. Fine-tuning pre-trained embeddings did not prove beneficial. Using (sub-)document segmentation clearly improved results when compared to the line segmentation found in newspapers, emphasising once again the importance of context. Post-campaign F-scores for coarse literal NER are 75% and 66% for French and German respectively (strict setting). English experiments yielded poor results, most likely due to the temporal and linguistic gaps between training and test data, and to the rather bad OCR quality of the material (as for Provatorova et al. [152] and Kristanti et al. [105]).

For their part, Schweter et al. [173] focused on German and experimented with ensembling different word and sub-word embeddings (modern fastText, and historical self-trained and HIPE flair embeddings), as well as transformer-based language models (trained on modern and historical data), all integrated within the neural Flair NER tagging framework [3]. They used a state-of-the-art BiLSTM with an on-top CRF layer as proposed by [89], and performed sentence splitting and hyphen normalisation as pre-processing. To identify the optimal combination of embeddings and LMs, the authors first selected the best embeddings of each type before combining them. Using richer representations (fastText