Model Card for sbb_ner
A BERT model trained on three German corpora containing contemporary and historical texts for named entity recognition tasks. It predicts the classes PER, LOC and ORG.
The model was developed by the Berlin State Library (SBB) in the QURATOR project.
Table of Contents
- Model Card for sbb_ner
- Table of Contents
- Model Details
- Uses
- Bias, Risks, and Limitations
- Training Details
- Evaluation
- Model Examination
- Environmental Impact
- Technical Specifications [optional]
- Citation
- Glossary [optional]
- More Information [optional]
- Model Card Authors [optional]
- Model Card Contact
- How to Get Started with the Model
Model Details
Model Description
A BERT model trained on three German corpora containing contemporary and historical texts for Named Entity Recognition (NER) tasks. It predicts the classes PER, LOC and ORG.
- Developed by: Kai Labusch, Clemens Neudecker, David Zellhöfer
- Shared by [Optional]: Staatsbibliothek zu Berlin / Berlin State Library
- Model type: Language model
- Language(s) (NLP): de
- License: apache-2.0
- Parent Model: The BERT base multilingual cased model as provided by Google
- Resources for more information:
Uses
Direct Use
The model can directly be used to perform NER on historical German texts obtained by Optical Character Recognition (OCR) from digitized documents. Supported entity types are PER, LOC and ORG.
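NER models of this kind emit one tag per token, typically in a BIO-style scheme; turning such a tag sequence into entity spans is a small decoding step. The following is a minimal, self-contained sketch of that step; the tokens and tags are illustrative examples, not actual output of this model:

```python
# Minimal sketch: collect (entity_type, text) spans from BIO tags.
def decode_bio(tokens, tags):
    entities, current, label = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                entities.append((label, " ".join(current)))
            current, label = [token], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == label:
            current.append(token)
        else:
            if current:
                entities.append((label, " ".join(current)))
            current, label = [], None
    if current:
        entities.append((label, " ".join(current)))
    return entities

# Illustrative input, not model output:
tokens = ["Die", "Staatsbibliothek", "zu", "Berlin", "liegt", "in", "Berlin", "."]
tags   = ["O",   "B-ORG",            "I-ORG", "I-ORG", "O",    "O",  "B-LOC",  "O"]
print(decode_bio(tokens, tags))
# → [('ORG', 'Staatsbibliothek zu Berlin'), ('LOC', 'Berlin')]
```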
Downstream Use
The model has been pre-trained on 2,333,647 pages of OCR text from the digitized collections of the Berlin State Library. It is therefore adapted to OCR-error-prone historical German texts and may be used for applications that involve such material.
Out-of-Scope Use
More information needed.
Bias, Risks, and Limitations
The identification of named entities in historical and contemporary texts contributes to knowledge creation, aiming at enhancing scientific research and the discoverability of information in digitized historical texts. The model was developed to improve this knowledge creation process, an endeavour that is not for profit. The results of the applied model are freely accessible to the users of the digital collections of the Berlin State Library. Against this backdrop, no ethical challenges could be identified. As a limitation, it has to be noted that considerable performance gains for historical texts could still be achieved by adding more historical ground-truth data.
Recommendations
The general observation that historical texts often remain silent about subjects from the colonies, addressing them only anonymously, cannot be remedied by named entity recognition. Disambiguation of named entities proves to be challenging beyond the task of automatically identifying them: broad variation in the spelling of person and place names due to non-normalized orthography and linguistic change, as well as context-dependent changes in the naming of places, adds to this challenge. Historical texts, especially newspapers, contain narrative descriptions and visual representations of minorities and disadvantaged groups without naming them; de-anonymizing such persons and groups is a research task in itself, which has only begun to be tackled in the 2020s.
Training Details
Training Data
- CoNLL 2003 German Named Entity Recognition Ground Truth (Tjong Kim Sang and De Meulder, 2003)
- GermEval Konvens 2014 Shared Task Data (Benikova et al., 2014)
- DC-SBB Digital Collections of the Berlin State Library (Labusch and Zellhöfer, 2019)
- Europeana Newspapers Historic German Datasets (Neudecker, 2016)
Training Procedure
The BERT model is trained directly for NER following the method proposed by the BERT authors (Devlin et al., 2018). Unsupervised pre-training was applied on 2,333,647 pages of unlabeled historical German text from the Berlin State Library digital collections (DC-SBB), followed by supervised pre-training on two datasets with contemporary German text, conll2003 and germeval_14; the two stages are combined, with unsupervised pre-training performed first and supervised pre-training second. Performance on different combinations of training and test sets was explored, and a 5-fold cross-validation as well as a comparison with state-of-the-art approaches were conducted.
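The staged regime described above can be sketched schematically. The function names below are illustrative placeholder stubs that only record the order of the stages; they are not the authors' actual training code:

```python
# Schematic of the training stages described above; the stubs record
# the order of stages, they do not train anything.
stages_run = []

def unsupervised_pretraining(corpus):
    # Masked-language-model pre-training on unlabeled OCR text (DC-SBB).
    stages_run.append(("unsupervised", corpus))

def supervised_pretraining(dataset):
    # NER pre-training on labeled contemporary German ground truth.
    stages_run.append(("supervised", dataset))

def ner_training(dataset):
    # Final supervised NER training.
    stages_run.append(("finetune", dataset))

# Order used in the paper: unsupervised first, supervised second,
# then the final NER training.
unsupervised_pretraining("DC-SBB (2,333,647 pages)")
supervised_pretraining("conll2003 + germeval_14")
ner_training("NER ground truth")

print([stage for stage, _ in stages_run])
# → ['unsupervised', 'supervised', 'finetune']
```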
Preprocessing
The model was pre-trained on 2,333,647 pages of German texts from the digitized collections of the Berlin State Library. The texts have been obtained by OCR from the page scans of the documents.
Speeds, Sizes, Times
Since the model is an incarnation of the original BERT model published by Google, all speed, size and time considerations of that original model apply.
Evaluation
The model has been evaluated by 5-fold cross-validation on several German historical OCR ground truth datasets. See publication for details.
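The 5-fold cross-validation mentioned above partitions a dataset into five disjoint folds, each serving once as the test set. A minimal index-splitting sketch (the placeholder indices stand in for documents of a ground-truth dataset, not the actual data):

```python
# Minimal sketch of 5-fold cross-validation index splitting.
def k_fold_splits(n_items, k=5):
    indices = list(range(n_items))
    folds = [indices[i::k] for i in range(k)]  # round-robin fold assignment
    for i in range(k):
        test = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        yield train, test

# Each item appears in exactly one test fold across the 5 splits.
all_test = [idx for _, test in k_fold_splits(10) for idx in test]
print(sorted(all_test))
# → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```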
Testing Data, Factors & Metrics
Testing Data
Two different test sets contained in the CoNLL 2003 German Named Entity Recognition Ground Truth, i.e. TEST-A and TEST-B, have been used for testing (DE-CoNLL-TEST). Additionally, historical OCR-based ground truth datasets have been used for testing; see the publication and the More Information section below for details.
Factors
The evaluation focuses on NER in historical German documents, see publication for details.
Metrics
The performance metrics used in the evaluation are precision, recall and F1-score. See the publication for the actual results in terms of these metrics.
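For NER these metrics are computed at the entity level: a prediction counts as correct only if both the entity type and its span match the ground truth exactly. A minimal, self-contained sketch (the entities below are illustrative `(type, start, end)` tuples, not evaluation data from the paper):

```python
# Minimal sketch of entity-level precision, recall and F1-score.
def precision_recall_f1(predicted, gold):
    pred, true = set(predicted), set(gold)
    tp = len(pred & true)  # exact matches of type and span
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(true) if true else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Illustrative example: the LOC span is off by one token, so only
# the PER entity counts as a true positive.
gold = [("PER", 0, 2), ("LOC", 5, 6), ("ORG", 8, 11)]
pred = [("PER", 0, 2), ("LOC", 5, 7)]
p, r, f1 = precision_recall_f1(pred, gold)
print(round(p, 2), round(r, 2), round(f1, 2))
# → 0.5 0.33 0.4
```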
Results
See publication.
Model Examination
See publication.
Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: V100
- Hours used: Roughly 1 to 2 weeks for pre-training; roughly 1 hour for the final NER training.
- Cloud Provider: No cloud.
- Compute Region: Germany.
- Carbon Emitted: More information needed
Technical Specifications [optional]
Model Architecture and Objective
See original BERT publication.
Compute Infrastructure
Training and pre-training have been performed on a single V100 GPU.
Hardware
See above.
Software
See published code on GitHub.
Citation
BibTeX:
@article{labusch_bert_2019,
  title = {{BERT} for {Named} {Entity} {Recognition} in {Contemporary} and {Historical} {German}},
  volume = {Conference on Natural Language Processing},
  url = {https://konvens.org/proceedings/2019/papers/KONVENS2019_paper_4.pdf},
  abstract = {We apply a pre-trained transformer based representational language model, i.e. BERT (Devlin et al., 2018), to named entity recognition (NER) in contemporary and historical German text and observe state of the art performance for both text categories. We further improve the recognition performance for historical German by unsupervised pre-training on a large corpus of historical German texts of the Berlin State Library and show that best performance for historical German is obtained by unsupervised pre-training on historical German plus supervised pre-training with contemporary NER ground-truth.},
  language = {en},
  author = {Labusch, Kai and Neudecker, Clemens and Zellhöfer, David},
  year = {2019},
  pages = {9},
}
APA:
Labusch, K., Neudecker, C., & Zellhöfer, D. (2019). BERT for Named Entity Recognition in contemporary and historical German. Proceedings of the Conference on Natural Language Processing (KONVENS 2019).
Glossary [optional]
More information needed.
More Information [optional]
In addition to what has been documented above, it should be noted that there are two NER Ground Truth datasets available:
- Data provided for the 2020 HIPE campaign on named entity processing
- Data provided for the 2022 HIPE shared task on named entity processing
Furthermore, two papers have been published on NER/EL, using BERT:
- Entity Linking in Multilingual Newspapers and Classical Commentaries with BERT
- Named Entity Disambiguation and Linking Historic Newspaper OCR with BERT
Model Card Authors [optional]
Model Card Contact
Questions and comments about the model can be directed to Kai Labusch at [email protected]; questions and comments about the model card can be directed to Jörg Lehmann at [email protected].
How to Get Started with the Model
How to get started with this model is explained in the README file of the accompanying GitHub repository.