AI & ML interests

Pretraining Historical Multilingual Language Models

hmTEAMS

Historical Multilingual TEAMS Models. The following languages are currently covered:

  • English (British Library Corpus - Books)
  • German (Europeana Newspaper)
  • French (Europeana Newspaper)
  • Finnish (Europeana Newspaper, Digilib)
  • Swedish (Europeana Newspaper, Digilib)
  • Dutch (Delpher Corpus)
  • Norwegian (NCC Corpus)

More details can be found in our GitHub repository.
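
For quick orientation, the sketch below shows how one of these checkpoints could be loaded with the Hugging Face transformers library and used to extract contextual embeddings. The repository id, the example sentence, and the choice of the discriminator checkpoint are illustrative assumptions, not the definitive usage; see the GitHub repository for authoritative instructions.

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Illustrative repository id -- check the hmTEAMS organization on the Hub for the actual name.
model_id = "hmteams/teams-base-historic-multilingual-discriminator"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# Encode a short historical-style sentence and extract contextual embeddings.
sentence = "In the year 1848 the newspapers reported on the revolution."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One hidden-state vector per subword token: (batch, tokens, hidden_size).
print(outputs.last_hidden_state.shape)
```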

Leaderboard

We evaluate our pretrained language models on various datasets from HIPE-2020, HIPE-2022, and Europeana. The following table gives an overview of the datasets used.

Language | Datasets
English  | AjMC, TopRes19th
German   | AjMC, NewsEye, HIPE-2020
French   | AjMC, ICDAR-Europeana, LeTemps, NewsEye, HIPE-2020
Finnish  | NewsEye
Swedish  | NewsEye
Dutch    | ICDAR-Europeana

All results can be found in the hmLeaderboard.
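
Since these benchmarks are token-level NER tasks, a typical setup is to fine-tune the pretrained encoder with a token-classification head. The sketch below only shows how such a head could be instantiated on top of an hmTEAMS checkpoint; the repository id and label set are illustrative assumptions, and the actual training and evaluation setup is documented in the hmLeaderboard.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

# Illustrative coarse NER label set in IOB format (assumed, not the official HIPE tag set).
labels = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC", "B-ORG", "I-ORG"]
id2label = dict(enumerate(labels))
label2id = {label: i for i, label in id2label.items()}

model_id = "hmteams/teams-base-historic-multilingual-discriminator"  # illustrative repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(
    model_id,
    num_labels=len(labels),
    id2label=id2label,
    label2id=label2id,
)

# A single (still untrained) forward pass: logits have shape (batch, tokens, num_labels).
inputs = tokenizer("Le Temps reported from Paris.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)
```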

Acknowledgements

We thank Luisa März, Katharina Schmid and Erion Çano for their fruitful discussions about Historical Language Models.

Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC). Many thanks for providing access to the TPUs ❤️