---
language: en
license: apache-2.0
---
Using the [DistilRoBERTa](https://huggingface.co/distilroberta-base) model as a starting point, the ClimateBERT language model is additionally pretrained on a text corpus comprising climate-related research paper abstracts, corporate and general news, and reports from companies. The underlying methodology can be found in our [language model research paper](https://arxiv.org/abs/2110.12010).
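Since the model shares DistilRoBERTa's masked-language-modeling architecture, it can be loaded with the standard `transformers` classes. The sketch below assumes a checkpoint name such as `climatebert/distilroberta-base-climate-f`; the exact repository id is not stated in this card, so substitute the id of the checkpoint you intend to use.

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

# Hypothetical repo id for illustration; replace with the published
# ClimateBERT checkpoint name on the Hugging Face Hub.
model_name = "climatebert/distilroberta-base-climate-f"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# DistilRoBERTa-based checkpoints use <mask> as the mask token.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("Companies must disclose their <mask> emissions."))
```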
### BibTeX entry and citation info
```bibtex
@article{wkbl2021,
  title={ClimateBERT: A Pretrained Language Model for Climate-Related Text},
  author={Webersinke, Nicolas and Kraus, Mathias and Bingler, Julia and Leippold, Markus},
  journal={arXiv preprint arXiv:2110.12010},
  year={2021}
}
```