---
language: en
license: apache-2.0
---

Using the DistilRoBERTa model as a starting point, the ClimateBERT language model is additionally pretrained on a text corpus comprising climate-related research paper abstracts, corporate and general news, and company reports. The underlying methodology is described in our language model research paper.
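
As a minimal usage sketch with the Hugging Face `transformers` library: the model ID below is an assumption for illustration; substitute this repository's checkpoint name. Since the model is pretrained with a masked language modeling objective, it can be tried directly with the `fill-mask` pipeline.

```python
# Minimal sketch: load the model and fill in a masked token.
# The model ID is a placeholder; replace it with this repository's ID.
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline

model_name = "climatebert/distilroberta-base-climate-f"  # assumed checkpoint ID

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# RoBERTa-style models use "<mask>" as the mask token.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("Climate change is one of the biggest <mask> of our time."))
```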

## BibTeX entry and citation info

```bibtex
@article{wkbl2021,
  title={ClimateBERT: A Pretrained Language Model for Climate-Related Text},
  author={Webersinke, Nicolas and Kraus, Mathias and Bingler, Julia and Leippold, Markus},
  journal={arXiv preprint arXiv:2110.12010},
  year={2021}
}
```