---
language: en
license: apache-2.0
---

Using the DistilRoBERTa model as a starting point, the ClimateBERT language model is additionally pretrained on a text corpus comprising climate-related research paper abstracts, corporate and general news, and company reports. The underlying methodology can be found in our language model research paper.
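As a minimal sketch, the model can be used with the Hugging Face `transformers` library; the model ID below is a placeholder for this repository's Hub path, and the example sentence is illustrative only.

```python
from transformers import pipeline

# Placeholder model ID: replace with this repository's path on the Hugging Face Hub.
model_name = "climatebert/distilroberta-base-climate-f"

# DistilRoBERTa is pretrained with masked language modeling, so a fill-mask
# pipeline exercises the model directly; RoBERTa-style models use the <mask> token.
fill_mask = pipeline("fill-mask", model=model_name)

# Illustrative climate-related sentence with a masked token.
print(fill_mask("Rising sea levels are a consequence of climate <mask>."))
```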

### BibTeX entry and citation info

```bibtex
@article{wkbl2021,
  title={ClimateBERT: A Pretrained Language Model for Climate-Related Text},
  author={Webersinke, Nicolas and Kraus, Mathias and Bingler, Julia and Leippold, Markus},
  journal={arXiv preprint arXiv:2110.12010},
  year={2021}
}
```