---
language: en
license: apache-2.0
---

Using the [DistilRoBERTa](https://huggingface.co/distilroberta-base) model as a starting point, the ClimateBERT language model is additionally pretrained on a text corpus comprising climate-related research paper abstracts, corporate and general news, and company reports. The underlying methodology can be found in our [language model research paper](https://arxiv.org/abs/2110.12010).
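
Since the model shares DistilRoBERTa's architecture, it can be loaded as a masked language model, for example through the 🤗 Transformers fill-mask pipeline. The snippet below is a minimal sketch; the model identifier used here is an assumption and should be replaced with the actual repository id of this model.

```python
# Minimal usage sketch: querying the model with the fill-mask pipeline.
# NOTE: the model id below is an assumed placeholder; substitute the
# actual Hugging Face repository id of this model card.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="climatebert/distilroberta-base-climate-f",  # assumed identifier
)

# Like DistilRoBERTa, the model uses <mask> as its mask token.
predictions = fill_mask("Emissions trading is one policy tool to reduce <mask> emissions.")
for pred in predictions:
    print(f"{pred['token_str']:>15}  score={pred['score']:.3f}")
```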
### BibTeX entry and citation info

```bibtex
@article{wkbl2021,
  title={ClimateBERT: A Pretrained Language Model for Climate-Related Text},
  author={Webersinke, Nicolas and Kraus, Mathias and Bingler, Julia and Leippold, Markus},
  journal={arXiv preprint arXiv:2110.12010},
  year={2021}
}
```