---
language: en
license: apache-2.0
---

Using the [DistilRoBERTa](https://huggingface.co/distilroberta-base) model as a starting point, the ClimateBERT language model is additionally pretrained on a text corpus comprising climate-related research paper abstracts, corporate and general news, and reports from companies. The underlying methodology can be found in our [language model research paper](https://arxiv.org/abs/2110.12010).

### Climate performance model card

| Question | Answer |
|---|---|
| 1. Is the resulting model publicly available? | Yes |
| 2. How much time does the training of the final model take? | 8 hours |
| 3. How much time did all experiments take (incl. hyperparameter search)? | 288 hours |
| 4. What was the power consumption of GPU/CPU? | 0.7 kW |
| 5. At which geo location were the computations performed? | Germany |

### BibTeX entry and citation info

```bibtex
@article{wkbl2021,
  title={ClimateBERT: A Pretrained Language Model for Climate-Related Text},
  author={Webersinke, Nicolas and Kraus, Mathias and Bingler, Julia and Leippold, Markus},
  journal={arXiv preprint arXiv:2110.12010},
  year={2021}
}
```
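
### Example usage

As a minimal usage sketch, the model can be loaded with the Hugging Face `transformers` library and queried through a fill-mask pipeline, since it is a masked language model based on DistilRoBERTa. The repo id in the snippet is an assumption and should be replaced with the actual ClimateBERT model id on the Hugging Face Hub.

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

# Assumption: placeholder repo id; replace with the actual ClimateBERT
# model id on the Hugging Face Hub.
model_name = "climatebert/distilroberta-base-climate-f"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# DistilRoBERTa-style models use "<mask>" as the mask token.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("Climate change poses a major <mask> to coastal infrastructure."))
```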