Update README.md

Using the [DistilRoBERTa](https://huggingface.co/distilroberta-base) model as a starting point, the ClimateBERT Language Model is additionally pretrained on a text corpus comprising climate-related research paper abstracts, corporate and general news, and reports from companies. The underlying methodology can be found in our [language model research paper](https://arxiv.org/abs/2110.12010).
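As a minimal sketch of how such a pretrained checkpoint can be used for masked-language-model inference with the `transformers` library; the checkpoint name `climatebert/distilroberta-base-climate-f` is an assumption about where this model is published on the Hugging Face Hub, so substitute the actual repository name:

```python
# Minimal sketch: fill-mask inference with a DistilRoBERTa-based checkpoint.
# Assumption: the model is published on the Hugging Face Hub under
# "climatebert/distilroberta-base-climate-f" -- replace with the actual repo id.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="climatebert/distilroberta-base-climate-f",
)

# RoBERTa-style tokenizers use "<mask>" as the mask token.
for pred in fill_mask("Emissions trading is a market-based approach to reducing <mask>."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```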
### Climate performance model card

| Question | Answer |
|----------|--------|
| 1. Is the resulting model publicly available? | Yes |
| 2. How much time does the training of the final model take? | 8 hours |
| 3. How much time did all experiments take (incl. hyperparameter search)? | 288 hours |
| 4. What was the energy consumption (GPU/CPU)? | 0.7 kW |
| 5. At which geo location were the computations performed? | Germany |
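For context, and assuming the 0.7 kW figure is the average draw over the runs, the full set of experiments corresponds to roughly 0.7 kW × 288 h ≈ 202 kWh, while training the final model alone comes to about 0.7 kW × 8 h ≈ 5.6 kWh.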
### BibTeX entry and citation info
```bibtex
@article{webersinke2021climatebert,
  title={ClimateBERT: A Pretrained Language Model for Climate-Related Text},
  author={Webersinke, Nicolas and Kraus, Mathias and Bingler, Julia Anna and Leippold, Markus},
  journal={arXiv preprint arXiv:2110.12010},
  year={2021}
}
```