Update README.md
README.md CHANGED

```diff
@@ -9,9 +9,11 @@ license: apache-2.0
 
 This is the ClimateBERT language model based on the FULL-SELECT sample selection strategy.
 
-*Note: We generally recommend choosing this language model over those based on the other sample selection strategies (unless you have good reasons not to).*
+*Note: We generally recommend choosing this language model over those based on the other sample selection strategies (unless you have good reasons not to). This is also the only language model we will try to keep up to date.*
 
-Using the [DistilRoBERTa](https://huggingface.co/distilroberta-base) model as starting point, the ClimateBERT Language Model is additionally
+Using the [DistilRoBERTa](https://huggingface.co/distilroberta-base) model as starting point, the ClimateBERT Language Model is additionally pre-trained on a text corpus comprising climate-related research paper abstracts, corporate and general news, and reports from companies. The underlying methodology can be found in our [language model research paper](https://arxiv.org/abs/2110.12010).
+
+*Update September 2, 2022: Now additionally pre-trained on an even larger text corpus, comprising >2M paragraphs.*
 
 ## Climate performance card
 
```
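For readers of the updated model card, a short usage sketch may help. Everything below is an assumption rather than part of the diff: the Hub model ID `climatebert/distilroberta-base-climate-f` is the name I would expect for the FULL-SELECT checkpoint (verify it on the Hub), and `top_fill_mask` is a hypothetical helper wrapping the standard `transformers` fill-mask pipeline.

```python
# Hypothetical usage sketch for the FULL-SELECT ClimateBERT checkpoint.
# The Hub model ID below is an assumption, not taken from this README;
# check the model page on the Hugging Face Hub before relying on it.
MODEL_ID = "climatebert/distilroberta-base-climate-f"  # assumed Hub ID


def top_fill_mask(text: str, model_id: str = MODEL_ID):
    """Return fill-mask predictions for a single `<mask>` token in `text`.

    The import is deferred so this module loads even when `transformers`
    is not installed; calling the function downloads the model weights.
    """
    from transformers import pipeline  # lazy import; requires `transformers`

    fill = pipeline("fill-mask", model=model_id)
    return fill(text)


if __name__ == "__main__":
    # Each prediction dict carries the predicted token and its score.
    for pred in top_fill_mask("Sea level rise is driven by climate <mask>."):
        print(pred["token_str"], round(pred["score"], 3))
```

Since the checkpoint is a masked language model (a DistilRoBERTa variant), fill-mask is the natural smoke test; for downstream tasks you would instead fine-tune it with a task head.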