Migrate model card from transformers-repo
Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/iarfmoose/roberta-base-bulgarian/README.md
README.md
ADDED
---
language: bg
---

# RoBERTa-base-bulgarian

The RoBERTa model was originally introduced in [this paper](https://arxiv.org/abs/1907.11692). This is a version of [RoBERTa-base](https://huggingface.co/roberta-base) pretrained on Bulgarian text.
## Intended uses

This model can be used for cloze tasks (masked language modeling) or fine-tuned on other tasks in Bulgarian.
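A minimal usage sketch for the fill-mask case, assuming the checkpoint is loaded from the `iarfmoose/roberta-base-bulgarian` repo this card was migrated to (the example sentence is purely illustrative):

```python
from transformers import pipeline

# Fill-mask pipeline; the model ID is assumed from this repo's path.
fill_mask = pipeline("fill-mask", model="iarfmoose/roberta-base-bulgarian")

# RoBERTa-style models use "<mask>" as the mask token.
# The sentence means "The capital of Bulgaria is <mask>."
for prediction in fill_mask("Столицата на България е <mask>."):
    print(prediction["token_str"], prediction["score"])
```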
## Limitations and bias

The training data is unfiltered text from the internet and may contain all sorts of biases.
## Training data
|
19 |
+
|
20 |
+
This model was trained on the following data:
|
21 |
+
- [bg_dedup from OSCAR](https://oscar-corpus.com/)
|
22 |
+
- [Newscrawl 1 million sentences 2017 from Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download/bulgarian)
|
23 |
+
- [Wikipedia 1 million sentences 2016 from Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download/bulgarian)
|
24 |
+
|
25 |
+
## Training procedure
|
26 |
+
|
27 |
+
The model was pretrained using a masked language-modeling objective with dynamic masking as described [here](https://huggingface.co/roberta-base#preprocessing)
|
28 |
+
|
29 |
+
It was trained for 200k steps. The batch size was limited to 8 due to GPU memory limitations.
|
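The exact training script is not part of this card; the following is only an illustrative sketch of a dynamic-masking MLM setup with the `transformers` Trainer. The batch size of 8 and the 200k steps are taken from above; everything else (tokenizer/model initialization, output directory, dataset handling) is an assumption:

```python
from transformers import (
    DataCollatorForLanguageModeling,
    RobertaForMaskedLM,
    RobertaTokenizerFast,
    TrainingArguments,
)

# Placeholder initialization: the tokenizer/model setup actually used for this
# checkpoint is not documented in this card.
tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")

# Dynamic masking: a fresh random mask is drawn each time a batch is collated,
# with the standard 15% masking probability.
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

# Batch size 8 and 200k steps as stated above; everything else is a placeholder.
training_args = TrainingArguments(
    output_dir="roberta-base-bulgarian",
    per_device_train_batch_size=8,
    max_steps=200_000,
)

# `train_dataset` would be the tokenized Bulgarian corpus listed under
# "Training data"; with it in hand, training would run via transformers.Trainer:
# trainer = Trainer(model=model, args=training_args,
#                   data_collator=data_collator, train_dataset=train_dataset)
# trainer.train()
```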