sofiaoliveira committed Update README.md
Commit 68d075c · Parent(s): b406843

README.md CHANGED
@@ -13,7 +13,7 @@ metrics:
 - F1 Measure
 ---
 
-#
+# XLM-R large fine-tuned on Portuguese semantic role labeling
 
 ## Model description
 
@@ -56,15 +56,9 @@ To use the full SRL model (transformers portion + a decoding layer), refer to th
 
 This model does not include a Tensorflow version. This is because the "type_vocab_size" in this model was changed (from 1 to 2) and, therefore, it cannot be easily converted to Tensorflow.
 
-
-## Training data
-
-Pretrained weights were left identical to the original model [`xlm-roberta-large`](https://huggingface.co/xlm-roberta-large). A randomly initialized embeddings layer for "token_type_ids" was added.
-
-
 ## Training procedure
 
-The
+The model was trained on the PropBank.Br datasets, using 10-fold Cross-Validation. The 10 resulting models were tested on the folds as well as on a smaller opinion dataset "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
 
 ## Eval results
 
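The training procedure described in this commit (10-fold cross-validation, with one model trained per fold and each model tested on its held-out fold) can be sketched as below. This is a minimal stand-in using only the standard library; the `train_fn` and `eval_fn` callables are hypothetical placeholders, not the project's actual training or scoring code.

```python
import random

def ten_fold_indices(n_examples, n_folds=10, seed=0):
    """Shuffle example indices and split them into n_folds roughly equal folds."""
    idx = list(range(n_examples))
    random.Random(seed).shuffle(idx)
    return [idx[i::n_folds] for i in range(n_folds)]

def cross_validate(examples, train_fn, eval_fn, n_folds=10):
    """Train one model per fold and evaluate it on the held-out fold.

    Returns one score per fold, mirroring the commit's description of
    10 resulting models, each tested on its fold (and, separately, on
    an external set such as Buscapé).
    """
    folds = ten_fold_indices(len(examples), n_folds)
    scores = []
    for i, held_out in enumerate(folds):
        # Train on every fold except the i-th, evaluate on the i-th.
        train_idx = [j for k, fold in enumerate(folds) if k != i for j in fold]
        model = train_fn([examples[j] for j in train_idx])
        scores.append(eval_fn(model, [examples[j] for j in held_out]))
    return scores
```

In the real project each `train_fn` call would fine-tune a fresh copy of the model on the PropBank.Br training folds, and `eval_fn` would compute the F1 measure listed in the card's metrics.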