BramVanroy committed
Commit 2020cc8 · 1 parent: d0a42ca

Update README.md

Files changed (1): README.md (+9 -9)
README.md CHANGED
@@ -1,16 +1,16 @@
 ---
 base_model: facebook/mbart-large-cc25
-tags:
-- generated_from_trainer
-model-index:
-- name: en_es_nl+no_processing
-  results: []
+language:
+- en
+- nl
+- es
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
+# EN, ES and NL to AMR parsing (stratified)
 
-# en_es_nl+no_processing
+This version was trained on a subselection of the data. The AMR 3.0 corpus was translated into all the relevant languages. We then divided the data so
+that only a third of each language's dataset is seen during training (in total, the equivalent of one full AMR 3.0 corpus). In other words, all
+languages were undersampled for research purposes.
 
 This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the None dataset.
 It achieves the following results on the evaluation set:
@@ -84,4 +84,4 @@ The following hyperparameters were used during training:
 - Transformers 4.34.0.dev0
 - Pytorch 2.0.1+cu117
 - Datasets 2.14.2
-- Tokenizers 0.13.3
+- Tokenizers 0.13.3
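
The three-way split described in the new card text (a third of each language, adding up to one full AMR 3.0 corpus) can be sketched with the `datasets` library. This is a minimal illustration only, not the author's actual preprocessing: the dataset id, the one-config-per-language layout, and the seed are all assumptions.

```python
from datasets import load_dataset, concatenate_datasets

LANGS = ["en", "es", "nl"]

def third_of(lang: str, seed: int = 42):
    """Keep a random third of one language's portion of the corpus."""
    # Hypothetical dataset id/config; the translated AMR 3.0 data is not public.
    ds = load_dataset("some-user/amr30-translated", lang, split="train")
    return ds.shuffle(seed=seed).select(range(len(ds) // 3))

# A third of each of the three languages adds up to roughly one full
# AMR 3.0 corpus in size, with every language undersampled equally.
train = concatenate_datasets([third_of(lang) for lang in LANGS]).shuffle(seed=42)
```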
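
For completeness, inference with the fine-tuned checkpoint should follow the standard mBART seq2seq pattern. The repo id below is a hypothetical placeholder (the diff does not name the final repository), and the assumption that the model generates a linearized AMR graph from plain text is likewise not confirmed by the card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "BramVanroy/mbart-en-es-nl-to-amr"  # hypothetical placeholder

# mbart-large-cc25 expects mBART language codes such as en_XX, es_XX, nl_XX.
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="en_XX")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("The boy wants to go.", return_tensors="pt")
generated = model.generate(**inputs, max_length=512)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```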