abdoutony207 committed 97687f7 · 1 Parent(s): 530ccd8

update model card README.md

Files changed (1): README.md (+83, -0)

README.md ADDED
---
license: mit
tags:
- generated_from_trainer
datasets:
- opus100
metrics:
- bleu
model-index:
- name: mbart-large-cc25-en-ar-evaluated-en-to-ar-1000instancesopus-leaningRate2e-05-batchSize2
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: opus100
      type: opus100
      args: ar-en
    metrics:
    - name: Bleu
      type: bleu
      value: 10.5645
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# mbart-large-cc25-en-ar-evaluated-en-to-ar-1000instancesopus-leaningRate2e-05-batchSize2

This model is a fine-tuned version of [akhooli/mbart-large-cc25-en-ar](https://huggingface.co/akhooli/mbart-large-cc25-en-ar) on the opus100 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4673
- Bleu: 10.5645
- Meteor: 0.0783
- Gen Len: 10.23

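A minimal usage sketch for English-to-Arabic translation. The repo id below is inferred from this card's title and the committer's namespace (an assumption), and `ar_AR` is the Arabic language code in the mBART-25 vocabulary; whether the fine-tuned config already sets the decoder start token is not stated on the card.

```python
# Hedged sketch: repo id inferred from the card title, not confirmed.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "abdoutony207/mbart-large-cc25-en-ar-evaluated-en-to-ar-1000instancesopus-leaningRate2e-05-batchSize2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("How are you today?", return_tensors="pt")
# mBART-cc25 checkpoints conventionally start decoding with the target
# language code; ar_AR is Arabic in the mBART-25 vocabulary.
generated = model.generate(
    **inputs,
    num_beams=4,
    max_length=64,
    decoder_start_token_id=tokenizer.lang_code_to_id["ar_AR"],
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```
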
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
- mixed_precision_training: Native AMP

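For reference, a sketch of how these settings map onto `Seq2SeqTrainingArguments` in Transformers 4.18. The output directory, evaluation cadence, and generation flag are assumptions not stated above; the Adam betas and epsilon listed are the optimizer defaults.

```python
# Illustrative mapping of the listed hyperparameters onto the Trainer API.
# output_dir, evaluation cadence, and predict_with_generate are assumptions;
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./outputs",        # assumption: not stated on the card
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=11,
    fp16=True,                     # "Native AMP" mixed precision
    evaluation_strategy="steps",   # the results table reports eval every 100 steps
    eval_steps=100,
    predict_with_generate=True,    # needed to compute Bleu/Meteor at eval time
)
```
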
### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu    | Meteor | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|
| 8.1731        | 0.25  | 100  | 2.8417          | 0.9599  | 0.028  | 230.885 |
| 0.6743        | 0.5   | 200  | 0.4726          | 6.4055  | 0.0692 | 14.81   |
| 0.3028        | 0.75  | 300  | 0.4572          | 6.7544  | 0.0822 | 23.92   |
| 0.2555        | 1.0   | 400  | 0.4172          | 8.4078  | 0.0742 | 13.655  |
| 0.1644        | 1.25  | 500  | 0.4236          | 9.284   | 0.071  | 13.03   |
| 0.1916        | 1.5   | 600  | 0.4222          | 4.8976  | 0.0779 | 32.225  |
| 0.2011        | 1.75  | 700  | 0.4305          | 7.6909  | 0.0738 | 16.675  |
| 0.1612        | 2.0   | 800  | 0.4416          | 10.8622 | 0.0855 | 10.91   |
| 0.116         | 2.25  | 900  | 0.4673          | 10.5645 | 0.0783 | 10.23   |

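The card does not state how the Bleu and Meteor columns were computed; a common setup with the Datasets version listed below is `load_metric` with `sacrebleu` and `meteor`, as in this hedged sketch (the example strings are illustrative).

```python
# Hedged sketch: the metric configuration is an assumption, not taken from the card.
from datasets import load_metric

bleu = load_metric("sacrebleu")
meteor = load_metric("meteor")

predictions = ["مرحبا بالعالم"]        # decoded model outputs (illustrative)
references = [["مرحبا أيها العالم"]]   # sacrebleu expects one list of references per prediction

print(bleu.compute(predictions=predictions, references=references)["score"])
print(meteor.compute(predictions=predictions,
                     references=[refs[0] for refs in references])["meteor"])
```
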

### Framework versions

- Transformers 4.18.0
- PyTorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1