cecilemacaire committed in 330102a (parent: 1b853c5): Update README.md

README.md CHANGED
````diff
@@ -21,6 +21,8 @@ The model is used only for **inference**.
 
 ## Training details
 
+The model was trained with [Fairseq](https://github.com/facebookresearch/fairseq/blob/main/examples/translation/README.md).
+
 ### Datasets
 
 The [Propicto-commonvoice dataset](https://www.ortolang.fr/market/corpora/propicto) is used, which was created from the CommonVoice v.15.0 corpus.
@@ -33,7 +35,7 @@ This dataset was built with the method presented in the research paper titled ["
 
 ### Parameters
 
-
+These are the arguments used in the training pipeline:
 
 ```bash
 fairseq-train \
@@ -72,7 +74,7 @@ Comparison to other translation models:
 
 ### Environmental Impact
 
-
+Training was performed on a single Nvidia V100 GPU with 32 GB of memory and took around 2 hours in total.
 
 ## Using t2p-nmt-commonvoice model
 
````
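The `fairseq-train` command in the diff is truncated at the hunk boundary, so the actual arguments are not visible here. For context only, a typical invocation for a transformer translation model (following the Fairseq translation example linked above) looks like the sketch below; the data path, architecture, and every hyperparameter are illustrative placeholders, not the settings used for t2p-nmt-commonvoice.

```shell
# Hypothetical sketch of a fairseq-train call for an NMT model.
# All paths and hyperparameter values are placeholders, NOT the
# actual configuration of t2p-nmt-commonvoice.
fairseq-train data-bin/example-corpus \
    --arch transformer \
    --optimizer adam --adam-betas '(0.9, 0.98)' \
    --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 \
    --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
    --max-tokens 4096 \
    --save-dir checkpoints/example-run
```

The full parameter list for the released model should be taken from the README itself once the complete code block is visible there.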