Michael Feil committed
Commit 9a13450
Parent(s): 88ac285
initial commit m2m100_1.2B to ctranslate2:v3.13.0
Files changed:
- README.md +164 -0
- README.mde +218 -0
- config.json +8 -0
- generation_config.json +11 -0
- model.bin +3 -0
- sentencepiece.bpe.model +3 -0
- shared_vocabulary.txt +0 -0
- special_tokens_map.json +1 -0
- tokenizer_config.json +1 -0
- vocab.json +0 -0
README.md
ADDED
@@ -0,0 +1,164 @@
---
language:
- multilingual
- af
- am
- ar
- ast
- az
- ba
- be
- bg
- bn
- br
- bs
- ca
- ceb
- cs
- cy
- da
- de
- el
- en
- es
- et
- fa
- ff
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- ht
- hu
- hy
- id
- ig
- ilo
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- lb
- lg
- ln
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- ns
- oc
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- so
- sq
- sr
- ss
- su
- sv
- sw
- ta
- th
- tl
- tn
- tr
- uk
- ur
- uz
- vi
- wo
- xh
- yi
- yo
- zh
- zu
license: mit
tags:
---

# M2M100 1.2B

M2M100 is a multilingual encoder-decoder (seq-to-seq) model trained for Many-to-Many multilingual translation.
It was introduced in this [paper](https://arxiv.org/abs/2010.11125) and first released in [this](https://github.com/pytorch/fairseq/tree/master/examples/m2m_100) repository.

The model can directly translate between the 9,900 directions of 100 languages.
To translate into a target language, the target language id is forced as the first generated token; to do this, pass the `forced_bos_token_id` parameter to the `generate` method.

*Note: `M2M100Tokenizer` depends on `sentencepiece`, so make sure to install it before running the example.*

To install `sentencepiece`, run `pip install sentencepiece`.

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"
chinese_text = "生活就像一盒巧克力。"

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_1.2B")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_1.2B")

# translate Hindi to French
tokenizer.src_lang = "hi"
encoded_hi = tokenizer(hi_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("fr"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "La vie est comme une boîte de chocolat."

# translate Chinese to English
tokenizer.src_lang = "zh"
encoded_zh = tokenizer(chinese_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "Life is like a box of chocolate."
```

See the [model hub](https://huggingface.co/models?filter=m2m_100) to look for more fine-tuned versions.

## Languages covered
Afrikaans (af), Amharic (am), Arabic (ar), Asturian (ast), Azerbaijani (az), Bashkir (ba), Belarusian (be), Bulgarian (bg), Bengali (bn), Breton (br), Bosnian (bs), Catalan; Valencian (ca), Cebuano (ceb), Czech (cs), Welsh (cy), Danish (da), German (de), Greek (el), English (en), Spanish (es), Estonian (et), Persian (fa), Fulah (ff), Finnish (fi), French (fr), Western Frisian (fy), Irish (ga), Gaelic; Scottish Gaelic (gd), Galician (gl), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Croatian (hr), Haitian; Haitian Creole (ht), Hungarian (hu), Armenian (hy), Indonesian (id), Igbo (ig), Iloko (ilo), Icelandic (is), Italian (it), Japanese (ja), Javanese (jv), Georgian (ka), Kazakh (kk), Central Khmer (km), Kannada (kn), Korean (ko), Luxembourgish; Letzeburgesch (lb), Ganda (lg), Lingala (ln), Lao (lo), Lithuanian (lt), Latvian (lv), Malagasy (mg), Macedonian (mk), Malayalam (ml), Mongolian (mn), Marathi (mr), Malay (ms), Burmese (my), Nepali (ne), Dutch; Flemish (nl), Norwegian (no), Northern Sotho (ns), Occitan (post 1500) (oc), Oriya (or), Panjabi; Punjabi (pa), Polish (pl), Pushto; Pashto (ps), Portuguese (pt), Romanian; Moldavian; Moldovan (ro), Russian (ru), Sindhi (sd), Sinhala; Sinhalese (si), Slovak (sk), Slovenian (sl), Somali (so), Albanian (sq), Serbian (sr), Swati (ss), Sundanese (su), Swedish (sv), Swahili (sw), Tamil (ta), Thai (th), Tagalog (tl), Tswana (tn), Turkish (tr), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Wolof (wo), Xhosa (xh), Yiddish (yi), Yoruba (yo), Chinese (zh), Zulu (zu)

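The same codes can be read off the tokenizer at runtime; a minimal sketch, assuming the `facebook/m2m100_1.2B` checkpoint is reachable (the attributes used here also appear in the CTranslate2 example in README.mde):

```python
from transformers import M2M100Tokenizer

tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_1.2B")

# Language codes map to "__xx__" special tokens, e.g. "de" -> "__de__".
print(sorted(tokenizer.lang_code_to_token)[:5])  # ['af', 'am', 'ar', 'ast', 'az']
print(tokenizer.get_lang_id("fr"))               # vocabulary id forced as BOS above
```
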
## BibTeX entry and citation info
```
@misc{fan2020englishcentric,
      title={Beyond English-Centric Multilingual Machine Translation},
      author={Angela Fan and Shruti Bhosale and Holger Schwenk and Zhiyi Ma and Ahmed El-Kishky and Siddharth Goyal and Mandeep Baines and Onur Celebi and Guillaume Wenzek and Vishrav Chaudhary and Naman Goyal and Tom Birch and Vitaliy Liptchinsky and Sergey Edunov and Edouard Grave and Michael Auli and Armand Joulin},
      year={2020},
      eprint={2010.11125},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
README.mde
ADDED
@@ -0,0 +1,218 @@
---
language:
- multilingual
- af
- am
- ar
- ast
- az
- ba
- be
- bg
- bn
- br
- bs
- ca
- ceb
- cs
- cy
- da
- de
- el
- en
- es
- et
- fa
- ff
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- ht
- hu
- hy
- id
- ig
- ilo
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- lb
- lg
- ln
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- ns
- oc
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- so
- sq
- sr
- ss
- su
- sv
- sw
- ta
- th
- tl
- tn
- tr
- uk
- ur
- uz
- vi
- wo
- xh
- yi
- yo
- zh
- zu
license: mit
tags:
- ctranslate2
---

Converted on 5/13/23 to CTranslate2:
```bash
export ORG="facebook"
export NAME="m2m100_PARAMS"
ct2-transformers-converter --model "$ORG/$NAME" --copy_files .gitattributes README.md generation_config.json sentencepiece.bpe.model special_tokens_map.json tokenizer_config.json vocab.json --quantization float16
```
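The same conversion can also be driven from Python through CTranslate2's converter API; a minimal sketch under the same assumptions (`ctranslate2>=3.13.0`, the 1.2B checkpoint, an illustrative output directory name), not the exact command used for this checkpoint:

```python
from ctranslate2.converters import TransformersConverter

# Convert the Transformers checkpoint into a CTranslate2 model directory,
# quantizing the weights to float16 as in the CLI invocation above.
converter = TransformersConverter(
    "facebook/m2m100_1.2B",
    copy_files=[
        "generation_config.json", "sentencepiece.bpe.model",
        "special_tokens_map.json", "tokenizer_config.json", "vocab.json",
    ],
)
converter.convert("ct2fast-m2m100_1.2B", quantization="float16")
```
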
Fast inference with CTranslate2: speed up inference by 2x-8x using int8 inference in C++. This is a quantized version of facebook/m2m100_1.2B.

```python
import ctranslate2
import transformers

translator = ctranslate2.Translator("m2m100_PARAMS")
tokenizer = transformers.AutoTokenizer.from_pretrained("facebook/m2m100_PARAMS")
tokenizer.src_lang = "en"

source = tokenizer.convert_ids_to_tokens(tokenizer.encode("Hello world!"))
target_prefix = [tokenizer.lang_code_to_token["de"]]
results = translator.translate_batch([source], target_prefix=[target_prefix])
target = results[0].hypotheses[0][1:]

print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
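
The `Translator` above loads with default settings; device and quantized compute type can be chosen at load time. A short sketch reusing the placeholder path from above (the concrete values mirror the compute_type notes below):

```python
import ctranslate2

# GPU: int8 weights with float16 compute; CPU: plain int8.
translator_gpu = ctranslate2.Translator(
    "m2m100_PARAMS", device="cuda", compute_type="int8_float16"
)
translator_cpu = ctranslate2.Translator(
    "m2m100_PARAMS", device="cpu", compute_type="int8"
)
```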

Alternative: `pip install hf_hub_ctranslate2>=1.0.0 ctranslate2>=3.13.0`

The checkpoint is compatible with ctranslate2 and hf-hub-ctranslate2:

- `compute_type="int8_float16"` for `device="cuda"`
- `compute_type="int8"` for `device="cpu"`

```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer

model_name = "michaelfeil/ct2fast-m2m100_PARAMS"
model = TranslatorCT2fromHfHub(
    # load in int8 on CUDA
    model_name_or_path=model_name,
    device="cuda",
    compute_type="int8_float16"
)
model.tokenizer = AutoTokenizer.from_pretrained("facebook/m2m100_PARAMS")
outputs = model.generate(
    text=["Translate to german: How are you doing?"],
    min_decoding_length=24,
    max_decoding_length=32,
    max_input_length=512,
    beam_size=5
)
print(outputs)
```
# Original: M2M100 418M

M2M100 is a multilingual encoder-decoder (seq-to-seq) model trained for Many-to-Many multilingual translation.
It was introduced in this [paper](https://arxiv.org/abs/2010.11125) and first released in [this](https://github.com/pytorch/fairseq/tree/master/examples/m2m_100) repository.

The model can directly translate between the 9,900 directions of 100 languages.
To translate into a target language, the target language id is forced as the first generated token; to do this, pass the `forced_bos_token_id` parameter to the `generate` method.

*Note: `M2M100Tokenizer` depends on `sentencepiece`, so make sure to install it before running the example.*

To install `sentencepiece`, run `pip install sentencepiece`.

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"
chinese_text = "生活就像一盒巧克力。"

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

# translate Hindi to French
tokenizer.src_lang = "hi"
encoded_hi = tokenizer(hi_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("fr"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "La vie est comme une boîte de chocolat."

# translate Chinese to English
tokenizer.src_lang = "zh"
encoded_zh = tokenizer(chinese_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "Life is like a box of chocolate."
```

See the [model hub](https://huggingface.co/models?filter=m2m_100) to look for more fine-tuned versions.

## Languages covered
Afrikaans (af), Amharic (am), Arabic (ar), Asturian (ast), Azerbaijani (az), Bashkir (ba), Belarusian (be), Bulgarian (bg), Bengali (bn), Breton (br), Bosnian (bs), Catalan; Valencian (ca), Cebuano (ceb), Czech (cs), Welsh (cy), Danish (da), German (de), Greek (el), English (en), Spanish (es), Estonian (et), Persian (fa), Fulah (ff), Finnish (fi), French (fr), Western Frisian (fy), Irish (ga), Gaelic; Scottish Gaelic (gd), Galician (gl), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Croatian (hr), Haitian; Haitian Creole (ht), Hungarian (hu), Armenian (hy), Indonesian (id), Igbo (ig), Iloko (ilo), Icelandic (is), Italian (it), Japanese (ja), Javanese (jv), Georgian (ka), Kazakh (kk), Central Khmer (km), Kannada (kn), Korean (ko), Luxembourgish; Letzeburgesch (lb), Ganda (lg), Lingala (ln), Lao (lo), Lithuanian (lt), Latvian (lv), Malagasy (mg), Macedonian (mk), Malayalam (ml), Mongolian (mn), Marathi (mr), Malay (ms), Burmese (my), Nepali (ne), Dutch; Flemish (nl), Norwegian (no), Northern Sotho (ns), Occitan (post 1500) (oc), Oriya (or), Panjabi; Punjabi (pa), Polish (pl), Pushto; Pashto (ps), Portuguese (pt), Romanian; Moldavian; Moldovan (ro), Russian (ru), Sindhi (sd), Sinhala; Sinhalese (si), Slovak (sk), Slovenian (sl), Somali (so), Albanian (sq), Serbian (sr), Swati (ss), Sundanese (su), Swedish (sv), Swahili (sw), Tamil (ta), Thai (th), Tagalog (tl), Tswana (tn), Turkish (tr), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Wolof (wo), Xhosa (xh), Yiddish (yi), Yoruba (yo), Chinese (zh), Zulu (zu)

## BibTeX entry and citation info
```
@misc{fan2020englishcentric,
      title={Beyond English-Centric Multilingual Machine Translation},
      author={Angela Fan and Shruti Bhosale and Holger Schwenk and Zhiyi Ma and Ahmed El-Kishky and Siddharth Goyal and Mandeep Baines and Onur Celebi and Guillaume Wenzek and Vishrav Chaudhary and Naman Goyal and Tom Birch and Vitaliy Liptchinsky and Sergey Edunov and Edouard Grave and Michael Auli and Armand Joulin},
      year={2020},
      eprint={2010.11125},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
config.json
ADDED
@@ -0,0 +1,8 @@
{
  "add_source_bos": false,
  "add_source_eos": false,
  "bos_token": "<s>",
  "decoder_start_token": "</s>",
  "eos_token": "</s>",
  "unk_token": "<unk>"
}
generation_config.json
ADDED
@@ -0,0 +1,11 @@
{
  "_from_model_config": true,
  "bos_token_id": 0,
  "decoder_start_token_id": 2,
  "early_stopping": true,
  "eos_token_id": 2,
  "max_length": 200,
  "num_beams": 5,
  "pad_token_id": 1,
  "transformers_version": "4.27.0.dev0"
}
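These Transformers generation defaults map roughly onto CTranslate2 decoding options; a hedged sketch of an equivalent `translate_batch` call, reusing the placeholder paths from the README.mde example above:

```python
import ctranslate2
import transformers

translator = ctranslate2.Translator("m2m100_PARAMS")
tokenizer = transformers.AutoTokenizer.from_pretrained("facebook/m2m100_PARAMS")
tokenizer.src_lang = "en"

source = tokenizer.convert_ids_to_tokens(tokenizer.encode("Hello world!"))
results = translator.translate_batch(
    [source],
    target_prefix=[[tokenizer.lang_code_to_token["de"]]],
    beam_size=5,              # generation_config.json: num_beams
    max_decoding_length=200,  # generation_config.json: max_length
)
```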
model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:773500ab6fdf9abc29bb5fb0e601df2fc21b33b3e45863d04f206e59b89af6a7
size 2480836892
sentencepiece.bpe.model
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d8f7c76ed2a5e0822be39f0a4f95a55eb19c78f4593ce609e2edbc2aea4d380a
size 2423393
shared_vocabulary.txt
ADDED
The diff for this file is too large to render.
See raw diff
special_tokens_map.json
ADDED
@@ -0,0 +1 @@
{"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "pad_token": "<pad>", "additional_special_tokens": ["__af__", "__am__", "__ar__", "__ast__", "__az__", "__ba__", "__be__", "__bg__", "__bn__", "__br__", "__bs__", "__ca__", "__ceb__", "__cs__", "__cy__", "__da__", "__de__", "__el__", "__en__", "__es__", "__et__", "__fa__", "__ff__", "__fi__", "__fr__", "__fy__", "__ga__", "__gd__", "__gl__", "__gu__", "__ha__", "__he__", "__hi__", "__hr__", "__ht__", "__hu__", "__hy__", "__id__", "__ig__", "__ilo__", "__is__", "__it__", "__ja__", "__jv__", "__ka__", "__kk__", "__km__", "__kn__", "__ko__", "__lb__", "__lg__", "__ln__", "__lo__", "__lt__", "__lv__", "__mg__", "__mk__", "__ml__", "__mn__", "__mr__", "__ms__", "__my__", "__ne__", "__nl__", "__no__", "__ns__", "__oc__", "__or__", "__pa__", "__pl__", "__ps__", "__pt__", "__ro__", "__ru__", "__sd__", "__si__", "__sk__", "__sl__", "__so__", "__sq__", "__sr__", "__ss__", "__su__", "__sv__", "__sw__", "__ta__", "__th__", "__tl__", "__tn__", "__tr__", "__uk__", "__ur__", "__uz__", "__vi__", "__wo__", "__xh__", "__yi__", "__yo__", "__zh__", "__zu__"]}
tokenizer_config.json
ADDED
@@ -0,0 +1 @@
{"src_lang": null, "tgt_lang": null, "bos_token": "<s>", "eos_token": "</s>", "sep_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>", "special_tokens_map_file": "m2m_100_1.2B_v2/special_tokens_map.json", "tokenizer_file": null, "name_or_path": "m2m_100_1.2B_v2"}
vocab.json
ADDED
The diff for this file is too large to render.
See raw diff