---
tags:
- translation
- torch==1.8.0
widget:
- text: Inference Unavailable
language:
- th
- zh
---
### marianmt-th-zh_cn
* source languages: th
* target languages: zh_cn
* dataset:
* model: transformer-align
* pre-processing: normalization + SentencePiece
* test set scores: 15.53
## Training
Training scripts from [LalitaDeelert/NLP-ZH_TH-Project](https://github.com/LalitaDeelert/NLP-ZH_TH-Project). Experiments tracked at [cstorm125/marianmt-th-zh_cn](https://wandb.ai/cstorm125/marianmt-th-zh_cn).
```
export WANDB_PROJECT=marianmt-th-zh_cn
python train_model.py --input_fname ../data/v1/Train.csv \
    --output_dir ../models/marianmt-th-zh_cn \
    --source_lang th --target_lang zh \
    --metric_tokenize zh --fp16
```
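`train_model.py` itself lives in the linked repo; purely as orientation, a `Seq2SeqTrainer`-based fine-tuning loop along these lines would cover the same steps. Everything specific below (the base checkpoint, the CSV column names `th`/`zh`, the hyperparameters, and the use of the `datasets` library) is an assumption, not the project's actual code.
```
# Hypothetical sketch only -- the real logic is in train_model.py in the
# linked repo. Base checkpoint, CSV column names, and hyperparameters
# are assumptions.
from datasets import load_dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

base_ckpt = "your-base-marianmt-checkpoint"  # placeholder, not a real model id
tokenizer = AutoTokenizer.from_pretrained(base_ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(base_ckpt)

raw = load_dataset("csv", data_files={"train": "../data/v1/Train.csv"})

def preprocess(batch):
    # Tokenize Thai source sentences and Chinese target sentences
    features = tokenizer(batch["th"], truncation=True, max_length=128)
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(batch["zh"], truncation=True, max_length=128)
    features["labels"] = labels["input_ids"]
    return features

train_ds = raw["train"].map(preprocess, batched=True)

args = Seq2SeqTrainingArguments(
    output_dir="../models/marianmt-th-zh_cn",
    per_device_train_batch_size=32,
    num_train_epochs=3,
    fp16=True,  # mirrors the --fp16 flag in the command above
)
Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
).train()
```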
## Usage
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the tokenizer and model; .cpu() keeps inference on CPU
tokenizer = AutoTokenizer.from_pretrained("Lalita/marianmt-zh_cn-th")
model = AutoModelForSeq2SeqLM.from_pretrained("Lalita/marianmt-zh_cn-th").cpu()

src_text = [
    'ฉันรักคุณ',        # "I love you"
    'ฉันอยากกินข้าว',   # "I want to eat"
]

# Tokenize the batch and generate translations
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
print([tokenizer.decode(t, skip_special_tokens=True) for t in translated])
# ['我爱你', '我想吃饭。']
```
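Decoding behaviour can be tuned through standard `generate()` arguments. A minimal, self-contained variation of the snippet above with beam search and an explicit length cap (the particular `num_beams` and `max_length` values here are illustrative, not tuned for this checkpoint):
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Lalita/marianmt-zh_cn-th")
model = AutoModelForSeq2SeqLM.from_pretrained("Lalita/marianmt-zh_cn-th").cpu()

# Beam search with a length cap instead of greedy decoding
batch = tokenizer(["ฉันอยากกินข้าว"], return_tensors="pt", padding=True)
translated = model.generate(**batch, num_beams=4, max_length=64)
print([tokenizer.decode(t, skip_special_tokens=True) for t in translated])
```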
## Requirements
```
transformers==4.6.0
torch==1.8.0
``` |
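Assuming a standard pip environment, the pinned versions can be installed directly (pick the torch wheel matching your CUDA setup if you want GPU inference):
```
pip install transformers==4.6.0 torch==1.8.0
```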