Update README.md
README.md
CHANGED
@@ -39,26 +39,23 @@ The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tence
 python3 preprocess.py --corpus_path corpora/ancient_chinese.txt \
                       --vocab_path models/google_zh_vocab.txt \
                       --dataset_path ancient_chinese_dataset.pt --processes_num 16 \
-                      --seq_length 320 --target lm
+                      --seq_length 320 --data_processor lm
 ```
 
 ```
 python3 pretrain.py --dataset_path ancient_chinese_dataset.pt \
                     --vocab_path models/google_zh_vocab.txt \
                     --config_path models/bert_base_config.json \
-                    --output_model_path models/ \
+                    --output_model_path models/ancient_chinese_gpt2_model.bin \
                     --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                     --total_steps 500000 --save_checkpoint_steps 100000 --report_steps 10000 \
-                    --learning_rate 5e-4 --batch_size 32 \
-                    --embedding word_pos --remove_embedding_layernorm \
-                    --encoder transformer --mask causal --layernorm_positioning pre \
-                    --target lm --tie_weights
+                    --learning_rate 5e-4 --batch_size 32
 ```
 
 Finally, we convert the pre-trained model into Huggingface's format:
 
 ```
-python3 scripts/convert_gpt2_from_uer_to_huggingface.py --input_model_path \
+python3 scripts/convert_gpt2_from_uer_to_huggingface.py --input_model_path ancient_chinese_gpt2_model.bin-500000 \
                                                         --output_model_path pytorch_model.bin \
                                                         --layers_num 12
 ```
@@ -79,6 +76,4 @@ python3 scripts/convert_gpt2_from_uer_to_huggingface.py --input_model_path ancie
 pages={241},
 year={2019}
 }
-
-Hu Renfen, Li Shen, Zhu Yuchen. Research on Ancient Chinese Knowledge Representation and Automatic Sentence Segmentation Based on Deep Language Models [C]. The 18th China National Conference on Computational Linguistics (CCL 2019).
 ```
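For context on the `preprocess.py` step in the diff above: UER-py reads the raw corpus from `--corpus_path` and serializes it into the dataset file given by `--dataset_path`. Below is a minimal sketch of producing such a corpus file; it assumes the lm data processor's plain-text, one-document-per-line input format, and the sample sentences are invented:

```python
import os

# Illustrative toy corpus for the preprocessing command above. Assumes
# UER-py's lm data processor treats each line of the plain-text corpus
# as one document; the two sample documents are invented.
os.makedirs("corpora", exist_ok=True)
docs = [
    "子曰:学而时习之,不亦说乎?",
    "天下大势,分久必合,合久必分。",
]
with open("corpora/ancient_chinese.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(docs) + "\n")
```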
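The `pretrain.py` flags also fix the scale of the run. Assuming `--batch_size` is per GPU process (UER-py launches one data-parallel process per rank), each step consumes 8 × 32 × 320 = 81,920 tokens, so 500,000 steps amount to roughly 41B tokens:

```python
# Back-of-the-envelope scale of the pretraining run, derived only from
# the flags above; assumes --batch_size is per GPU process.
world_size = 8         # --world_size 8
batch_size = 32        # --batch_size 32 (per process, assumed)
seq_length = 320       # --seq_length 320
total_steps = 500_000  # --total_steps 500000

tokens_per_step = world_size * batch_size * seq_length
print(f"tokens per step: {tokens_per_step:,}")                      # 81,920
print(f"total tokens: {tokens_per_step * total_steps / 1e9:.2f}B")  # 40.96B
```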
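After the conversion step, the resulting `pytorch_model.bin` can be loaded with Hugging Face Transformers. A minimal usage sketch, assuming the weights sit in a local directory `./ancient-gpt2/` together with a 12-layer GPT-2 `config.json` (matching `--layers_num 12`) and the vocabulary saved as `vocab.txt`; the directory name and prompt are ours, not part of the README:

```python
from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline

# Assumed layout of ./ancient-gpt2/: pytorch_model.bin (converted above),
# a GPT-2 config.json with 12 layers, and google_zh_vocab.txt as vocab.txt.
# UER's Chinese GPT-2 models use a BERT-style character vocabulary, hence
# BertTokenizer rather than GPT2Tokenizer.
tokenizer = BertTokenizer.from_pretrained("./ancient-gpt2")
model = GPT2LMHeadModel.from_pretrained("./ancient-gpt2")

generator = TextGenerationPipeline(model, tokenizer)
print(generator("当是时", max_length=100, do_sample=True))
```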