The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train for 100,000 steps with a sequence length of 512 on the basis of the pre-trained model [gpt2-base-chinese-cluecorpussmall](https://huggingface.co/uer/gpt2-base-chinese-cluecorpussmall).

```
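# Build the training dataset: pack the raw lyric corpus into
# 512-token sequences for the language-model (lm) target.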
python3 preprocess.py --corpus_path corpora/lyric.txt \
                      --vocab_path models/google_zh_vocab.txt \
                      --dataset_path lyric_dataset.pt --processes_num 32 \
                      --seq_length 512 --target lm
```

```
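# Pre-train for 100,000 steps on 8 GPUs, starting from the
# CLUECorpusSmall GPT-2 checkpoint; a checkpoint is saved every 10,000 steps.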
python3 pretrain.py --dataset_path lyric_dataset.pt \
                    --pretrained_model_path models/cluecorpussmall_gpt2_seq1024_model.bin-250000 \
                    --vocab_path models/google_zh_vocab.txt \
                    --config_path models/gpt2/config.json \
                    --output_model_path models/lyric_gpt2_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 100000 --save_checkpoint_steps 10000 --report_steps 5000 \
                    --learning_rate 5e-5 --batch_size 64 \
                    --embedding word_pos --remove_embedding_layernorm \
                    --encoder transformer --mask causal --layernorm_positioning pre \
                    --target lm --tie_weights
```

Finally, we convert the pre-trained model into Hugging Face's format:

```
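# Convert the final UER-py checkpoint (saved at step 100,000) into a
# Hugging Face pytorch_model.bin; the model has 12 transformer layers.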
python3 scripts/convert_gpt2_from_uer_to_huggingface.py --input_model_path lyric_gpt2_model.bin-100000 \
                                                        --output_model_path pytorch_model.bin \
                                                        --layers_num 12
```
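Once converted, the checkpoint can be loaded like any other GPT-2 model in `transformers`. The snippet below is a minimal sketch rather than part of the original pipeline: it assumes `pytorch_model.bin` has been placed in a local directory (here `gpt2-chinese-lyric/`, a hypothetical name) together with the matching `config.json` and the vocabulary saved as `vocab.txt`. Since UER's Chinese GPT-2 models use a BERT-style vocabulary, `BertTokenizer` is used rather than `GPT2Tokenizer`.

```
from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline

# Load the converted checkpoint from a local directory (hypothetical path).
tokenizer = BertTokenizer.from_pretrained("./gpt2-chinese-lyric")
model = GPT2LMHeadModel.from_pretrained("./gpt2-chinese-lyric")

# Generate a lyric continuation from a short prompt.
text_generator = TextGenerationPipeline(model, tokenizer)
print(text_generator("最美的不是下雨天", max_length=100, do_sample=True))
```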