---
license: apache-2.0
datasets:
- databricks/databricks-dolly-15k
- lucasmccabe-lmi/CodeAlpaca-20k
---
# Instruct_Yi-6B_Dolly15K
Fine-tuned from Yi-6B using the Dolly15k dataset, split 90% for training and 10% for validation. Trained for 2.0 epochs with LoRA and a 2048-token context window. Compared with https://huggingface.co/HenryJJ/Instruct_Yi-6B_Dolly15K, this model adds the CodeAlpaca_20K dataset to improve coding ability.
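As an illustration only, the dataset mixing and 90/10 split described above could look roughly like the sketch below. The authoritative preprocessing is in the training script linked at the bottom of this card; the CodeAlpaca column names are assumptions.

```python
# Rough sketch of the data preparation described above — not the actual
# preprocessing, which lives in the training script linked below.
from datasets import load_dataset, concatenate_datasets

dolly = load_dataset("databricks/databricks-dolly-15k", split="train")
code_alpaca = load_dataset("lucasmccabe-lmi/CodeAlpaca-20k", split="train")

# Assumed column names: Dolly uses instruction/context/response(/category),
# CodeAlpaca uses instruction/input/output.
dolly = dolly.remove_columns(["category"])
code_alpaca = code_alpaca.rename_columns({"input": "context", "output": "response"})

mixed = concatenate_datasets([dolly, code_alpaca])
split = mixed.train_test_split(test_size=0.1, seed=42)  # 90% train / 10% validation
train_ds, val_ds = split["train"], split["test"]
```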
# Model Details
* **Trained by**: HenryJJ.
* **Model type**: **Instruct_Yi-6B_Dolly15K** is an auto-regressive language model based on the Llama 2 transformer architecture.
* **Language(s)**: English
* **License**: apache-2.0
# Prompting
## Prompt Template With Context
`<|startoftext|>[INST]{instruction} {context}[/INST]{response}<|endoftext|>`
```
<|startoftext|>[INST]
Write a 10-line poem about a given topic
The topic is about racecars
[/INST]
```
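For illustration, a hypothetical helper that renders this template; the special tokens are copied from the template above, and the function itself is not part of the released code.

```python
# Hypothetical helper for the with-context template shown above.
def build_prompt(instruction: str, context: str = "") -> str:
    body = f"{instruction} {context}".strip()
    return f"<|startoftext|>[INST]{body}[/INST]"

prompt = build_prompt(
    "Write a 10-line poem about a given topic",
    "The topic is about racecars",
)
```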
## Prompt Template Without Context
`<|startoftext|>[INST]{instruction}[/INST]{response}<|endoftext|>`
```
<|startoftext|>[INST]
Who was the second president of the United States?
[/INST]
```
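A minimal inference sketch with transformers follows, assuming the standard AutoModel API; the model id is a placeholder, so use the repo id shown on this page.

```python
# Minimal inference sketch. The model id is a placeholder — substitute the id
# of this repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HenryJJ/Instruct_Yi-6B_Dolly15K"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "<|startoftext|>[INST]\nWho was the second president of the United States?\n[/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```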
# Training script
Fully open-sourced at https://github.com/hengjiUSTC/learn-llm/blob/main/trl_finetune.py. Training ran on an AWS g4dn.12xlarge instance for 10 hours.
```
python3 trl_finetune.py --config configs/yi_6b-large.yml
```
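The real hyperparameters are in configs/yi_6b-large.yml in that repo. Purely as a sketch of the general shape (none of these values are taken from the actual config), an equivalent TRL + PEFT LoRA fine-tune over the 2048-token prompts above might look like this:

```python
# Illustrative only — not the contents of configs/yi_6b-large.yml. Continues
# from the data-preparation sketch above (train_ds / val_ds).
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

def to_text(example):
    # Render each example with the prompt template from the Prompting section.
    ctx = f" {example['context']}" if example.get("context") else ""
    return {
        "text": f"<|startoftext|>[INST]{example['instruction']}{ctx}[/INST]"
                f"{example['response']}<|endoftext|>"
    }

train_ds = train_ds.map(to_text)
val_ds = val_ds.map(to_text)

peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

args = TrainingArguments(
    output_dir="instruct-yi-6b-dolly15k",
    num_train_epochs=2,             # 2.0 epochs, as described above
    per_device_train_batch_size=1,  # assumed value
    gradient_accumulation_steps=16, # assumed value
    learning_rate=2e-4,             # assumed value
    fp16=True,
)

trainer = SFTTrainer(
    model="01-ai/Yi-6B",            # base model
    args=args,
    train_dataset=train_ds,
    eval_dataset=val_ds,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=2048,            # context window used for training
)
trainer.train()
```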