Update README.md
README.md CHANGED
@@ -1,3 +1,29 @@
 ---
 license: apache-2.0
+language:
+- en
+pipeline_tag: text2text-generation
+tags:
+- alpaca
+- llama
+- chat
+- gpt4
 ---
+
+This repository comes with a LoRA checkpoint that turns LLaMA into a chatbot-like language model. The checkpoint is the output of an instruction-following fine-tuning process, run with the following settings on an 8xA100 (40G) DGX system.
+- Training script: borrowed from the official [Alpaca-LoRA](https://github.com/tloen/alpaca-lora) implementation
+- Training command:
+```shell
+python finetune.py \
+    --base_model='decapoda-research/llama-7b-hf' \
+    --num_epochs=10 \
+    --cutoff_len=512 \
+    --group_by_length \
+    --output_dir='./gpt4-alpaca-lora-7b' \
+    --lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \
+    --lora_r=16 \
+    --batch_size=... \
+    --micro_batch_size=...
+```
+
+You can see how the training went in the W&B report [here](https://wandb.ai/chansung18/gpt4_alpaca_lora/runs/nl1xi6ru?workspace=user-chansung18).
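
For reference, the checkpoint produced by this run is a LoRA adapter that gets applied on top of the base model at inference time. Below is a minimal sketch of loading it with `transformers` and `peft`; the adapter id `chansung/gpt4-alpaca-lora-7b` is a hypothetical placeholder inferred from `--output_dir`, not confirmed by this commit.

```python
# Minimal sketch: load the base LLaMA model and apply the LoRA adapter with PEFT.
# Assumptions: the adapter is published at "chansung/gpt4-alpaca-lora-7b"
# (hypothetical id inferred from --output_dir) and the base model matches
# --base_model from the training command above.
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base = LlamaForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")

# Wrap the base model with the fine-tuned LoRA weights.
model = PeftModel.from_pretrained(base, "chansung/gpt4-alpaca-lora-7b")
model.eval()

# Alpaca-style prompt, matching the instruction format used by Alpaca-LoRA.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a LoRA checkpoint is.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```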