---
datasets:
- gozfarb/ShareGPT_Vicuna_unfiltered
---
# Convert tools
https://github.com/practicaldreamer/vicuna_to_alpaca
# Training tool
https://github.com/oobabooga/text-generation-webui
At the moment I'm using the 2023.05.04v0 version of the dataset and training at full context.
# Notes:
Since training 30b at full context takes so long, I will only be training for 1 epoch. That single epoch will take me about 8 days, but luckily these LoRAs already feel fully functional at epoch 1, as shown on my 13b one.
Also, I will be uploading checkpoints almost every day, and I could train another epoch if there's enough demand for it.
# How to test?
1. Download LLaMA-30B-HF: https://huggingface.co/Neko-Institute-of-Science/LLaMA-30B-HF
2. Replace special_tokens_map.json and tokenizer_config.json with the ones from this repo.
3. Rename the LLaMA-30B-HF folder to vicuna-30b.
4. Download the checkpoint-xxxx you want and put it in the loras folder.
5. Load ooba: ```python server.py --listen --model vicuna-30b --load-in-8bit --chat --lora checkpoint-xxxx```
6. Set the instruct mode to Vicuna-v1; ooba will load Vicuna-v0 by default.
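The steps above can be sketched as a shell session. This is a dry-run sketch assuming ooba's default layout (`models/` and `loras/` inside the text-generation-webui folder); the actual model and checkpoint downloads are multi-gigabyte, so they appear only as comments, and the tokenizer files are stood in by placeholders here.

```shell
set -e
# Assumed layout: run from inside the text-generation-webui directory.
mkdir -p models/LLaMA-30B-HF loras

# Step 1 (not executed here; large git-lfs clone):
#   git lfs install
#   git clone https://huggingface.co/Neko-Institute-of-Science/LLaMA-30B-HF models/LLaMA-30B-HF

# Step 2: replace the tokenizer configs with the ones from this repo
# (placeholders here; copy the real files from this repo in practice).
touch models/LLaMA-30B-HF/special_tokens_map.json
touch models/LLaMA-30B-HF/tokenizer_config.json

# Step 3: rename so that --model vicuna-30b resolves to this directory.
mv models/LLaMA-30B-HF models/vicuna-30b

# Step 4: the downloaded checkpoint goes in the loras folder
# (placeholder directory; use the real checkpoint-xxxx you downloaded).
mkdir -p loras/checkpoint-xxxx

# Step 5 (not executed here):
#   python server.py --listen --model vicuna-30b --load-in-8bit --chat --lora checkpoint-xxxx
```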
# Want to see it training?
https://wandb.ai/neko-science/VicUnLocked/runs/vx8yzwi7