First commit of GGML models.

README.md ADDED

---
datasets:
- gozfarb/ShareGPT_Vicuna_unfiltered
---

# VicUnlocked-30B-LoRA GGML

These are GGML format quantised 4-bit, 5-bit and 8-bit models of [Neko Institute of Science's VicUnLocked 30B LoRA](https://huggingface.co/Neko-Institute-of-Science/VicUnLocked-30b-LoRA).

The files in this repo are the result of merging the above LoRA with the original LLaMA 30B, then converting to GGML for CPU (+ CUDA) inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).

## Repositories available

* [4-bit, 5-bit and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/VicUnlocked-30B-LoRA-GGML).
* [4-bit GPTQ model for GPU inference](https://huggingface.co/TheBloke/VicUnlocked-30B-LoRA-GPTQ).
* [float16 HF format model for GPU inference and further conversions](https://huggingface.co/TheBloke/VicUnlocked-30B-LoRA-HF).

## THESE FILES REQUIRE LATEST LLAMA.CPP (May 12th 2023 - commit b9fd7ee)!

llama.cpp recently made a breaking change to its quantisation methods.

I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 12th or later (commit `b9fd7ee` or later) to use them.
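
If you need to update, here is a minimal build sketch using llama.cpp's standard clone-and-`make` workflow (pinning the exact commit is optional; any later commit also works):

```
# Build llama.cpp from source; any commit at or after b9fd7ee works.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout b9fd7ee   # optional: pin the exact commit mentioned above
make
```
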
## Provided files

| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ---- |
| `VicUnlocked-30B-LoRA.ggml.q4_0.bin` | q4_0 | 4-bit | 19GB | 21GB | 4-bit. |
| `VicUnlocked-30B-LoRA.ggml.q4_1.bin` | q4_1 | 4-bit | 23GB | 25GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0, with quicker inference than the q5 models. |
| `VicUnlocked-30B-LoRA.ggml.q5_0.bin` | q5_0 | 5-bit | 21GB | 23GB | 5-bit. Higher accuracy than q4, at the cost of higher resource usage and slower inference. |
| `VicUnlocked-30B-LoRA.ggml.q5_1.bin` | q5_1 | 5-bit | 23GB | 25GB | 5-bit. Even higher accuracy, with higher resource usage and slower inference again. |
| `VicUnlocked-30B-LoRA.ggml.q8_0.bin` | q8_0 | 8-bit | 35GB | 37GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |
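
For example, to fetch just the q5_0 file (a sketch; the URL follows Hugging Face's standard `resolve/main` download path for this repo):

```
# Download a single quantised model file directly from this repo.
wget https://huggingface.co/TheBloke/VicUnlocked-30B-LoRA-GGML/resolve/main/VicUnlocked-30B-LoRA.ggml.q5_0.bin
```
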
## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 8 -m VicUnlocked-30B-LoRA.ggml.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: write a story about llamas ### Response:"
```

Change `-t 8` to the number of physical CPU cores you have.
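
To check your core count (a sketch; note that `nproc` reports logical cores on Linux, so halve it if your CPU has hyper-threading):

```
# Linux: logical core count (halve if hyper-threading is enabled)
nproc
# macOS: physical core count
sysctl -n hw.physicalcpu
```
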
## How to run in `text-generation-webui`

GGML models can be loaded into text-generation-webui by installing the llama.cpp module, then placing the ggml model file in a model folder as usual.

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
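
As a sketch (assuming text-generation-webui's standard `models/` directory layout; the subfolder name is arbitrary):

```
# Place the GGML file in its own folder under text-generation-webui/models/.
mkdir -p text-generation-webui/models/VicUnlocked-30B-LoRA-GGML
mv VicUnlocked-30B-LoRA.ggml.q5_0.bin text-generation-webui/models/VicUnlocked-30B-LoRA-GGML/
```
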
# Original model card

# Convert tools
https://github.com/practicaldreamer/vicuna_to_alpaca

# Training tool
https://github.com/oobabooga/text-generation-webui

ATM I'm using 2023.05.04v0 of the dataset and training at full context.

# Notes:
So I will only be training 1 epoch, as full-context 30B takes so long to train.
This 1 epoch will take me 8 days, lol, but luckily this LoRA feels fully functional at epoch 1, as shown by my 13B one.
Also, I will be uploading checkpoints almost every day. I could train another epoch if there's enough want for it.

Update: Since I will not be training beyond 1 epoch, @Aeala is training for the full 3: https://huggingface.co/Aeala/VicUnlocked-alpaca-half-30b-LoRA. But it's half context, if you care about that. Also, @Aeala's just about done.

Update: Training finished at epoch 1. These 8 days sure felt long. I only have one A6000, lads, there's only so much I can do. Also, RIP gozfarb, IDK what happened to him.

# How to test?
1. Download LLaMA-30B-HF if you have not: https://huggingface.co/Neko-Institute-of-Science/LLaMA-30B-HF
2. Make a folder called VicUnLocked-30b-LoRA in the loras folder (see the sketch after this list).
3. Download adapter_config.json and adapter_model.bin into VicUnLocked-30b-LoRA.
4. Load ooba: ```python server.py --listen --model LLaMA-30B-HF --load-in-8bit --chat --lora VicUnLocked-30b-LoRA```
5. Select instruct mode and choose the Vicuna-v1.1 template.
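
A sketch of steps 2-3 (assuming text-generation-webui's standard `loras/` layout and Hugging Face's `resolve/main` download path):

```
# Create the LoRA folder and fetch the adapter files into it.
mkdir -p loras/VicUnLocked-30b-LoRA
cd loras/VicUnLocked-30b-LoRA
wget https://huggingface.co/Neko-Institute-of-Science/VicUnLocked-30b-LoRA/resolve/main/adapter_config.json
wget https://huggingface.co/Neko-Institute-of-Science/VicUnLocked-30b-LoRA/resolve/main/adapter_model.bin
```
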
# Training Log
https://wandb.ai/neko-science/VicUnLocked/runs/vx8yzwi7

VicUnlocked-30B-LoRA.ggml.q4_0.bin ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:8f8a58e8c12184da13347fb890ee4b5bd7e947b7826a5b6324cd22d62764a22b
size 20333775232

VicUnlocked-30B-LoRA.ggml.q4_1.bin ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:6163a8dd4dbb9f9d233a757e81e5c03f2c2204f3f12bc7d4e7217c9e99f65d03
size 24399792512

VicUnlocked-30B-LoRA.ggml.q5_0.bin ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:3a88af1488ec82e985ba2ffe16c18a9132ed6a3eaffdf9c45632ca8b4c119248
size 22366783872

VicUnlocked-30B-LoRA.ggml.q5_1.bin ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:8df5ff5281e3f1b66f957e00f5ac7862c680cbd1d267c034de9d98bd1c712bee
size 24399792512

VicUnlocked-30B-LoRA.ggml.q8_0.bin ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:6b07b66ef69e56560451f12e05d9917bff47e4b1fe875bcc9aa1f381aa252f54
size 36597844352
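
The `oid sha256` values above are the checksums of the actual model files, so you can verify a download against them. A sketch using coreutils' `sha256sum` (use `shasum -a 256` on macOS), here for the q5_0 file:

```
# Verify the q5_0 download against the sha256 recorded in its LFS pointer.
echo "3a88af1488ec82e985ba2ffe16c18a9132ed6a3eaffdf9c45632ca8b4c119248  VicUnlocked-30B-LoRA.ggml.q5_0.bin" | sha256sum -c -
```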