bartowski committed on
Commit
a892258
1 Parent(s): 3bbc256

Update with VRAM estimates

Files changed (1)
  1. README.md +8 -17
README.md CHANGED
@@ -13,26 +13,17 @@ pipeline_tag: text-generation
 
 Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.13">turboderp's ExLlamaV2 v0.0.13</a> for quantization.
 
-## The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
+<b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>
 
 Each branch contains an individual bits per weight, with the main one containing only the measurement.json for further conversions.
 
-Conversion was done using the default calibration dataset.
-
-Default arguments used except when the bits per weight is above 6.0, at that point the lm_head layer is quantized at 8 bits per weight instead of the default 6.
-
-Original model: https://huggingface.co/Kquant03/Buttercup-4x7B-V2-laser
-
-
-<a href="https://huggingface.co/bartowski/Buttercup-4x7B-V2-laser-exl2/tree/8_0">8.0 bits per weight</a>
-
-<a href="https://huggingface.co/bartowski/Buttercup-4x7B-V2-laser-exl2/tree/6_5">6.5 bits per weight</a>
-
-<a href="https://huggingface.co/bartowski/Buttercup-4x7B-V2-laser-exl2/tree/5_0">5.0 bits per weight</a>
-
-<a href="https://huggingface.co/bartowski/Buttercup-4x7B-V2-laser-exl2/tree/4_25">4.25 bits per weight</a>
-
-<a href="https://huggingface.co/bartowski/Buttercup-4x7B-V2-laser-exl2/tree/3_5">3.5 bits per weight</a>
+| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
+| ------ | ---- | ------------ | --------- | ---------- | ---------- | ----------- |
+| [8_0](https://huggingface.co/bartowski/Buttercup-4x7B-V2-laser-exl2/tree/8_0) | 8.0 | 8.0 | 24.8 GB | 26.3 GB | 28.3 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
+| [6_5](https://huggingface.co/bartowski/Buttercup-4x7B-V2-laser-exl2/tree/6_5) | 6.5 | 8.0 | 20.3 GB | 21.8 GB | 23.8 GB | Near unquantized performance at vastly reduced size, **recommended**. |
+| [5_0](https://huggingface.co/bartowski/Buttercup-4x7B-V2-laser-exl2/tree/5_0) | 5.0 | 6.0 | 15.8 GB | 17.3 GB | 19.3 GB | Slightly lower quality vs 6.5. |
+| [4_25](https://huggingface.co/bartowski/Buttercup-4x7B-V2-laser-exl2/tree/4_25) | 4.25 | 6.0 | 14.0 GB | 15.5 GB | 17.5 GB | GPTQ equivalent bits per weight, slightly higher quality, great for 16 GB cards with 16k context. |
+| [3_5](https://huggingface.co/bartowski/Buttercup-4x7B-V2-laser-exl2/tree/3_5) | 3.5 | 6.0 | 11.3 GB | 12.8 GB | 14.8 GB | Lower quality, not recommended, only suitable for 12 GB cards. |
 
 
 ## Download instructions
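
The VRAM estimates added in this commit can also be used to pick a branch programmatically before downloading. Below is a minimal sketch: the numbers are hardcoded from the README table above, and the `best_branch` helper is hypothetical (not part of the repo or of ExLlamaV2).

```python
# VRAM estimates (GB) from the README table, keyed by branch and context length.
# Branches are listed highest quality first.
VRAM_GB = {
    "8_0":  {4096: 24.8, 16384: 26.3, 32768: 28.3},
    "6_5":  {4096: 20.3, 16384: 21.8, 32768: 23.8},
    "5_0":  {4096: 15.8, 16384: 17.3, 32768: 19.3},
    "4_25": {4096: 14.0, 16384: 15.5, 32768: 17.5},
    "3_5":  {4096: 11.3, 16384: 12.8, 32768: 14.8},
}

def best_branch(vram_gb: float, context: int = 4096):
    """Return the highest-quality branch whose estimate fits in vram_gb, or None."""
    # Dicts preserve insertion order, so the first hit is the highest-bpw fit.
    for branch, needs in VRAM_GB.items():
        if needs[context] <= vram_gb:
            return branch
    return None

print(best_branch(16.0))         # 16 GB card, 4k context  -> 5_0
print(best_branch(16.0, 16384))  # 16 GB card, 16k context -> 4_25
```

Note that the second result matches the table's own note that the 4_25 branch suits 16 GB cards at 16k context.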