Update README.md
README.md
CHANGED
@@ -8,14 +8,15 @@ inference: false
 ## Model description
 **Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**

-This is a 4-bit GPTQ quantization of OPT-13B-Erebus
+This is a 4-bit GPTQ quantization of OPT-13B-Erebus, original model:
+**https://huggingface.co/KoboldAI/OPT-13B-Erebus**

 ### Quantization Information
 Quantized with: https://github.com/0cc4m/GPTQ-for-LLaMa
 ```
-python repos/gptq/opt.py --wbits 4 models/KoboldAI_OPT-13B-Erebus c4 --groupsize 128 --save models/KoboldAI_OPT-13B-Erebus/4bit-128g.pt
+python repos/gptq/opt.py --wbits 4 models/KoboldAI_OPT-13B-Erebus c4 --groupsize 128 --save models/KoboldAI_OPT-13B-Erebus/OPT-13B-Erebus-4bit-128g.pt

-python repos/gptq/opt.py --wbits 4 models/KoboldAI_OPT-13B-Erebus c4 --groupsize 128 --save_safetensors models/KoboldAI_OPT-13B-Erebus/4bit-128g.safetensors
+python repos/gptq/opt.py --wbits 4 models/KoboldAI_OPT-13B-Erebus c4 --groupsize 128 --save_safetensors models/KoboldAI_OPT-13B-Erebus/OPT-13B-Erebus-4bit-128g.safetensors

 ```

 ### License
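
A note on the quantization flags: `--wbits 4` stores each weight in 4 bits, and `--groupsize 128` gives every run of 128 consecutive weights its own scale and zero-point, which recovers much of the accuracy lost to such aggressive rounding. The sketch below is a minimal, hypothetical illustration of that group-wise scheme only — plain round-to-nearest, whereas GPTQ proper additionally applies Hessian-based error compensation — and the helper name `quantize_groupwise` is ours, not part of the repo:

```
import torch

def quantize_groupwise(w, wbits=4, groupsize=128):
    # One (scale, zero-point) pair per `groupsize` consecutive weights.
    rows, cols = w.shape
    g = w.reshape(rows, cols // groupsize, groupsize)
    qmax = 2 ** wbits - 1                                # 15 for 4-bit
    lo = g.min(-1, keepdim=True).values
    hi = g.max(-1, keepdim=True).values
    scale = (hi - lo).clamp(min=1e-8) / qmax             # per-group step size
    zero = torch.round(-lo / scale)                      # per-group zero-point
    q = (torch.round(g / scale) + zero).clamp(0, qmax)   # 4-bit integer codes
    w_hat = (q - zero) * scale                           # what a kernel reconstructs
    return q.reshape(rows, cols), w_hat.reshape(rows, cols)

w = torch.randn(4, 256)
codes, w_hat = quantize_groupwise(w)
print("max abs reconstruction error:", (w - w_hat).abs().max().item())
```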
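
The two commands differ only in the output container: `--save` writes a pickled PyTorch `.pt` checkpoint, while `--save_safetensors` writes the same tensors as `.safetensors`, which avoids pickle's arbitrary-code-execution risk on load. A quick sketch for inspecting the result, assuming the `safetensors` Python package is installed and the output path from the command above:

```
from safetensors.torch import load_file

# Load the quantized checkpoint produced by --save_safetensors above.
state = load_file("models/KoboldAI_OPT-13B-Erebus/OPT-13B-Erebus-4bit-128g.safetensors")

# List stored tensors; quantized layers typically carry packed integer
# weights plus per-group scales/zero-points rather than fp16 weights.
for name, tensor in sorted(state.items()):
    print(name, tuple(tensor.shape), tensor.dtype)
```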