4-bit GPTQ quantization of https://huggingface.co/KoboldAI/OPT-13B-Erebus, created with this fork of GPTQ-for-LLaMa: https://github.com/0cc4m/GPTQ-for-LLaMa
Quantization commands (c4 calibration set, group size 128):

```sh
python repos/gptq/opt.py --wbits 4 models/KoboldAI_OPT-13B-Erebus c4 --groupsize 128 --save models/KoboldAI_OPT-13B-Erebus/OPT-13B-Erebus-4bit-128g.pt
python repos/gptq/opt.py --wbits 4 models/KoboldAI_OPT-13B-Erebus c4 --groupsize 128 --save_safetensors models/KoboldAI_OPT-13B-Erebus/OPT-13B-Erebus-4bit-128g.safetensors
```

The first command saves a PyTorch `.pt` checkpoint; the second saves the same quantization in the safetensors format.
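As a quick sanity check on the `.safetensors` output, you can list its tensor names and shapes with only the Python standard library, since the safetensors format begins with an 8-byte little-endian header length followed by a JSON table of tensors. This is a minimal sketch; the demo file and tensor name below are made up stand-ins, not part of the real checkpoint.

```python
# Sketch: list tensor names and shapes from a .safetensors file using only
# the standard library, by parsing the JSON header the format defines.
import json
import struct

def list_tensors(path):
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))  # 8-byte LE header length
        header = json.loads(f.read(header_len))
    # "__metadata__" holds optional string metadata, not a tensor entry.
    return {name: info["shape"] for name, info in header.items()
            if name != "__metadata__"}

# Build a tiny stand-in file so the sketch runs without the real model
# (the tensor name "decoder.layers.0.qweight" is hypothetical).
demo_header = json.dumps({
    "decoder.layers.0.qweight": {
        "dtype": "I32", "shape": [2, 2], "data_offsets": [0, 16]}
}).encode()
with open("demo.safetensors", "wb") as f:
    f.write(struct.pack("<Q", len(demo_header)))
    f.write(demo_header)
    f.write(b"\x00" * 16)  # zeroed tensor data matching data_offsets

print(list_tensors("demo.safetensors"))
# → {'decoder.layers.0.qweight': [2, 2]}
```

Pointing `list_tensors` at the generated `OPT-13B-Erebus-4bit-128g.safetensors` should confirm the file is readable before loading it into an inference frontend.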