
4-bit (group size 128) GPTQ quantization of https://huggingface.co/KoboldAI/OPT-13B-Erebus

Quantized using this fork of GPTQ-for-LLaMa: https://github.com/0cc4m/GPTQ-for-LLaMa
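The commands below assume the fork is checked out at `repos/gptq` and the original FP16 model has been downloaded to `models/KoboldAI_OPT-13B-Erebus`; adjust the paths to match your setup. The first command writes a PyTorch checkpoint (`.pt`), the second a safetensors file.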

```
python repos/gptq/opt.py --wbits 4 models/KoboldAI_OPT-13B-Erebus c4 --groupsize 128 --save models/KoboldAI_OPT-13B-Erebus/OPT-13B-Erebus-4bit-128g.pt
```

```
python repos/gptq/opt.py --wbits 4 models/KoboldAI_OPT-13B-Erebus c4 --groupsize 128 --save_safetensors models/KoboldAI_OPT-13B-Erebus/OPT-13B-Erebus-4bit-128g.safetensors
```
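As a quick sanity check, the safetensors output can be inspected without the GPTQ code. Below is a minimal sketch (the file path is the one from the `--save_safetensors` command above) that just lists the saved tensors; actually running the model still requires the quantized-inference code from the fork linked above.

```python
# Minimal sketch: inspect the quantized checkpoint produced above.
# Assumes `pip install safetensors` and the path used in the command above.
from safetensors.torch import load_file

state_dict = load_file(
    "models/KoboldAI_OPT-13B-Erebus/OPT-13B-Erebus-4bit-128g.safetensors"
)

# GPTQ stores packed integer weights plus per-group quantization
# parameters, so expect names along the lines of *.qweight, *.scales,
# and *.zeros next to the remaining fp16 tensors (exact names depend
# on the fork version).
for name, tensor in state_dict.items():
    print(f"{name}: shape={tuple(tensor.shape)} dtype={tensor.dtype}")
```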