Here is a 4-bit GPTQ quantized version
#5 opened by chplushsieh
https://huggingface.co/chplushsieh/Meta-Llama-3-8B-Instruct-abliterated-v3-GPTQ-4bit
for anyone who wants to run it with GPTQ on an 8GB VRAM GPU.
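If it helps, here is a minimal sketch of loading it with `transformers` (assuming a GPTQ backend such as auto-gptq/optimum is installed; adjust to your setup):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chplushsieh/Meta-Llama-3-8B-Instruct-abliterated-v3-GPTQ-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# transformers detects the GPTQ quantization config in the checkpoint;
# device_map="auto" places the 4-bit weights on the GPU
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```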