llama_model_quantize: failed to quantize:

#223
by biu12 - opened

When I tried to quantize the fine-tuned model (Llama 3 8B), I ran into this problem.
command:
sudo ./llama-quantize /root/hg_to_gguf.gguf/Llama_Lora_Merge-8.0B-F16.gguf /root/quantize_model q4_0
problem:
llama_model_quantize: failed to quantize: basic_ios::clear: iostream error
main: failed to quantize model from '/root/hg_to_gguf.gguf/Llama_Lora_Merge-8.0B-F16.gguf'
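This `basic_ios::clear: iostream error` comes from C++ stream I/O, so a file-level problem (unreadable input, unwritable or missing output location, or a full disk) is a plausible cause. As a minimal sketch of how one might rule those out before re-running the command (the function name `check_quantize_paths` is hypothetical, not part of llama.cpp):

```shell
#!/bin/sh
# Hypothetical pre-flight check: check_quantize_paths <input.gguf> <output-path>
# Verifies the input model is readable and the output location writable,
# two common file-level causes of C++ iostream errors.
check_quantize_paths() {
  in=$1
  out=$2
  # Input model must exist and be readable.
  [ -r "$in" ] || { echo "input not readable: $in"; return 1; }
  # Directory for the output file must exist and be writable.
  outdir=$(dirname "$out")
  [ -d "$outdir" ] && [ -w "$outdir" ] || { echo "output dir not writable: $outdir"; return 1; }
  echo "paths ok"
}
```

One would also want to confirm there is enough free disk space at the output path (e.g. with `df -h /root`), since a Q4_0 quantization of an 8B model still needs several gigabytes.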
