Rename gptq_model-4bit-128g.safetensors to model.safetensors
#1 by edgelesssys - opened
Tools such as https://github.com/lm-sys/FastChat expect the model weights to follow certain naming schemes (e.g. `model.safetensors`).
Not saying this is necessarily a good idea, but most models on HF seem to follow this scheme.
Could you rename the file so this model works out of the box with these tools?
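
In the meantime, a possible local workaround is to download the repo and rename the weights file yourself before pointing FastChat at the local directory. This is just a sketch; the repo id below is a placeholder, not the actual one.

```python
from huggingface_hub import snapshot_download
import os

# Assumption: "edgelesssys/<model-name>" is a placeholder for the real repo id.
local_dir = snapshot_download("edgelesssys/<model-name>", local_dir="local-model")

# Rename the GPTQ weights file to the name most tooling expects.
src = os.path.join(local_dir, "gptq_model-4bit-128g.safetensors")
dst = os.path.join(local_dir, "model.safetensors")
if os.path.exists(src) and not os.path.exists(dst):
    os.rename(src, dst)
```

After that, loading the model from `local-model` instead of the Hub repo id should work with tools that assume the standard file name.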