
Issue while converting to gguf

#2
by romyull - opened

Hi,
I got the error below while trying to convert the model with llama.cpp. Could you please let me know what the problem is? I cloned this repository (git clone https://huggingface.co/apple/OpenELM-450M).

python3 convert.py ../OpenELM-450M/ --outtype f16
Loading model file ../OpenELM-450M/model.safetensors
Traceback (most recent call last):
  File "/home/ksu/SLM/llama.cpp/convert.py", line 1466, in <module>
    main()
  File "/home/ksu/SLM/llama.cpp/convert.py", line 1413, in main
    params = Params.load(model_plus)
  File "/home/ksu/SLM/llama.cpp/convert.py", line 317, in load
    params = Params.loadHFTransformerJson(model_plus.model, hf_config_path)
  File "/home/ksu/SLM/llama.cpp/convert.py", line 236, in loadHFTransformerJson
    raise Exception("failed to guess 'n_ctx'. This model is unknown or unsupported.\n"
Exception: failed to guess 'n_ctx'. This model is unknown or unsupported.
Suggestion: provide 'config.json' of the model in the same directory containing model files.
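For context on why the script fails even when config.json is present: convert.py tries to infer the context length from a few well-known config.json keys, and raises this exception when none of them is found. The sketch below is a simplified approximation of that lookup (the exact key list depends on your llama.cpp version, and the OpenELM field name "max_context_length" is my reading of its config, not something the traceback confirms):

```python
import json


def guess_n_ctx(config: dict):
    """Approximate the key lookup convert.py uses to find the context length."""
    # Keys a LLaMA-style config.json is expected to carry (assumed list).
    for key in ("max_sequence_length", "max_position_embeddings", "n_ctx"):
        if key in config:
            return config[key]
    # convert.py raises its "failed to guess 'n_ctx'" exception at this point.
    return None


# A LLaMA-style config is recognized:
print(guess_n_ctx({"max_position_embeddings": 2048}))  # -> 2048

# OpenELM's config stores the value under a different, architecture-specific
# name, so the lookup finds nothing and the conversion aborts:
print(guess_n_ctx({"max_context_length": 2048}))  # -> None
```

So the issue is not a missing config.json but an unrecognized architecture: at the time of this thread, convert.py simply did not know the openelm model type, and no flag to the script works around that.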
