Add support for converting GGUF models to MLX
#43
by
Fmuaddib
Currently, any attempt to convert GGUF models raises an error.
For example, selecting the model unsloth/Qwen2.5-Coder-32B-Instruct-128K-GGUF
will print this error message:
Error: No safetensors found in cache/models--unsloth--Qwen2.5-Coder-32B-Instruct-128K-GGUF/snapshots/eef243f6abc9b246fcda059141e8ce73a3e27d1f
Is the GGUF format not supported? Many models are available only in GGUF format, and I need them in MLX format. What can I do?
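For comparison, conversion from a safetensors repo does work for me, which suggests the tool only looks for safetensors weights. A sketch of that working path (the upstream repo name here is illustrative, not the GGUF repo from the error above):

```shell
# mlx_lm converts from Hugging Face safetensors checkpoints, not GGUF,
# so pointing it at the original (non-GGUF) upload works.
pip install mlx-lm
python -m mlx_lm.convert \
    --hf-path Qwen/Qwen2.5-Coder-32B-Instruct \
    --mlx-path ./qwen2.5-coder-32b-mlx \
    -q  # quantize so the converted weights stay manageable
```

But this only helps when a safetensors upload of the same model exists, which is exactly what is missing for many GGUF-only repos.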
Thanks.