Correct README for model filename consistency

Corrected README quantized model filenames and instructions.
README.md
CHANGED
@@ -64,7 +64,7 @@ huggingface-cli download gorilla-llm/gorilla-openfunctions-v2-gguf gorilla-openf
 
 It will store the QUANTIZATION_METHOD GGUF file to your local directory, `gorilla-openfunctions-v2-GGUF`.
 
-We support QUANTIZATION_METHOD = {`q2_K`, `
+We support QUANTIZATION_METHOD = {`q2_K`, `q3_K_S`, `q3_K_M`, `q3_K_L`, `q4_K_S`, `q4_K_M`, `q5_K_S`, `q5_K_M`, `q6_K`}.
 
 Please let us know what other quantization methods you would like us to include!
 
 Please follow the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) for `llama-cpp-python` package installation on your machine.
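The corrected line enumerates the supported QUANTIZATION_METHOD values. As a rough sketch of how a script might validate a method and build the matching GGUF filename (the filename pattern and function name here are assumptions, not part of the repo; check the repository's file list for the actual names):

```python
# Supported quantization methods, copied from the corrected README line.
SUPPORTED_METHODS = {
    "q2_K", "q3_K_S", "q3_K_M", "q3_K_L",
    "q4_K_S", "q4_K_M", "q5_K_S", "q5_K_M", "q6_K",
}

def gguf_filename(method: str) -> str:
    """Build a GGUF filename for a quantization method (pattern assumed)."""
    if method not in SUPPORTED_METHODS:
        raise ValueError(f"unsupported quantization method: {method}")
    return f"gorilla-openfunctions-v2-{method}.gguf"
```

For example, `gguf_filename("q4_K_M")` yields `gorilla-openfunctions-v2-q4_K_M.gguf`, while an unlisted method such as `q8_0` raises a `ValueError` before any download is attempted.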