---
base_model: mesolitica/malaysian-tinyllama-1.1b-16384-instructions
inference: false
model_creator: mesolitica
model_name: malaysian-tinyllama-1.1b-16384-instructions
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# mesolitica/malaysian-tinyllama-1.1b-16384-instructions-GGUF
Quantized GGUF model files for [malaysian-tinyllama-1.1b-16384-instructions](https://huggingface.co/mesolitica/malaysian-tinyllama-1.1b-16384-instructions) by [mesolitica](https://huggingface.co/mesolitica).
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [malaysian-tinyllama-1.1b-16384-instructions.q2_k.gguf](https://huggingface.co/afrideva/malaysian-tinyllama-1.1b-16384-instructions-GGUF/resolve/main/malaysian-tinyllama-1.1b-16384-instructions.q2_k.gguf) | q2_k | 482.14 MB |
| [malaysian-tinyllama-1.1b-16384-instructions.q3_k_m.gguf](https://huggingface.co/afrideva/malaysian-tinyllama-1.1b-16384-instructions-GGUF/resolve/main/malaysian-tinyllama-1.1b-16384-instructions.q3_k_m.gguf) | q3_k_m | 549.85 MB |
| [malaysian-tinyllama-1.1b-16384-instructions.q4_k_m.gguf](https://huggingface.co/afrideva/malaysian-tinyllama-1.1b-16384-instructions-GGUF/resolve/main/malaysian-tinyllama-1.1b-16384-instructions.q4_k_m.gguf) | q4_k_m | 667.81 MB |
| [malaysian-tinyllama-1.1b-16384-instructions.q5_k_m.gguf](https://huggingface.co/afrideva/malaysian-tinyllama-1.1b-16384-instructions-GGUF/resolve/main/malaysian-tinyllama-1.1b-16384-instructions.q5_k_m.gguf) | q5_k_m | 782.04 MB |
| [malaysian-tinyllama-1.1b-16384-instructions.q6_k.gguf](https://huggingface.co/afrideva/malaysian-tinyllama-1.1b-16384-instructions-GGUF/resolve/main/malaysian-tinyllama-1.1b-16384-instructions.q6_k.gguf) | q6_k | 903.41 MB |
| [malaysian-tinyllama-1.1b-16384-instructions.q8_0.gguf](https://huggingface.co/afrideva/malaysian-tinyllama-1.1b-16384-instructions-GGUF/resolve/main/malaysian-tinyllama-1.1b-16384-instructions.q8_0.gguf) | q8_0 | 1.17 GB |
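
## Example usage

A minimal sketch of loading one of the quantized files with `huggingface_hub` and `llama-cpp-python`. The filename, prompt, and context size below are illustrative assumptions, not values from the original card.

```python
# Sketch: download the q4_k_m file from this repo and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Repo ID and filename match the table above; any of the listed files works.
model_path = hf_hub_download(
    repo_id="afrideva/malaysian-tinyllama-1.1b-16384-instructions-GGUF",
    filename="malaysian-tinyllama-1.1b-16384-instructions.q4_k_m.gguf",
)

# n_ctx is an assumption; the base model supports long contexts (16384 tokens).
llm = Llama(model_path=model_path, n_ctx=4096)

output = llm("Apakah ibu negara Malaysia?", max_tokens=64)  # example prompt
print(output["choices"][0]["text"])
```

Smaller quants (q2_k, q3_k_m) trade output quality for lower memory use; q8_0 is closest to the original weights.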
## Original Model Card: