Custom GGUF quants of [ibm-granite/granite-3.1-8b-instruct](https://huggingface.co/ibm-granite/granite-3.1-8b-instruct), in which the output tensors are quantized to Q8_0 or upcast to F32, while the embeddings are kept at F32. Enjoy! 🧠🔥🚀
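
For reference, quants with this layout can be produced with llama.cpp's `llama-quantize` tool, which exposes `--output-tensor-type` and `--token-embedding-type` overrides for the output and token-embedding tensors. The sketch below wraps one such invocation in Python; the file names and the Q4_K_M base type are placeholders for illustration, not the exact settings used for the files in this repo.

```python
# Sketch: building a quant where the output tensor is F32 (or Q8_0) and the
# embeddings stay at F32, while the remaining tensors use a smaller base type.
# Assumes llama-quantize (from llama.cpp) is on PATH and a full-precision
# GGUF conversion of the model already exists. Paths are placeholders.
import subprocess

SOURCE_GGUF = "granite-3.1-8b-instruct-f16.gguf"          # placeholder input
TARGET_GGUF = "granite-3.1-8b-instruct-Q4_K_M-f32out.gguf"  # placeholder output

subprocess.run(
    [
        "llama-quantize",
        "--output-tensor-type", "f32",    # or "q8_0" for the Q8_0 output-tensor variant
        "--token-embedding-type", "f32",  # keep embeddings at F32
        SOURCE_GGUF,
        TARGET_GGUF,
        "Q4_K_M",                         # base quant type for all other tensors (example)
    ],
    check=True,
)
```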