---
base_model: Nitral-AI/Sekhmet_Bet-L3.1-8B-v0.2
language:
- en
license: other
pipeline_tag: text-generation
quantized_by: Reiterate3680
---
# Experimental fixed long context GGUFs

Requires [this release](https://github.com/Nexesenex/kobold.cpp/releases/tag/v1.71013_b3455%2B9) or newer of the KoboldCPP frankenfork.

Unfixed version: https://huggingface.co/Reiterate3680/Sekhmet_Bet-L3.1-8B-v0.2-GGUF-BAD-LONG-CONTEXT

Original model: https://huggingface.co/Nitral-AI/Sekhmet_Bet-L3.1-8B-v0.2

Made with https://huggingface.co/FantasiaFoundry/GGUF-Quantization-Script

The Q2_K_L, Q4_K_L, Q5_K_L, and Q6_K_L quants use Q8_0 output tensors and token embeddings. All quants were made using bartowski's imatrix dataset.
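
Once one of these GGUFs is loaded in a recent enough KoboldCPP build, it can be queried over KoboldCPP's local HTTP API. The snippet below is a minimal sketch, assuming the default port (5001) and the standard `/api/v1/generate` endpoint; the model filename, launch flags, prompt, and generation parameters are illustrative placeholders, not part of this release.

```python
# Minimal sketch: query a KoboldCPP instance serving one of these GGUFs.
# Assumes KoboldCPP is already running on its default port (5001) with a
# quant from this repo loaded, e.g.:
#   koboldcpp --model Sekhmet_Bet-L3.1-8B-v0.2-Q4_K_L.gguf --contextsize 16384
# The filename and context size above are placeholders for illustration.
import json
import urllib.request

payload = {
    "prompt": "Write a short greeting.",  # placeholder prompt
    "max_length": 80,                     # tokens to generate
    "max_context_length": 16384,          # should match the context size used at launch
    "temperature": 0.7,
}

req = urllib.request.Request(
    "http://localhost:5001/api/v1/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read().decode("utf-8"))

# KoboldCPP returns the generated text under results[0]["text"].
print(result["results"][0]["text"])
```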