---
base_model: xxx777xxxASD/L3.1-ClaudeMaid-4x8B
language:
- en
license: other
pipeline_tag: text-generation
quantized_by: Reiterate3680
---

# Experimental fixed long context GGUFs

Requires [this release](https://github.com/Nexesenex/kobold.cpp/releases/tag/v1.71013_b3455%2B9) or newer of the KoboldCPP frankenfork.

Original Model: https://huggingface.co/xxx777xxxASD/L3.1-ClaudeMaid-4x8B

Made with https://huggingface.co/FantasiaFoundry/GGUF-Quantization-Script

The Q2_K_L, Q4_K_L, Q5_K_L, and Q6_K_L quants use Q8_0 output tensors and token embeddings.

Quantized using bartowski's imatrix dataset.
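
For reference, here is a minimal sketch of how a `*_L` quant with Q8_0 output/embedding tensors and an imatrix could be produced with llama.cpp's `llama-quantize` tool. This is an illustration only, not the exact invocation used by the quantization script linked above, and all file names below are placeholders.

```python
# Minimal sketch, assuming a locally built llama.cpp llama-quantize binary.
# The GGUF, imatrix, and output file names are placeholders, not the actual
# artifacts from this repo.
import subprocess

subprocess.run(
    [
        "./llama-quantize",
        "--imatrix", "imatrix.dat",            # importance matrix (e.g. computed from bartowski's dataset)
        "--output-tensor-type", "q8_0",        # keep the output tensor at Q8_0
        "--token-embedding-type", "q8_0",      # keep token embeddings at Q8_0
        "L3.1-ClaudeMaid-4x8B-F16.gguf",       # full-precision source GGUF
        "L3.1-ClaudeMaid-4x8B-Q4_K_L.gguf",    # quantized output
        "Q4_K_M",                              # base quant type for the remaining tensors
    ],
    check=True,
)
```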