---
language:
- en
pipeline_tag: text-generation
tags:
- text-generation-inference
- instruct
- conversational
- roleplay
- sillytavern
- gguf
- anime
- quantized
- mistral
license: cc-by-4.0
---
# **THIS VERSION IS NOW DEPRECATED. USE V3-0.2. V2 HAS PROBLEMS WITH ALIGNMENT AND THE NEW VERSION IS A SUBSTANTIAL IMPROVEMENT!**

This repository hosts deprecated GGUF-IQ-Imatrix quants for [localfultonextractor/Erosumika-7B-v2](https://huggingface.co/localfultonextractor/Erosumika-7B-v2).

*"Better, smarter erosexika!!"*

[Quantized as per user request.](https://huggingface.co/Lewdiculous/Model-Requests/discussions/19)

Quants:

```python
quantization_options = [
    "Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S",
    "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"
]
```

**What does "Imatrix" mean?**

It stands for **Importance Matrix**, a technique used to improve the quality of quantized models. The **Imatrix** is calculated from calibration data, and it helps determine the importance of different model activations during the quantization process. The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance, especially when the calibration data is diverse. [[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)

For imatrix data generation, kalomaze's `groups_merged.txt` with added roleplay chats was used; you can find it [here](https://huggingface.co/Lewdiculous/Datura_7B-GGUF-Imatrix/blob/main/imatrix-with-rp-format-data.txt). The roleplay chats were added simply to give the calibration data a bit more diversity.

**Steps:**

```
Base ⇢ GGUF(F16) ⇢ Imatrix-Data(F16) ⇢ GGUF(Imatrix-Quants)
```

*Using the latest llama.cpp at the time.*
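
For illustration only, below is a minimal sketch of how those steps map onto the llama.cpp command-line tools, driven from Python. The file names and paths are placeholders, and the tool names (`convert.py`, `imatrix`, `quantize`) and their flags vary between llama.cpp versions (newer builds ship them as `llama-imatrix` and `llama-quantize`), so treat this as an outline rather than the exact commands used for this repository.

```python
# Assumption-based sketch of the Base -> GGUF(F16) -> Imatrix-Data(F16) -> GGUF(Imatrix-Quants)
# pipeline, driven via subprocess. Binary names, flags, and paths are placeholders and may
# differ between llama.cpp versions.
import subprocess

base_model_dir = "Erosumika-7B-v2"               # local copy of the original HF model (placeholder path)
f16_gguf = "Erosumika-7B-v2-F16.gguf"            # full-precision GGUF from the conversion step
imatrix_file = "imatrix.dat"                     # importance matrix output
calibration = "imatrix-with-rp-format-data.txt"  # the calibration text linked above

# 1. Base -> GGUF(F16): convert the original model to a full-precision GGUF.
subprocess.run(["python", "convert.py", base_model_dir,
                "--outtype", "f16", "--outfile", f16_gguf], check=True)

# 2. Imatrix-Data(F16): compute the importance matrix from the calibration data.
subprocess.run(["./imatrix", "-m", f16_gguf, "-f", calibration, "-o", imatrix_file],
               check=True)

# 3. GGUF(Imatrix-Quants): produce each listed quant using the importance matrix.
quants = ["Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S",
          "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"]
for quant in quants:
    subprocess.run(["./quantize", "--imatrix", imatrix_file,
                    f16_gguf, f"Erosumika-7B-v2-{quant}-imat.gguf", quant],
                   check=True)
```

# Original model information: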