|
--- |
|
language: |
|
- en |
|
pipeline_tag: text-generation |
|
tags: |
|
- text-generation-inference |
|
- instruct |
|
- conversational |
|
- roleplay |
|
- sillytavern |
|
- gguf |
|
- anime |
|
- quantized |
|
- mistral |
|
license: cc-by-4.0 |
|
--- |
|
|
|
# **THIS VERSION IS NOW DEPRECATED. USE V3-0.2. V2 HAS PROBLEMS WITH ALIGNMENT AND THE NEW VERSION IS A SUBSTANTIAL IMPROVEMENT!**
|
|
|
This repository hosts deprecated GGUF-IQ-Imatrix quants for [localfultonextractor/Erosumika-7B-v2](https://huggingface.co/localfultonextractor/Erosumika-7B-v2). |
|
|
|
*"Better, smarter erosexika!!"* |
|
|
|
[Quantized as per user request.](https://huggingface.co/Lewdiculous/Model-Requests/discussions/19) |
|
|
|
Quants: |
|
```python |
|
quantization_options = [ |
|
"Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S", |
|
"Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS" |
|
] |
|
``` |
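
For a quick local test, any of these files can be loaded with the `llama-cpp-python` bindings. A minimal sketch, with an illustrative file name; substitute whichever quant you actually downloaded:

```python
from llama_cpp import Llama

# Illustrative file name; point this at the quant file you downloaded.
llm = Llama(model_path="Erosumika-7B-v2-Q4_K_M-imat.gguf", n_ctx=4096)

output = llm("Write a one-line greeting in character as a cheerful android.",
             max_tokens=64)
print(output["choices"][0]["text"])
```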
|
|
|
**What does "Imatrix" mean?** |
|
|
|
It stands for **Importance Matrix**, a technique used to improve the quality of quantized models. |
|
The **Imatrix** is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process. |
|
The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance, especially when the calibration data is diverse. |
|
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) |
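
To make the idea concrete, here is a toy numpy sketch of importance-weighted quantization (not llama.cpp's actual implementation): per-weight importance is taken as the mean squared activation over calibration data, and the quantization scale is chosen to minimize the importance-weighted error instead of the plain error.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=512)                           # weights of one tensor row
# Synthetic calibration activations; some columns fire much harder than others.
acts = rng.normal(size=(2048, 512)) * np.linspace(0.1, 2.0, 512)
importance = (acts ** 2).mean(axis=0)              # mean squared activation

def sq_error(weights, scale):
    q = np.clip(np.round(weights / scale), -8, 7)  # 4-bit symmetric grid
    return (q * scale - weights) ** 2

candidates = np.linspace(0.05, 0.5, 50)
plain = min(candidates, key=lambda s: sq_error(w, s).sum())
imat = min(candidates, key=lambda s: (importance * sq_error(w, s)).sum())

# The imatrix-style scale trades accuracy on rarely-used weights for
# accuracy on the heavily-activated ones that drive the model's outputs.
print("weighted error, plain scale:  ", (importance * sq_error(w, plain)).sum())
print("weighted error, imatrix scale:", (importance * sq_error(w, imat)).sum())
```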
|
|
|
For imatrix data generation, kalomaze's `groups_merged.txt` with added roleplay chats was used; you can find it [here](https://huggingface.co/Lewdiculous/Datura_7B-GGUF-Imatrix/blob/main/imatrix-with-rp-format-data.txt). The extra chats were added just to give the calibration data a bit more diversity.
|
|
|
**Steps:** |
|
|
|
``` |
|
Base ⇢ GGUF(F16) ⇢ Imatrix-Data(F16) ⇢ GGUF(Imatrix-Quants)
|
``` |
|
*Using the latest llama.cpp at the time.* |
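
As a rough sketch of those steps in script form, assuming the tool names llama.cpp shipped at the time (`convert.py`, `imatrix`, `quantize`; newer releases renamed them) and illustrative file names:

```python
import subprocess

MODEL_DIR = "Erosumika-7B-v2"                # HF model directory (assumed path)
F16 = "Erosumika-7B-v2-F16.gguf"
CALIB = "imatrix-with-rp-format-data.txt"    # calibration text linked above
IMATRIX = "imatrix.dat"

# Base -> GGUF(F16)
subprocess.run(["python", "convert.py", MODEL_DIR,
                "--outtype", "f16", "--outfile", F16], check=True)

# GGUF(F16) -> Imatrix-Data(F16)
subprocess.run(["./imatrix", "-m", F16, "-f", CALIB, "-o", IMATRIX], check=True)

# GGUF(F16) -> GGUF(Imatrix-Quants), one file per quant level
for quant in ["Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S",
              "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"]:
    out = f"Erosumika-7B-v2-{quant}-imat.gguf"
    subprocess.run(["./quantize", "--imatrix", IMATRIX, F16, out, quant],
                   check=True)
```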
|
|
|
# Original model information: |
|
|
|
<h1 style="text-align: center">Erosumika-7B-v2</h1> |
|
|
|
![image/gif](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/jkrt-bDxaI9Z-V-9fBTbx.gif) |
|
|
|
## Model Details |
|
A DARE TIES merge between Nitral's [Kunocchini-7b](https://huggingface.co/Nitral-AI/Kunocchini-7b-128k-test), Epiculous' [Mika-7B](https://huggingface.co/Epiculous/Mika-7B) and my [FlatErosAlpha](https://huggingface.co/localfultonextractor/FlatErosAlpha), a flattened (to keep the vocab size at 32000) version of tavtav's [eros-7B-ALPHA](https://huggingface.co/tavtav/eros-7B-ALPHA). In my brief testing, v2 is a significant improvement over the original Erosumika; I guess it won the DARE TIES lottery. Alpaca and Mistral formats seem to work best. ChatML might also work, but I expect it to produce never-ending generations. Anything goes!
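
For reference, a minimal Mistral-style prompt builder; the exact template is an assumption here, so check your frontend's Mistral preset:

```python
# Minimal Mistral-instruct style prompt, one of the formats reported to
# work best (Alpaca being the other). The exact template is an assumption.
def mistral_prompt(user_message: str, system: str = "") -> str:
    prefix = f"{system}\n\n" if system else ""
    return f"[INST] {prefix}{user_message} [/INST]"

print(mistral_prompt("Describe the tavern as my character walks in.",
                     system="You are the narrator of a fantasy roleplay."))
```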
|
|
|
As an experimental model, it has some quirks:

- On rare occasions it misspells words

- Very rarely, a random formatting artifact appears at the end of a generation
|
|
|
[GGUF quants](https://huggingface.co/localfultonextractor/Erosumika-7B-v2-GGUF) |
|
|
|
## Limitations and biases |
|
The intended use-case for this model is fictional writing for entertainment purposes. Any other sort of usage is out of scope. |
|
It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading. |
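
Merge configuration (mergekit recipe):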
|
|
|
|
|
```yaml |
|
base_model: localfultonextractor/FlatErosAlpha |
|
models: |
|
- model: localfultonextractor/FlatErosAlpha |
|
- model: Epiculous/Mika-7B |
|
parameters: |
|
density: 0.5 |
|
weight: 0.25 |
|
- model: Nitral-AI/Kunocchini-7b |
|
parameters: |
|
density: 0.5 |
|
weight: 0.75 |
|
merge_method: dare_ties |
|
dtype: bfloat16 |
|
``` |