
GGUF-Imatrix quantizations for SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE.

What does "Imatrix" mean?

It stands for Importance Matrix, a technique used to improve the quality of quantized models.

The Imatrix is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process. The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance.
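
As a loose sketch of the idea (a simplification, not llama.cpp's exact objective): the importance matrix collects average squared activations $a_i^2$ for each weight from a calibration run, and quantization then favors minimizing the activation-weighted error

$$
\sum_i a_i^2 \,(w_i - \tilde{w}_i)^2,
$$

where $w_i$ are the original weights and $\tilde{w}_i$ their quantized values, so the weights that matter most on real inputs are rounded most carefully.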

One of the benefits of using an Imatrix is that it can lead to better model performance, especially when the calibration data is diverse.

More information: [1] [2]

For --imatrix data, imatrix-Loyal-Toppy-Bruins-Maid-7B-DARE-F16.dat was used.

Base ⇢ GGUF(F16) ⇢ Imatrix-Data(F16) ⇢ GGUF(Imatrix-Quants)
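
In llama.cpp terms, that pipeline corresponds roughly to the commands below (a sketch: the calibration file is a placeholder, the source model is assumed to be downloaded locally, and flags may differ slightly between llama.cpp versions):

```bash
# 1. Convert the downloaded HF model to GGUF at F16
python convert.py ./Loyal-Toppy-Bruins-Maid-7B-DARE \
  --outtype f16 --outfile Loyal-Toppy-Bruins-Maid-7B-DARE-F16.gguf

# 2. Compute the importance matrix from calibration text
./imatrix -m Loyal-Toppy-Bruins-Maid-7B-DARE-F16.gguf \
  -f calibration-data.txt \
  -o imatrix-Loyal-Toppy-Bruins-Maid-7B-DARE-F16.dat

# 3. Quantize with the imatrix (IQ3_S shown as an example)
./quantize --imatrix imatrix-Loyal-Toppy-Bruins-Maid-7B-DARE-F16.dat \
  Loyal-Toppy-Bruins-Maid-7B-DARE-F16.gguf \
  Loyal-Toppy-Bruins-Maid-7B-DARE-IQ3_S.gguf IQ3_S
```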

Using llama.cpp-b2280.

The new IQ3_S quant option has been shown to perform better than the old Q3_K_S, so I added it in place of the latter. It is only supported in koboldcpp-1.59.1 or higher.

If you want any specific quantization to be added, feel free to ask.

All credits belong to the creator.

Original model information:


Description

This repository hosts FP16 files for Loyal-Toppy-Bruins-Maid-7B, a 7B model aimed at having engaging RP with solid character card adherence and being a smart cookie at the same time.

Its foundation is Starling-LM-7B-alpha, notable for its performance in the LMSYS Chatbot Arena, even surpassing GPT-3.5-Turbo-1106. The model incorporates rwitz/go-bruins-v2, a Q-bert/MetaMath-Cybertron-Starling derivative with Alpaca RP data tuning.

The other foundational model is chargoddard/loyal-piano-m7, chosen for its strong RP performance and Alpaca format training, with a diverse dataset including PIPPA, rpbuild, and LimaRP.

Undi95/Toppy-M-7B, known for its creativity, brings in useful RP data from various sources. It ranks first among 7B models on OpenRouter for a good reason.

NeverSleep/Noromaid-7b-v0.1.1, a well-regarded Mistral RP finetune, was also added because it brings in unique RP data not present in the other models.

The models were merged using the DARE ties method, with a targeted 1.2 absolute weight and high density (0.5-0.6), as discussed in the MergeKit GitHub Repo.

Currently, this model ranks at the top of my personal RP unit-test benchmark and scored a very solid 20 on lilblam's LLM Logic Test. My first impressions of it for RPing are very good but, admittedly, this model came out of the oven today, so I haven't played with it too much 😊

The sauce

```yaml
models: # Top-Loyal-Bruins-Maid-DARE-7B_v2
  - model: mistralai/Mistral-7B-v0.1
    # no parameters necessary for base model
  - model: rwitz/go-bruins-v2 # MetamathCybertronStarling base
    parameters:
      weight: 0.5
      density: 0.6
  - model: chargoddard/loyal-piano-m7 # Pull in some PIPPA/LimaRP/Orca/rpguild
    parameters:
      weight: 0.5
      density: 0.6
  - model: Undi95/Toppy-M-7B
    parameters:
      weight: 0.1
      density: 0.5
  - model: NeverSleep/Noromaid-7b-v0.1.1
    parameters:
      weight: 0.1
      density: 0.5
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```
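
For reference, a config like this is typically run with mergekit's CLI (a sketch, assuming mergekit is installed and the config is saved as config.yml; the output directory name is a placeholder):

```bash
pip install mergekit
# Merge the models listed in config.yml into ./merged-model (--cuda uses the GPU if available)
mergekit-yaml config.yml ./merged-model --cuda
```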

Prompt template: Custom format, or Alpaca

Custom format:

I found the best SillyTavern results from using the Noromaid template.

SillyTavern config files: Context, Instruct.

Otherwise, I tried to ensure that all of the underlying merged models were Alpaca favored.

Alpaca:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```