---
base_model:
  - grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B
  - KatyTheCutie/LemonadeRP-4.5.3
library_name: transformers
tags:
  - mergekit
  - merge
license: cc-by-nc-4.0
---

# kukulemon-7B-GGUF

This is a Q8_0 GGUF quant of [kukulemon-7B](https://huggingface.co/grimjim/kukulemon-7B).

A merge of two similar Kunoichi models with strong reasoning, hopefully resulting in a "dense" encoding of that reasoning, was in turn merged with a model targeting roleplay.

I've tested with ChatML prompts at temperature=1.1 and minP=0.03. The model itself supports Alpaca format prompts. The model claims a context length of 32K, but it seemed to lose coherence after 8K in my informal testing.
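
For reference, here is a minimal sketch of those sampler settings using llama-cpp-python (an assumed runtime; any GGUF loader works). The model filename is hypothetical, so check this repo's file list:

```python
from llama_cpp import Llama

# Hypothetical filename -- substitute the actual Q8_0 file from this repo.
# n_ctx stays within the ~8K range that remained coherent in my testing.
llm = Llama(model_path="kukulemon-7b.Q8_0.gguf", n_ctx=8192)

# ChatML prompt format, as tested above.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Describe a rainy harbor town in two sentences.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

out = llm(
    prompt,
    max_tokens=256,
    temperature=1.1,  # sampler settings from my informal testing
    min_p=0.03,
    stop=["<|im_end|>"],
)
print(out["choices"][0]["text"])
```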

This is a merge of pre-trained language models created using mergekit.

You can also download GGUF-IQ-Imatrix quants courtesy of Lewdiculous.

There's also an 8.0bpw h8 exl2 quant available.

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.
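
SLERP interpolates along the arc between two weight vectors rather than the straight line between them, which better preserves the geometry of the weights. A minimal illustrative sketch, treating each tensor as a flat vector (an illustration of the idea, not mergekit's exact implementation):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two same-shaped tensors."""
    a_flat = a.flatten().float()
    b_flat = b.flatten().float()
    # Angle between the two weight vectors.
    cos_omega = torch.clamp(
        (a_flat @ b_flat) / (a_flat.norm() * b_flat.norm() + eps), -1.0, 1.0
    )
    omega = torch.arccos(cos_omega)
    sin_omega = torch.sin(omega)
    if sin_omega.abs() < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        mixed = (1.0 - t) * a_flat + t * b_flat
    else:
        mixed = (
            torch.sin((1.0 - t) * omega) / sin_omega * a_flat
            + torch.sin(t * omega) / sin_omega * b_flat
        )
    return mixed.reshape(a.shape).to(a.dtype)
```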

### Models Merged

The following models were included in the merge:

- grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B
- KatyTheCutie/LemonadeRP-4.5.3

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B
        layer_range: [0, 32]
      - model: KatyTheCutie/LemonadeRP-4.5.3
        layer_range: [0, 32]
# or, the equivalent models: syntax:
# models:
merge_method: slerp
base_model: KatyTheCutie/LemonadeRP-4.5.3
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: float16
```
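
As I understand mergekit's gradient handling, each `value` list above is expanded into one interpolation weight per layer by piecewise-linear interpolation, so the self_attn and mlp tensors blend the two parents in mirrored proportions across depth. A small sketch of that expansion (a hypothetical helper reflecting my reading of the config, not mergekit code):

```python
import numpy as np

def expand_schedule(anchors: list[float], num_layers: int = 32) -> np.ndarray:
    """Expand a short anchor list into one t value per layer via
    piecewise-linear interpolation (my reading of mergekit's value lists)."""
    anchor_pos = np.linspace(0.0, 1.0, num=len(anchors))
    layer_pos = np.linspace(0.0, 1.0, num=num_layers)
    return np.interp(layer_pos, anchor_pos, anchors)

print(expand_schedule([0, 0.5, 0.3, 0.7, 1]))  # self_attn schedule
print(expand_schedule([1, 0.5, 0.7, 0.3, 0]))  # mlp schedule (mirror image)
```

To reproduce the merge, the YAML above can be saved to a file and passed to the mergekit CLI, e.g. `mergekit-yaml config.yml ./output-model-directory` (output path hypothetical).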