## What is this?

A simple merge. I'd say it's good enough for RP and ERP — decent overall.

Its eval scores are better than WolfFrame's, but I can't say exactly how good it is.

Overall, a very nice model to try. 😁

GGUF here: https://huggingface.co/mradermacher/MN-12B-Kakigori-GGUF

Imatrix here: https://huggingface.co/mradermacher/MN-12B-Kakigori-i1-GGUF

My own Q6_K: https://huggingface.co/DoppelReflEx/MN-12B-Kakigori-Q6_K-GGUF

## Merge Detail

### Models Merged

The following models were included in the merge:

- cgato/Nemo-12b-Humanize-KTO-Experimental-Latest
- crestf411/MN-Slush

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: cgato/Nemo-12b-Humanize-KTO-Experimental-Latest
  - model: crestf411/MN-Slush
merge_method: slerp
base_model: crestf411/MN-Slush
parameters:
  t: [0, 0.1, 0.2, 0.25, 0.25, 0.2, 0.1, 0]
dtype: bfloat16
tokenizer_source: base
```
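For readers unfamiliar with the `t` parameter: in a SLERP merge, each weight tensor is spherically interpolated between the two models, and the list of `t` values is stretched across the layer stack, so `t=0` keeps the base model (MN-Slush) at the outer layers while the blend peaks at `t=0.25` in the middle layers. The sketch below is illustrative only (it is not mergekit's implementation); the 40-layer count is an assumption based on the Mistral Nemo 12B architecture.

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors.

    t=0 returns a, t=1 returns b; intermediate t follows the great-circle
    arc between the (normalized) directions of a and b.
    """
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Vectors are nearly parallel: fall back to plain linear interpolation.
        return (1 - t) * a + t * b
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b

# The schedule from the YAML above, stretched over an assumed 40 layers:
t_schedule = [0, 0.1, 0.2, 0.25, 0.25, 0.2, 0.1, 0]
num_layers = 40  # hypothetical layer count for illustration
ts = np.interp(np.linspace(0, 1, num_layers),
               np.linspace(0, 1, len(t_schedule)), t_schedule)
# ts[0] and ts[-1] are 0 (pure base model at the ends);
# ts peaks at 0.25 around the middle of the stack.
```

The shape of the schedule matters: keeping the first and last layers at `t=0` preserves the base model's embedding and output behavior, while the mid-stack blend pulls in the Humanize-KTO model's style.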
