
# MS-Meadowlark-22B

Big thanks to @inflatebot for the image.
A roleplay and storywriting model based on Mistral Small 22B.

GGUF models: https://huggingface.co/mradermacher/MS-Meadowlark-22B-GGUF/

EXL2 models: https://huggingface.co/CalamitousFelicitousness/MS-Meadowlark-22B-exl2

Datasets used in this model:

- Rosier/bodyinf
- SpringDragon
- Creative_Writing_Multiturn
- Gutenberg-Doppel (via nbeerbower/Mistral-Small-Gutenberg-Doppel-22B)

Each dataset was trained separately onto Mistral Small Instruct, and then the component models were merged along with nbeerbower/Mistral-Small-Gutenberg-Doppel-22B to create Meadowlark.

I tried different blends of the component models, and this one seems to be the most stable while retaining the creativity and unpredictability added by the training data.

## Instruct Format

Rosier/bodyinf and SpringDragon were trained in completion format. This model should work with Kobold Lite in Adventure Mode and Story Mode.
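Because that data is completion-format, story and adventure prompting works as plain text continuation with no instruct tags. Here is a minimal sketch using llama-cpp-python with one of the GGUF quants linked above; the quant filename is a hypothetical placeholder, not a confirmed artifact name.

```python
# Completion-format sketch: feed raw story text, no [INST] tags.
# Assumes llama-cpp-python and a GGUF quant downloaded from the
# mradermacher repo linked above (filename here is hypothetical).
from llama_cpp import Llama

llm = Llama(model_path="MS-Meadowlark-22B.Q4_K_M.gguf", n_ctx=8192)

story = (
    "The lighthouse had been dark for thirty years.\n"
    "> You push open the rusted door.\n"
)
out = llm(story, max_tokens=200, temperature=0.9, top_p=0.95)
print(out["choices"][0]["text"])
```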

Creative_Writing_Multiturn and Gutenberg-Doppel were trained using the official instruct format of Mistral Small Instruct:

```
<s>[INST] {User message}[/INST] {Assistant response}</s>
```

This is the Mistral Small V2&V3 preset in SillyTavern and Kobold Lite.
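Outside of those frontends, the tokenizer's bundled chat template should produce the same format. A minimal sketch, assuming the transformers tokenizer for this repo ships the standard Mistral Small template (an assumption, not something stated on this card):

```python
# Instruct-format sketch: let the tokenizer's chat template build the
# <s>[INST] ...[/INST] prompt shown above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allura-org/MS-Meadowlark-22B")
messages = [
    {"role": "user", "content": "Write the opening paragraph of a mystery set in a lighthouse."},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # Should resemble: <s>[INST] Write the opening...[/INST]
```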

For SillyTavern in particular, I've had better luck getting good output from Mistral Small by using a custom instruct template that formats the assembled context as a single user turn. This prevents SillyTavern from confusing the model by assembling user/assistant turns in a nonstandard way. Note: this preset is not compatible with Stepped Thinking; use the Mistral V2&V3 preset for that.
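As an illustration of the single-user-turn idea (my own sketch, not the actual SillyTavern template), collapsing the assembled context into one [INST] block might look like this:

```python
# Sketch of the "single user turn" approach: instead of alternating
# [INST] user/assistant turns, pack the whole assembled context
# (system prompt, character card, chat history) into one user turn.
def single_turn_prompt(system: str, history: list[tuple[str, str]], reply_as: str) -> str:
    lines = [system]
    lines += [f"{speaker}: {text}" for speaker, text in history]
    context = "\n".join(lines)
    return f"<s>[INST] {context}\n{reply_as}:[/INST]"

print(single_turn_prompt(
    "You are narrating an adventure.",
    [("User", "I open the door."), ("Narrator", "It creaks inward.")],
    "Narrator",
))
```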

