MS-24B-Instruct-Mullein-v0
V0 note from Severian: This instruct variant is tamer and less unhinged than the base version: it loses some ability to characterize NPCs but has further improved char/scenario portrayal, a tradeoff of sorts. We couldn't actually decide which one to put out, because both are fun and good in their own way.
Let us know what you think, we're looking forward to seeing people test it.
Big Thanks
The folks in the trashpanda and ArliAI discords for testing
(In no particular order)
The Allura folks for their Sugarquill 10k dataset (which I lightly cleaned for stuff like unicode quotes)
fizz for her floyd-instruct, woke-identity, and benchmaxxing (lol) datasets
Gryphe for their Sonnet3.5 RP and 4o WP datasets, which I heavily filtered for slop
kalo for their Opus-22k dataset, which was usable basically OOTB
Norquinal for their OpenCAI dataset
Dampfinchen for their Creative Writing Multiturn dataset
The Recursal folks for their SCP wiki dataset
(we also used some other private datasets of our own)
Reviews
Base is more unhinged but I see more slops. Would be interesting to see if a merge can balance it out in a good way
Instruct gives me more swipes that I like, it's less horny but it can definitely cook during actual smut
I still like instruct more I think, but I appreciate how unhinged base model can be lol
– OMGWTFBBQ
Hard to send with one hand. What did you feed this model?
– Myscell
It spoke to my body and soul.
– Raihanbook
my cock twitched in interest, 10/10 model
– AIELO
Reroll varies the response by a lot. It's giving Starcannon.
– Sam
Tried the base version with my card. It's just a narrative card and the model makes the character portray right, it also mentions my persona detail often.
– Azula
Just us having fun, don't mind it
Big thanks to the folks in the trashpanda-org discord for testing and sending over some logs!
Merge Details
Merge Method
This model was merged using the TIES merge method, with unsloth/Mistral-Small-24B-Instruct-2501 as the base.
Models Merged
The following models were included in the merge:
trashpanda-org/MS-24B-Mullein-v0
Configuration
The following YAML configuration was used to produce this model:
models:
  - model: unsloth/Mistral-Small-24B-Instruct-2501
  - model: trashpanda-org/MS-24B-Mullein-v0
    parameters:
      density: 1
      weight: 1
merge_method: ties
base_model: unsloth/Mistral-Small-24B-Instruct-2501
parameters:
  normalize: true
dtype: bfloat16
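If you want to reproduce the merge yourself, here is a minimal sketch using the mergekit CLI, assuming mergekit is installed and the configuration above is saved as config.yaml (a placeholder filename, not part of this repo):

# install mergekit, then run the merge into a local output directory
pip install mergekit
mergekit-yaml config.yaml ./MS-24B-Instruct-Mullein-v0-merge

The output directory name is just an example; point it wherever you want the merged weights written.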