
These repos are public because I hit the private storage limit, but feel free to try them. This model uses the Mistral V7 prompt format.
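If you want to double-check the exact prompt format, the simplest way is to let the tokenizer render it for you. A minimal sketch, assuming the `transformers` library and the repo id listed on this card:

```python
from transformers import AutoTokenizer

# Repo id as listed on this card; swap in a quant if that's what you downloaded.
tokenizer = AutoTokenizer.from_pretrained("Undi95/MistralThinker-e2")

messages = [
    {"role": "system", "content": "You are MistralThinker, a Large Language Model (LLM) created by Undi."},
    {"role": "user", "content": "Hi!"},
]

# Prints the rendered Mistral V7 prompt, roughly:
# <s>[SYSTEM_PROMPT]...[/SYSTEM_PROMPT][INST]Hi![/INST]
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```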

It was trained on DeepSeek R1 RP logs and character cards, and some funny shit.

Default system prompt: "You are MistralThinker, a Large Language Model (LLM) created by Undi.\nYour knowledge base was last updated on 2023-10-01. Current date: {date}.\n\nWhen unsure, state you don't know."

I recommend putting information about the persona and yourself in the system prompt to let the magic happen.
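For example, a minimal way to build that system prompt. The `{date}` substitution and the persona text below are just illustrations, not part of the training data:

```python
from datetime import date

default_system = (
    "You are MistralThinker, a Large Language Model (LLM) created by Undi.\n"
    "Your knowledge base was last updated on 2023-10-01. "
    f"Current date: {date.today().isoformat()}.\n\n"
    "When unsure, state you don't know."
)

# Illustrative persona block; replace with your character card and user description.
persona = (
    "Character: Mira, a sarcastic starship mechanic who hides a soft spot for rookies.\n"
    "User: Alex, a rookie pilot fresh out of the academy."
)

system_prompt = default_system + "\n\n" + persona
```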

Sadly, I have a problem with the prompt format in the tokenizer_config.json.

I tried to recreate what DeepSeek did with their distills: they add `<think>` at the beginning of each assistant reply and cut the thinking part out of the context.
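If your frontend doesn't already do that, stripping old reasoning from the history is simple. A sketch (not the exact method used in training):

```python
import re

def strip_thinking(text: str) -> str:
    # Drop a <think>...</think> block so old reasoning doesn't pile up in context.
    return re.sub(r"<think>.*?</think>\s*", "", text, flags=re.DOTALL)

history = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "<think>\nShort greeting is fine.\n</think>\nHey! What's up?"},
]
history = [
    {**m, "content": strip_thinking(m["content"])} if m["role"] == "assistant" else m
    for m in history
]
```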

I did the same, but on my side the first `<think>` doesn't appear when using "Chat completion".

Other than that, the model seems fully functional, so feel free to try it, but be sure to prefill `<think>` one way or another.
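One way to do the prefill yourself, as a sketch using the `transformers` text-generation path (if you go through an OpenAI-compatible chat endpoint instead, most frontends let you prefill the assistant reply with `<think>` directly):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Undi95/MistralThinker-e2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": system_prompt},  # e.g. the one built in the sketch above
    {"role": "user", "content": "Hey, how's the engine looking?"},
]

# Render the chat template to text, then append <think> ourselves since the
# template may not emit the first one on its own.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
prompt += "<think>\n"

# add_special_tokens=False: the rendered prompt already starts with <s>.
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
output = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=False))
```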

Model size: 23.6B params (BF16, Safetensors).
