
mistral-small-r1-tensopolis

This model is a reasoning fine-tune of unsloth/mistral-small-24b-instruct-2501-unsloth-bnb-4bit. It was trained on 1x A100 for about 100 hours. Please refer to the base model and dataset for more information about the license, prompt format, etc.

Base model: mistralai/Mistral-Small-24B-Instruct-2501

Dataset: ServiceNow-AI/R1-Distill-SFT
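
For a quick smoke test, the snippet below is a minimal sketch of loading the model with the transformers library; the generation settings and example messages are illustrative and not part of this model card.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tensopolis/mistral-small-r1-tensopolis"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a careful step-by-step reasoner."},  # illustrative
    {"role": "user", "content": "How many primes are there below 30?"},          # illustrative
]
# The tokenizer's chat template applies the V7-Tekken formatting shown below.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))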

Basic Instruct Template (V7-Tekken)

<s>[SYSTEM_PROMPT]<system prompt>[/SYSTEM_PROMPT][INST]<user message>[/INST]<assistant response></s>[INST]<user message>[/INST]
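
For reference, the same single-turn prompt can also be assembled by hand as a plain string. This is only a sketch with illustrative strings; the tokenizer's built-in chat template remains the authoritative implementation.

# Hand-build a single-turn V7-Tekken prompt (illustrative sketch).
system_prompt = "You are a careful step-by-step reasoner."
user_message = "How many primes are there below 30?"
prompt = (
    "<s>"
    f"[SYSTEM_PROMPT]{system_prompt}[/SYSTEM_PROMPT]"
    f"[INST]{user_message}[/INST]"
)
# The model's completion then fills the <assistant response> slot,
# terminated by </s>.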

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Model size: 23.6B params
Tensor type: BF16 (Safetensors)
