---
license: mit
datasets:
- Vikhrmodels/GrandMaster-PRO-MAX
language:
- en
- ru
tags:
- mistral
- chat
- conversational
- transformers
inference:
parameters:
temperature: 0
pipeline_tag: text-generation
base_model:
- mistralai/Mistral-Small-24B-Instruct-2501
library_name: vllm
---
# Zero-Mistral-Small-24B-Instruct-2501
Zero-Mistral-Small is an improved version of mistralai/Mistral-Small-24B-Instruct-2501, primarily adapted for Russian and English. Training consisted of an SFT stage on the GrandMaster-PRO-MAX dataset.
## 📚 Model versions
- Merged 16-bit: the original merged version in 16-bit precision.
- LoRA adapter for mistralai/Mistral-Small-24B-Instruct-2501
- F16 GGUF
- BF16 GGUF
- Q8_0 GGUF
- Q4_K_M GGUF (see the llama.cpp sketch after this list)
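The GGUF files can be run with llama.cpp-compatible runtimes. Below is a minimal sketch using llama-cpp-python; the `repo_id` and GGUF filename pattern are assumptions (the quantized files may live in a separate repository, see the links above), so check them against the actual published files.

```python
# Minimal sketch: running the Q4_K_M GGUF with llama-cpp-python.
# The repo_id and filename pattern are assumptions -- verify against the published GGUF files.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ZeroAgency/Zero-Mistral-Small-24B-Instruct-2501",  # assumed GGUF location
    filename="*Q4_K_M.gguf",   # glob pattern matching the Q4_K_M quant
    n_ctx=8192,
    n_gpu_layers=-1,           # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Привет! Кто ты?"}],
    temperature=0.15,          # low temperature, as recommended below
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```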
## 📊 Benchmarks for main 16-bit merged version
### MERA
MERA score: 0.518
Task | Result | Metric |
---|---|---|
LCS | 0.03 | Accuracy |
RCB | 0.534 / 0.495 | Avg. F1 / Accuracy |
USE | 0.285 | Grade Norm |
RWSD | 0.565 | Accuracy |
PARus | 0.864 | Accuracy |
ruTiE | 0.652 | Accuracy |
MultiQ | 0.414 / 0.289 | F1-score/EM |
CheGeKa | 0.297 / 0.219 | F1 / EM |
ruModAr | 0.708 | EM |
MaMuRAMu | 0.773 | Accuracy |
ruMultiAr | 0.286 | EM |
ruCodeEval | 0.043 / 0.161 / 0.25 | pass@k |
MathLogicQA | 0.476 | Accuracy |
ruWorldTree | 0.962 / 0.962 | Avg. F1 / Accuracy |
ruOpenBookQA | 0.885 / 0.886 | Avg. F1 / Accuracy |
Evaluation on open tasks:
Task | Result | Metric |
---|---|---|
BPS | 0.961 | Accuracy |
ruMMLU | 0.663 | Accuracy |
SimpleAr | 0.982 | EM |
ruHumanEval | 0.09 / 0.276 / 0.39 | pass@k |
ruHHH | 0.601 | Accuracy |
ruHateSpeech | 0.823 | Accuracy |
ruDetox | 0.184 / 0.75 / 0.621 / 0.451 | Overall average score (J) / Meaning preservation (SIM) / Fluency (FL) / Style transfer accuracy (STA) |
ruEthics | [[0.316, 0.373, 0.362, 0.334, 0.295], [0.424, 0.439, 0.457, 0.398, 0.373], [0.54, 0.53, 0.549, 0.488, 0.461]] | 5 MCC |
### Ru Arena General submitted result
ZeroAgency.ru-Zero-Mistral-Small-24B-Instruct-2501
- Score: 87.43
- 95% CI: +1.4 / -1.2
- lower: 86.22
- upper: 88.88
- avg_tokens: 565.19
- std_tokens: 339.27
- lc_score: 55.37
### Arena-Hard-Ru lm_eval
Model | Score | 95% CI | Avg. #Tokens |
---|---|---|---|
gpt-4-1106-preview | 90.9 | (-1.2, 1.3) | 541 |
gpt-4o-mini | 83.9 | (-1.6, 1.4) | 448 |
T-Tech-T-pro-it-1.0 | 83.8 | (-1.6, 1.4) | 502 |
gigachat_max_26.20_uncen | 82.7 | (-1.8, 1.5) | 514 |
gigachat_max_with_censor | 80.0 | (-1.9, 1.7) | 515 |
vikhr-nemo-12b-instruct-r-21-09-24 | 79.8 | (-2.0, 1.4) | 627 |
❗ Zero-Mistral-Small-24B-Instruct-2501 | 77.5 | (-1.9, 2.2) | 565 |
gemma-2-9b-it-sppo-iter3 | 73.6 | (-2.2, 2.0) | 509 |
Mistral-Small-24B-Instruct-2501 | 73.1 | (-2.2, 2.2) | 487 |
T-Tech-T-lite-it-1.0 | 71.0 | (-2.2, 2.2) | 544 |
qwen2.5-14b-instruct | 70.5 | (-2.0, 2.4) | 434 |
gigachat_pro_26.20_uncen | 70.4 | (-2.4, 2.6) | 549 |
gemma-2-9b-it | 69.2 | (-2.3, 1.7) | 459 |
CohereForAI/aya-expanse-8b | 67.1 | (-2.4, 2.1) | 698 |
t-lite-instruct-0.1 | 64.7 | (-2.2, 2.1) | 810 |
vikhr-llama3.1-8b-instruct-r-21-09-24 | 63.4 | (-2.1, 2.2) | 618 |
suzume-llama-3-8B-multilingual-orpo-bor… | 57.1 | (-2.2, 2.0) | 682 |
gigachat_lite_26.20_uncen | 56.4 | (-2.4, 2.3) | 561 |
phi-3-medium-4k-instruct | 55.1 | (-2.4, 2.6) | 566 |
mistral-nemo-instruct-2407 | 50.5 | (-2.1, 2.2) | 403 |
yandex_gpt_pro_v4_26102024 | 50.5 | (-2.3, 2.3) | 384 |
## Training config
```
lora_r = 96
lora_alpha = 96
lora_dropout = 0
learning_rate = 1e-4
lr_scheduler_type = 'cosine'
per_device_train_batch_size = 8
per_device_eval_batch_size = 8
num_train_epochs = 1
weight_decay = 0.01
```
Metrics:
- Training Loss: 0.628300
- Validation Loss: 0.704708
Total time for training and validation on 1xH100: 15:21:10
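The training script itself is not published, so the following is only a sketch of how the hyperparameters above could map onto a peft + trl LoRA SFT setup. The target modules, dataset split and column handling, and bf16 choice are assumptions, not the actual recipe.

```python
# Hypothetical sketch of a LoRA SFT setup matching the listed hyperparameters.
# target_modules, dataset handling and bf16 are assumptions, not the published recipe.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("Vikhrmodels/GrandMaster-PRO-MAX", split="train")  # split is an assumption

peft_config = LoraConfig(
    r=96,
    lora_alpha=96,
    lora_dropout=0.0,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
)

args = SFTConfig(
    output_dir="zero-mistral-sft",
    learning_rate=1e-4,
    lr_scheduler_type="cosine",
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=1,
    weight_decay=0.01,
    bf16=True,  # assumed
)

trainer = SFTTrainer(
    model="mistralai/Mistral-Small-24B-Instruct-2501",
    args=args,
    train_dataset=dataset,
    peft_config=peft_config,
)
trainer.train()
```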
## Usage
The model can be used with the following frameworks:
- `vllm`: see the vLLM section below
- `transformers`: see the Transformers section below
### vLLM
We recommend using this model with the vLLM library to implement production-ready inference pipelines.

Note 1: We recommend using a relatively low temperature, such as `temperature=0.15`.

Note 2: Make sure to add a system prompt to the model to best tailor it to your needs. If you want to use the model as a general assistant, we recommend the following system prompt:
```python
system_prompt = """You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.
Your knowledge base was last updated on 2023-10-01. The current date is 2025-01-30.
When you're not sure about some information, you say that you don't have the information and don't make up anything.
If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. \"What are some good restaurants around me?\" => \"Where are you?\" or \"When is the next flight to Tokyo\" => \"Where do you travel from?\")"""
```
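For instance, once a vLLM server is running (see Server below), the system prompt and the recommended temperature can be passed through the OpenAI-compatible endpoint. This is only a sketch; the server URL is a placeholder.

```python
# Sketch: passing the system prompt and the recommended temperature through
# vLLM's OpenAI-compatible endpoint. The base_url is a placeholder.
from openai import OpenAI

client = OpenAI(base_url="http://<your-server>:8000/v1", api_key="token")

response = client.chat.completions.create(
    model="ZeroAgency/Zero-Mistral-Small-24B-Instruct-2501",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Какая столица Франции?"},
    ],
    temperature=0.15,
)
print(response.choices[0].message.content)
```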
#### Installation
Make sure you install `vLLM >= 0.6.4`:

```
pip install --upgrade vllm
```

Also make sure you have `mistral_common >= 1.5.2` installed:

```
pip install --upgrade mistral_common
```
You can also make use of a ready-to-go Docker image from Docker Hub.
#### Server
We recommend using Zero-Mistral-Small-24B-Instruct-2501 in a server/client setting.
- Spin up a server:

```
vllm serve ZeroAgency/Zero-Mistral-Small-24B-Instruct-2501 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice
```
Note: Running Zero-Mistral-Small-24B-Instruct-2501 on a GPU requires ~55 GB of GPU RAM in bf16 or fp16.
- To query the server, you can use a simple Python snippet:
```python
import requests
import json

url = "http://<your-server>:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}
model = "ZeroAgency/Zero-Mistral-Small-24B-Instruct-2501"

messages = [
    {
        "role": "system",
        "content": "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat."
    },
    {
        "role": "user",
        "content": "Give me 5 non-formal ways to say 'See you later' in French."
    },
]

data = {"model": model, "messages": messages}

response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["content"])
# Sure, here are five non-formal ways to say "See you later" in French:
#
# 1. À plus tard
# 2. À plus
# 3. Salut
# 4. À toute
# 5. Bisous
#
# ```
#  /\_/\
# ( o.o )
#  > ^ <
# ```
```
#### Function calling
Zero-Mistral-Small-24B-Instruct-2501 is excellent at function / tool calling tasks via vLLM. For example:
Example
```python
import requests
import json
from huggingface_hub import hf_hub_download
from datetime import datetime, timedelta

url = "http://<your-url>:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}
model = "ZeroAgency/Zero-Mistral-Small-24B-Instruct-2501"


def load_system_prompt(repo_id: str, filename: str) -> str:
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()
    today = datetime.today().strftime("%Y-%m-%d")
    yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d")
    model_name = repo_id.split("/")[-1]
    return system_prompt.format(name=model_name, today=today, yesterday=yesterday)


SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "The city to find the weather for, e.g. 'San Francisco'",
                    },
                    "state": {
                        "type": "string",
                        "description": "The state abbreviation, e.g. 'CA' for California",
                    },
                    "unit": {
                        "type": "string",
                        "description": "The unit for temperature",
                        "enum": ["celsius", "fahrenheit"],
                    },
                },
                "required": ["city", "state", "unit"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "rewrite",
            "description": "Rewrite a given text for improved clarity",
            "parameters": {
                "type": "object",
                "properties": {
                    "text": {
                        "type": "string",
                        "description": "The input text to rewrite",
                    }
                },
            },
        },
    },
]

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {
        "role": "user",
        "content": "Could you please make the below article more concise?\n\nOpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership.",
    },
    {
        "role": "assistant",
        "content": "",
        "tool_calls": [
            {
                "id": "bbc5b7ede",
                "type": "function",
                "function": {
                    "name": "rewrite",
                    "arguments": '{"text": "OpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership."}',
                },
            }
        ],
    },
    {
        "role": "tool",
        "content": '{"action":"rewrite","outcome":"OpenAI is a FOR-profit company."}',
        "tool_call_id": "bbc5b7ede",
        "name": "rewrite",
    },
    {
        "role": "assistant",
        "content": "---\n\nOpenAI is a FOR-profit company.",
    },
    {
        "role": "user",
        "content": "Can you tell me what the temperature will be in Dallas, in Fahrenheit?",
    },
]

data = {"model": model, "messages": messages, "tools": tools}

response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["tool_calls"])
# [{'id': '8PdihwL6d', 'type': 'function', 'function': {'name': 'get_current_weather', 'arguments': '{"city": "Dallas", "state": "TX", "unit": "fahrenheit"}'}}]
```
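To close the loop, the returned tool call can be dispatched to a local implementation and its result sent back as a `tool` message, mirroring the conversation structure above. This is only a sketch: `get_current_weather` below is a hypothetical stub, not a real weather lookup.

```python
# Sketch: dispatching the returned tool call and sending the result back.
# get_current_weather here is a hypothetical stub, not a real weather API.
def get_current_weather(city: str, state: str, unit: str) -> str:
    return json.dumps({"city": city, "state": state, "temperature": 85, "unit": unit})

tool_call = response.json()["choices"][0]["message"]["tool_calls"][0]
arguments = json.loads(tool_call["function"]["arguments"])
result = get_current_weather(**arguments)

# Append the assistant turn carrying the tool call, then the tool result.
messages.append({"role": "assistant", "content": "", "tool_calls": [tool_call]})
messages.append({
    "role": "tool",
    "content": result,
    "tool_call_id": tool_call["id"],
    "name": tool_call["function"]["name"],
})

data = {"model": model, "messages": messages, "tools": tools}
followup = requests.post(url, headers=headers, data=json.dumps(data))
print(followup.json()["choices"][0]["message"]["content"])
```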
#### Offline
```python
from vllm import LLM
from vllm.sampling_params import SamplingParams

SYSTEM_PROMPT = "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat."

user_prompt = "Give me 5 non-formal ways to say 'See you later' in French."

messages = [
    {
        "role": "system",
        "content": SYSTEM_PROMPT
    },
    {
        "role": "user",
        "content": user_prompt
    },
]

# note that running this model on GPU requires over 60 GB of GPU RAM
llm = LLM(model="ZeroAgency/Zero-Mistral-Small-24B-Instruct-2501", tokenizer_mode="mistral", tensor_parallel_size=8)

sampling_params = SamplingParams(max_tokens=512, temperature=0.15)
outputs = llm.chat(messages, sampling_params=sampling_params)

print(outputs[0].outputs[0].text)
# Sure, here are five non-formal ways to say "See you later" in French:
#
# 1. À plus tard
# 2. À plus
# 3. Salut
# 4. À toute
# 5. Bisous
#
# ```
#  /\_/\
# ( o.o )
#  > ^ <
# ```
```
### Transformers

If you want to use Hugging Face transformers to generate text, you can do something like this:
```python
from transformers import pipeline
import torch

messages = [
    {"role": "user", "content": "Give me 5 non-formal ways to say 'See you later' in French."},
]

chatbot = pipeline("text-generation", model="ZeroAgency/Zero-Mistral-Small-24B-Instruct-2501", max_new_tokens=256, torch_dtype=torch.bfloat16)
chatbot(messages)
```
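If you prefer finer control over generation, a lower-level sketch with AutoModelForCausalLM and the tokenizer's chat template is shown below; the device_map and dtype settings are illustrative assumptions rather than fixed requirements.

```python
# Sketch: lower-level generation with AutoModelForCausalLM and the chat template.
# device_map and dtype settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ZeroAgency/Zero-Mistral-Small-24B-Instruct-2501"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "user", "content": "Give me 5 non-formal ways to say 'See you later' in French."},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, temperature=0.15, do_sample=True)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```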