mlx-community/Mistral-Small-Instruct-2409-bf16

The model mlx-community/Mistral-Small-Instruct-2409-bf16 was converted to MLX format from mistralai/Mistral-Small-Instruct-2409 using mlx-lm version 0.18.1.
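
For reference, a bf16 conversion of this kind can be reproduced with the mlx-lm Python API. This is a rough sketch rather than the exact command used for this repository; the dtype argument in particular is an assumption, so check the convert signature for the mlx-lm version you have installed.

from mlx_lm import convert

# Convert the original Hugging Face weights to MLX format, keeping bf16 precision
# (the dtype keyword is assumed here; flag names can differ between mlx-lm releases)
convert(
    "mistralai/Mistral-Small-Instruct-2409",
    mlx_path="Mistral-Small-Instruct-2409-bf16",
    dtype="bfloat16",
)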

Use with mlx

pip install mlx-lm

from mlx_lm import load, generate

# Load the model weights and tokenizer from the Hugging Face Hub
model, tokenizer = load("mlx-community/Mistral-Small-Instruct-2409-bf16")

# Generate a completion; verbose=True prints the output and timing statistics
response = generate(model, tokenizer, prompt="hello", verbose=True)
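
Since this is an instruct-tuned model, the prompt is normally wrapped in the tokenizer's chat template before generation. A minimal sketch, reusing the model and tokenizer loaded above (behavior with templated prompts may vary slightly across mlx-lm versions):

prompt = "hello"

# Wrap the raw prompt in the model's chat template when one is available
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)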
Model size: 22.2B params · Tensor type: BF16 (safetensors)