parole-study-viper/DeepScaleR-1.5B-Preview-Q8-mlx
The model parole-study-viper/DeepScaleR-1.5B-Preview-Q8-mlx was converted to MLX format from agentica-org/DeepScaleR-1.5B-Preview (itself a fine-tune of deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) using mlx-lm version 0.20.5.
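Conversions like this are typically produced with the `mlx_lm.convert` tool that ships with mlx-lm. The exact command used for this repo is not recorded; the sketch below is an assumed reconstruction, with the 8-bit quantization flags chosen to match the Q8 suffix.

```bash
# Assumed reconstruction of the conversion command; the actual flags
# used for this repo are not recorded. -q --q-bits 8 matches "Q8".
python -m mlx_lm.convert \
    --hf-path agentica-org/DeepScaleR-1.5B-Preview \
    --mlx-path DeepScaleR-1.5B-Preview-Q8-mlx \
    -q --q-bits 8
```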
Use with mlx
```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("parole-study-viper/DeepScaleR-1.5B-Preview-Q8-mlx")

prompt = "hello"

# Wrap the prompt in the model's chat template when the tokenizer
# provides one, so generation sees the conversation format the model
# was trained on.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
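The model can also be tried without writing any Python, using the generate CLI that comes with mlx-lm:

```bash
# One-off generation from the command line with the mlx-lm CLI.
python -m mlx_lm.generate \
    --model parole-study-viper/DeepScaleR-1.5B-Preview-Q8-mlx \
    --prompt "hello"
```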
Citation
```bibtex
@misc{deepscaler2025,
  title={DeepScaleR: Surpassing O1-Preview with a 1.5B Model by Scaling RL},
  author={Michael Luo and Sijun Tan and Justin Wong and Xiaoxiang Shi and William Tang and Manan Roongta and Colin Cai and Jeffrey Luo and Tianjun Zhang and Erran Li and Raluca Ada Popa and Ion Stoica},
  year={2025},
  howpublished={\url{https://pretty-radio-b75.notion.site/DeepScaleR-Surpassing-O1-Preview-with-a-1-5B-Model-by-Scaling-RL-19681902c1468005bed8ca303013a4e2}},
  note={Notion Blog}
}
```