Model Sources

https://huggingface.co/HuggingFaceTB/SmolLM-1.7B-Instruct

Uses

A very small model intended for edge deployment, with fast time-to-first-token (TTFT) and high throughput.

Direct Use

Use llama.cpp to run inference with the model.
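As a minimal sketch, the GGUF file can also be loaded through the llama-cpp-python bindings (which wrap the llama.cpp backend). The filename and generation settings below are assumptions; substitute the actual GGUF file from this repo.

```python
# Minimal sketch using the llama-cpp-python bindings (pip install llama-cpp-python).
# The GGUF filename below is a placeholder; replace it with the file shipped in this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="smollm-1.7b-instruct.f16.gguf",  # hypothetical filename
    n_ctx=2048,    # context window
    n_threads=4,   # adjust for your CPU
)

# SmolLM-1.7B-Instruct is a chat-tuned model, so use the chat completion API.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a GGUF file is in one sentence."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```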

GGUF

Model size: 1.71B params
Architecture: llama
Precision: 16-bit
