---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- SmolLM2-1.7B
---

Quantizations of https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B

### Inference Clients/UIs

* [llama.cpp](https://github.com/ggerganov/llama.cpp)
* [KoboldCPP](https://github.com/LostRuins/koboldcpp)
* [ollama](https://github.com/ollama/ollama)
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [GPT4All](https://github.com/nomic-ai/gpt4all)
* [jan](https://github.com/janhq/jan)
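
All of these clients can load the GGUF files directly. As a minimal sketch, llama.cpp can run a downloaded quantization from the command line (the file name below is only an example; substitute the quantization level you actually downloaded):

```bash
# example: run a GGUF quantization with llama.cpp (the file name is illustrative)
./llama-cli -m SmolLM2-1.7B.Q4_K_M.gguf -p "Gravity is" -n 128
```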

---

# From original readme

SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.

The 1.7B variant demonstrates significant advances over its predecessor SmolLM1-1.7B, particularly in instruction following, knowledge, reasoning, and mathematics. It was trained on 11 trillion tokens using a diverse combination of datasets: FineWeb-Edu, DCLM, and The Stack, along with new mathematics and coding datasets that we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).

The instruct model additionally supports tasks such as text rewriting, summarization, and function calling thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1).

### How to use

```bash
pip install transformers
```

#### Running the model on CPU/GPU/multi GPU
* _Using full precision_
```python
# pip install transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM2-1.7B"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("Gravity is", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
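
The call above uses the model's default generation settings, which typically produce only a short continuation. The usual `generate` arguments can be passed to control output length and sampling; a minimal sketch, reusing the `model`, `tokenizer`, and `inputs` objects from the snippet above (the parameter values are only examples):

```python
# sketch: longer, sampled generation (reuses model/tokenizer/inputs from above)
outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```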

* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
# `checkpoint` and `tokenizer` are the same objects defined in the full-precision example above
# for fp16 use `torch_dtype=torch.float16` instead
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.bfloat16)
inputs = tokenizer.encode("Gravity is", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```

```python
>>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB")
Memory footprint: 3422.76 MB
```