---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- SmolLM2-1.7B
---
Quantizations of https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B

### Inference Clients/UIs
* [llama.cpp](https://github.com/ggerganov/llama.cpp) (see the sample run after this list)
* [KoboldCPP](https://github.com/LostRuins/koboldcpp)
* [ollama](https://github.com/ollama/ollama)
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [GPT4All](https://github.com/nomic-ai/gpt4all)
* [jan](https://github.com/janhq/jan)
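
All of these clients can load the GGUF files directly. For a scripted route, below is a minimal Python sketch using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings for llama.cpp (an extra dependency, not listed above); the quant filename is hypothetical, so substitute whichever file you actually downloaded:

```python
# Minimal sketch, assuming `pip install llama-cpp-python` and a downloaded quant file.
# "SmolLM2-1.7B-Q4_K_M.gguf" is a hypothetical filename; use the one you fetched.
from llama_cpp import Llama

llm = Llama(model_path="SmolLM2-1.7B-Q4_K_M.gguf", n_ctx=2048)
out = llm("Gravity is", max_tokens=64)
print(out["choices"][0]["text"])
```
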
---

# From original readme

SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.

The 1.7B variant demonstrates significant advances over its predecessor SmolLM1-1.7B, particularly in instruction following, knowledge, reasoning, and mathematics. It was trained on 11 trillion tokens using a diverse combination of datasets: FineWeb-Edu, DCLM, The Stack, along with new mathematics and coding datasets that we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).

The instruct model additionally supports tasks such as text rewriting, summarization and function calling thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1).

### How to use

```bash
pip install transformers
```

#### Running the model on CPU/GPU/multi GPU
* _Using full precision_
```python
# pip install transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM2-1.7B"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("Gravity is", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
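
Note that calling `generate` with no arguments uses the default generation settings, which cap the completion at a short length; pass `max_new_tokens=...` for longer output.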

* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
# for fp16 use `torch_dtype=torch.float16` instead
# `checkpoint` and `tokenizer` are reused from the snippet above
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.bfloat16)
inputs = tokenizer.encode("Gravity is", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
```bash
>>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB")
Memory footprint: 3422.76 MB
```
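
The reported footprint lines up with a back-of-the-envelope estimate of roughly 1.7B parameters at 2 bytes each in bfloat16:

```python
# Rough check (assumption: ~1.7B parameters, 2 bytes per bfloat16 parameter)
print(f"{1.7e9 * 2 / 1e6:.0f} MB")  # ~3400 MB, close to the reported 3422.76 MB
```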