Update README.md
README.md
@@ -88,7 +88,7 @@ trl chat --model_name_or_path HuggingFaceTB/SmolLM-135M-Instruct --device cpu
 Additionally, the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data; we invite users to leverage them as assistive tools rather than definitive sources of information. We find that they can handle general knowledge questions, creative writing and basic Python programming. However, they are English-only and may have difficulty with arithmetic, editing tasks and complex reasoning. For more details about the models' capabilities, please refer to our [blog post](https://huggingface.co/blog/smollm).

 ## Training parameters
-We train the models using the [alignment-handbook](https://github.com/huggingface/alignment-handbook) with the datasets mentioned in the changelog, using these parameters for v0.2:
+We train the models using the [alignment-handbook](https://github.com/huggingface/alignment-handbook) with the datasets mentioned in the changelog, using these parameters for v0.2 (most of them are from the Zephyr Gemma recipe):
 - 1 epoch
 - lr 1e-3
 - cosine schedule
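For reference, a minimal sketch of how the three hyperparameters listed above map onto `TrainingArguments` from the `transformers` library, which the alignment-handbook builds on. This is illustrative only: the actual recipes set these values through YAML configs, and the output directory here is a hypothetical placeholder.

```python
# Minimal sketch: the v0.2 hyperparameters above expressed as
# transformers TrainingArguments. "smollm-v0.2-sft" is a placeholder
# output path, not part of the actual alignment-handbook recipe.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smollm-v0.2-sft",  # hypothetical output path
    num_train_epochs=1,            # 1 epoch
    learning_rate=1e-3,            # lr 1e-3
    lr_scheduler_type="cosine",    # cosine schedule
)
```

The same fields (`num_train_epochs`, `learning_rate`, `lr_scheduler_type`) appear in the handbook's recipe YAMLs, so the values can be cross-checked there.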