Update README.md
README.md CHANGED
@@ -28,15 +28,14 @@ extra_gated_description: If you want to learn more about how we process your per
 **GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b3772](https://github.com/ggerganov/llama.cpp/releases/tag/b3772)<br>

 ## Model Summary:
-No summary provided

-
-
-No prompt template provided
+Mistral Small Instruct 2409 is an updated 22B parameter model from the Mistral team.

 ## Technical Details

-
+Vocabulary length of 32768, and a context length of 128k
+
+Supports function calling

 ## Special thanks

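As a rough illustration of how one of these GGUF quantizations might be loaded, here is a minimal sketch using the llama-cpp-python bindings (built against a llama.cpp at least as recent as b3772). The local filename, the chosen context window, and the use of llama-cpp-python are assumptions for the sketch; the card itself does not prescribe a loader.

```python
# Minimal sketch: load a GGUF quant of Mistral Small Instruct 2409 and run one chat turn.
# The filename below is hypothetical -- substitute whichever quant file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Mistral-Small-Instruct-2409-Q4_K_M.gguf",  # hypothetical local file
    n_ctx=8192,       # the model supports up to 128k context; a smaller window saves memory
    n_gpu_layers=-1,  # offload all layers to GPU if a GPU-enabled build is installed
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF quantization does."}]
)
print(out["choices"][0]["message"]["content"])
```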