This model was converted to GGUF format from [`arcee-ai/Virtuoso-Medium-v2`](https://huggingface.co/arcee-ai/Virtuoso-Medium-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/arcee-ai/Virtuoso-Medium-v2) for more details on the model.
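For reference, the conversion that the GGUF-my-repo space automates can also be reproduced locally with llama.cpp's own tooling. The sketch below is illustrative only; the checkpoint path, output filenames, and quantization type are assumptions, not the settings used for this repo.

```bash
# Rough local equivalent of the GGUF-my-repo conversion (illustrative only)
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# Convert the original Hugging Face checkpoint to an f16 GGUF file
# (assumes the arcee-ai/Virtuoso-Medium-v2 weights are already downloaded locally)
python convert_hf_to_gguf.py /path/to/Virtuoso-Medium-v2 \
    --outfile virtuoso-medium-v2-f16.gguf \
    --outtype f16

# Optionally quantize to a smaller type; llama-quantize is built with llama.cpp
# (e.g. build/bin/llama-quantize after a CMake build)
./build/bin/llama-quantize virtuoso-medium-v2-f16.gguf \
    virtuoso-medium-v2-Q4_K_M.gguf Q4_K_M
```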
---

Model details:

Virtuoso-Medium-v2 (32B) is our next-generation, 32-billion-parameter language model that builds upon the original Virtuoso-Medium architecture. This version is distilled from Deepseek-v3, leveraging an expanded dataset of 5B+ tokens worth of logits. It achieves higher benchmark scores than our previous release (including surpassing Arcee-Nova 2024 in certain tasks).

---
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux).
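The commands below sketch the usual way to run a GGUF repo with the brew-installed tools. The repo id and `.gguf` filename are placeholders (the exact quantized filename for this repo isn't listed above), so substitute the real values before running.

```bash
# Install the llama.cpp command-line tools
brew install llama.cpp

# Run the model straight from the Hugging Face Hub
# (repo id and filename are assumed placeholders; replace with the actual ones)
llama-cli --hf-repo Triangle104/Virtuoso-Medium-v2-GGUF \
    --hf-file virtuoso-medium-v2-q4_k_m.gguf \
    -p "Explain knowledge distillation in two sentences."

# Or expose it over an OpenAI-compatible HTTP API
llama-server --hf-repo Triangle104/Virtuoso-Medium-v2-GGUF \
    --hf-file virtuoso-medium-v2-q4_k_m.gguf \
    -c 2048
```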