sometimesanotion committed
Update README.md
README.md CHANGED
@@ -25,7 +25,7 @@ pipeline_tag: text-generation
 
 Qwenvergence is a component of the [Lamarck project](https://huggingface.co/sometimesanotion/Lamarck-14B-v0.7), which iteratively merges a model_stock alongside its previous version as the first step of a complex merge strategy.
 
-Some of the models have pre-applied LoRAs. In this case, a rank 128 adapter from Lamarck 0.7 was used to prevent sharp regressions
+Some of the models have pre-applied LoRAs. In this case, a rank 128 adapter from Lamarck 0.7 was used to prevent sharp regressions in its performance.
 
 I attribute this model's record-breaking MATH score of 44.18%, for a 14B model on the Open LLM Leaderboard, to its combination of Krystalan/DRT-o1-14B and huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated. These are strong models individually, but this is an area of synergy when they are merged.
 
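For context on the two techniques this commit touches, here is a minimal mergekit-style sketch, not the project's actual recipe: a model_stock merge over the cited source models plus the previous Qwenvergence release, with one input carrying a pre-applied LoRA via mergekit's `model+adapter` path syntax. The base model, the previous-version repo, and the adapter repo are assumptions for illustration; only the two cited source models come from the README.

```yaml
# Hypothetical sketch of the pattern described above, NOT the actual
# Qwenvergence configuration. merge_method model_stock and the
# "+adapter" path syntax are standard mergekit features; the base
# model and the rank-128 LoRA repo below are illustrative placeholders.
merge_method: model_stock
base_model: Qwen/Qwen2.5-14B    # assumed base for a 14B Qwen-family merge
models:
  # Source models named in the README:
  - model: Krystalan/DRT-o1-14B
  - model: huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated
  # Previous Qwenvergence version with a rank-128 Lamarck 0.7 LoRA
  # pre-applied (both repo names are placeholders):
  - model: sometimesanotion/Qwenvergence-14B-previous+sometimesanotion/Lamarck-0.7-rank128-LoRA
dtype: bfloat16
```

model_stock derives its interpolation weights from the geometry of the fine-tuned checkpoints relative to the base, which tends to make it a conservative merge, consistent with the README's framing of it as the first step of a larger strategy.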