sometimesanotion committed Update README.md

README.md CHANGED
@@ -27,7 +27,7 @@ Qwenvergence is a component of the [Lamarck project](https://huggingface.co/some
 
 Some of the models have pre-applied LoRAs. In this case, a rank 128 adapter from Lamarck 0.7 was used to prevent sharp regressions to other scores.
 
-I attribute this model's record-breaking MATH score of 44.18
+I attribute this model's record-breaking MATH score of 44.18%, for a 14B model on the Open LLM Leaderboard, to its combination of Krystalan/DRT-o1-14B and huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated. These are strong models individually, but this is an area of synergy when they are merged.
 
 # Merge method
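As context for the pre-applied LoRA mentioned in the README text: a LoRA adapter of rank `r` stores two low-rank matrices `A` (`r × d_in`) and `B` (`d_out × r`), and "pre-applying" it means folding the update `W' = W + (alpha / r) * B @ A` into the base weights before the merge. A minimal stdlib-only sketch of that fold (the matrix sizes, names, and `alpha` value here are illustrative toys, not taken from the model; the actual adapter described above is rank 128):

```python
# Sketch of "pre-applying" a LoRA adapter: fold the low-rank update
# W' = W + (alpha / rank) * B @ A into the base weight matrix.
# Plain Python lists stand in for tensors; dimensions are tiny for clarity.

def matmul(B, A):
    """Multiply an (m x r) matrix by an (r x n) matrix given as nested lists."""
    m, r, n = len(B), len(A), len(A[0])
    return [[sum(B[i][k] * A[k][j] for k in range(r)) for j in range(n)]
            for i in range(m)]

def merge_lora(W, A, B, alpha, rank):
    """Return W + (alpha / rank) * B @ A, i.e. the merged ("pre-applied") weight."""
    delta = matmul(B, A)          # low-rank update, shape d_out x d_in
    scale = alpha / rank          # standard LoRA scaling factor
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy example: 2x2 identity base weight, rank-1 adapter, alpha = 1.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]                  # rank x d_in
B = [[0.5], [0.25]]               # d_out x rank
merged = merge_lora(W, A, B, alpha=1.0, rank=1)
# merged is [[1.5, 1.0], [0.25, 1.5]]
```

Folding the adapter in before merging, rather than carrying it separately, is what lets the rank-128 update act as a stabilizer against regressions on the other benchmark scores.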