sometimesanotion committed on
Commit 5cd5ad6 · verified · 1 parent: 7d83aba

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -27,7 +27,7 @@ Qwenvergence is a component of the [Lamarck project](https://huggingface.co/some
 
 Some of the models have pre-applied LoRAs. In this case, a rank 128 adapter from Lamarck 0.7 was used to prevent sharp regressions to other scores.
 
-I attribute this model's record-breaking MATH score of 44.18% to its combination of Krystalan/DRT-o1-14B and huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated, both strong models individually, but definitely in synergy in this merge.
+I attribute this model's MATH score of 44.18%, record-breaking for a 14B model on the Open LLM Leaderboard, to its combination of Krystalan/DRT-o1-14B and huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated. These are strong models individually, but merging them produces clear synergy in this area.
 
 # Merge method
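The "pre-applied LoRA" step described in the diff folds a low-rank adapter update into the base weights once, before any model merging, so the merge operates on a single dense weight matrix. A minimal numpy sketch of that fold, using the rank 128 from the README but otherwise illustrative dimensions and scaling (not Lamarck's actual values):

```python
import numpy as np

rank, d_in, d_out = 128, 512, 512   # rank 128 as in the README; dims are illustrative
alpha = 128.0                       # LoRA scaling numerator (assumed, not from the source)

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))        # frozen base weight
A = rng.standard_normal((rank, d_in)) * 0.01  # LoRA down-projection
B = np.zeros((d_out, rank))                   # LoRA up-projection (zero-initialized)

# Pre-applying the adapter: bake the low-rank delta (alpha/r) * B @ A into W,
# so any downstream merge sees one dense matrix instead of base + adapter.
W_merged = W + (alpha / rank) * (B @ A)

assert W_merged.shape == W.shape
print(np.allclose(W_merged, W))  # True here: B is zero-initialized, so the delta is zero
```

With a trained adapter, B would be nonzero and `W_merged` would differ from `W`; the point is that the adapter's effect is committed to the weights once, which is what keeps other benchmark scores from regressing sharply during subsequent merges.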