Update README.md
README.md
@@ -20,7 +20,7 @@ metrics:
 
 ---
 
-> [!TIP] This version of the model has [broken the 41.0 average](https://shorturl.at/
+> [!TIP] This version of the model has [broken the 41.0 average](https://shorturl.at/jUqEk) maximum for 14B parameter models, and as of this writing, ranks #8 among models under 70B parameters on the Open LLM Leaderboard. Given the respectable performance in the 32B range, I think Lamarck deserves his shades. A little layer analysis in the 14B range goes a long, long way.
 
 Lamarck 14B v0.7: A generalist merge focused on multi-step reasoning, prose, and multi-language ability. It is based on components that have punched above their weight in the 14 billion parameter class. It uses a custom toolchain to create and apply multiple sequences of complex merges:
 
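The custom toolchain behind these merge sequences is not shown in this commit. As a rough illustration only, the sketch below shows what a single step in a sequence of merges can look like using mergekit, a common open-source merging tool; the merge method, model names, and interpolation weight are placeholder assumptions, not Lamarck's actual recipe.

```python
# Hypothetical illustration only: Lamarck's toolchain is custom and not shown
# here. This sketches one step of a merge sequence with mergekit
# (https://github.com/arcee-ai/mergekit). Model names, the merge method, and
# the interpolation weight are placeholders, not the actual recipe.
import subprocess
import textwrap

config = textwrap.dedent("""\
    merge_method: slerp
    base_model: Qwen/Qwen2.5-14B-Instruct        # placeholder base model
    models:
      - model: Qwen/Qwen2.5-14B-Instruct
      - model: example-org/strong-14b-finetune   # placeholder component
    parameters:
      t: 0.5        # spherical interpolation weight between the two models
    dtype: bfloat16
""")

with open("merge-step1.yml", "w") as f:
    f.write(config)

# mergekit-yaml is mergekit's CLI entry point: config in, merged model out.
# A sequence of merges chains steps like this one, using each step's output
# directory as an input model in the next config.
subprocess.run(["mergekit-yaml", "merge-step1.yml", "merged-step1"], check=True)
```

Chaining several such configs, each consuming the previous step's output, is one way a "sequence of complex merges" can be assembled; the specific methods and weights at each step are what a toolchain like the one described would select.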