---
base_model:
- sometimesanotion/Lamarck-14B-v0.7
- sometimesanotion/Qwenvergence-14B-v12-Prose-DS
- jpacifico/Chocolatine-2-14B-Instruct-v2.0.3
- suayptalha/Lamarckvergence-14B
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
language:
- en
---
# EXPERIMENTAL:
So what's this new arcee_fusion merge method, and what can we do with it? This model aims to find out: it is a multi-stage merge in which three of the four steps are fusions (see the sketch after this list):
* A fusion of [Lamarck-14B-v0.7](http://huggingface.co/sometimesanotion/Lamarck-14B-v0.7) and @suayptalha's [Lamarckvergence SLERP merge](http://huggingface.co/suayptalha/Lamarckvergence-14B) of Lamarck-14B-v0.7 and [Qwenvergence-14B-v12-Prose-DS](http://huggingface.co/sometimesanotion/Qwenvergence-14B-v12-Prose-DS).
* A SLERP of Lamarck-14B-v0.7-Fusionvergence with Qwenvergence-14B-v12-Prose-DS, the latter emphasized in later layers.
* A fusion of @jpacifico's [Chocolatine-2-14B-Instruct-v2.0.3](http://huggingface.co/jpacifico/Chocolatine-2-14B-Instruct-v2.0.3), itself a finetune of Arcee's [Virtuoso-Small-v2](https://huggingface.co/arcee-ai/Virtuoso-Small-v2) with Lamarck-14B-v0.7 and Qwenvergence-14B-v12-Prose-DS, fusion-merged with - you guessed it - Qwenvergence-14B-v12-Prose-DS.
* A fusion of the previous two.
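
To make the recipe's shape concrete, here is a minimal mergekit sketch of the first two stages. The model names are the ones listed above, but the layer ranges, `t` schedule, dtype, and local output paths are assumptions for illustration, not the actual configuration used for this model.

```yaml
# Stage 1 (illustrative sketch): an arcee_fusion merge of Lamarck-14B-v0.7 with
# suayptalha's Lamarckvergence-14B. arcee_fusion pairs a base_model with exactly
# one other model; the dtype here is an assumption.
merge_method: arcee_fusion
base_model: sometimesanotion/Lamarck-14B-v0.7
models:
  - model: suayptalha/Lamarckvergence-14B
dtype: bfloat16
```

```yaml
# Stage 2 (illustrative sketch): a SLERP of the stage-1 output with
# Qwenvergence-14B-v12-Prose-DS, using a rising t schedule so the Prose-DS side
# is emphasized in later layers. The local path, 48-layer range, and t values
# are assumptions, not the actual recipe.
merge_method: slerp
base_model: ./lamarck-14b-v0.7-fusionvergence
slices:
  - sources:
      - model: ./lamarck-14b-v0.7-fusionvergence
        layer_range: [0, 48]
      - model: sometimesanotion/Qwenvergence-14B-v12-Prose-DS
        layer_range: [0, 48]
parameters:
  t:
    - value: [0.1, 0.3, 0.5, 0.7, 0.9]   # interpolated across layers, rising toward Prose-DS
dtype: bfloat16
```

Each stage would be run with `mergekit-yaml config.yaml ./output-model`, and the remaining fusion steps follow the same pattern with the intermediate outputs as inputs.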
I've seen strong prose from this model, which is natural considering its re-emphasis of Qwenvergence-14B-v12-Prose-DS. A full evaluation will be queued shortly.
This is actually a lot simpler than a mainline Lamarck release, and where it fits into efforts towards a Lamarck v0.8 depends greatly on evaluation and feedback.