sometimesanotion committed on
Commit aef1556 · verified · 1 Parent(s): 34d172f

Update README.md

Files changed (1)
  1. README.md +0 -2
README.md CHANGED
@@ -12,8 +12,6 @@ tags:
 
 The merits of multi-stage arcee_fusion merges are clearly shown in [sometimesanotion/Lamarck-14B-v0.7-Fusion](https://huggingface.co/sometimesanotion/Lamarck-14B-v0.7-Fusion), which has a valuable uptick in GPQA over its predecessors. Will its gains be maintained with a modified version of the SLERP recipe from [suayptalha/Lamarckvergence-14B](https://huggingface.co/suayptalha/Lamarckvergence-14B)? Let's find out what these weights for self-attention and perceptrons can unlock in this merge.
 
-Why isn't this the next version of Lamarck? It has not undergone the highly layer-targeting merges that go into a Lamarck release, and to truly refine Lamarck v0.7 requires top-notch components. This one, perhaps.
-
 ## Merge Details
 ### Merge Method
 
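For context on the "weights for self-attention and perceptrons" mentioned in the card text above: mergekit SLERP recipes typically express this as interpolation curves filtered by module type. The sketch below is only an illustration of that pattern, not the actual recipe used for this merge; the model names come from the links in the card, while the layer ranges and `t` values are placeholder assumptions.

```yaml
# Illustrative mergekit SLERP config (assumed values, not this model's recipe).
# The "t" parameter carries separate interpolation curves for self-attention
# and MLP ("perceptron") blocks via filters.
slices:
  - sources:
      - model: sometimesanotion/Lamarck-14B-v0.7-Fusion
        layer_range: [0, 48]   # placeholder range
      - model: suayptalha/Lamarckvergence-14B
        layer_range: [0, 48]   # placeholder range
merge_method: slerp
base_model: sometimesanotion/Lamarck-14B-v0.7-Fusion
parameters:
  t:
    - filter: self_attn
      value: [0.0, 0.5, 0.3, 0.7, 1.0]  # placeholder curve for attention weights
    - filter: mlp
      value: [1.0, 0.5, 0.7, 0.3, 0.0]  # placeholder curve for MLP weights
    - value: 0.5                        # default for all other tensors
dtype: bfloat16
```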