Tarek07 committed (verified)
Commit 130b1ad · 1 Parent(s): 2a648d7

Update README.md

Files changed (1): README.md (+1 -0)
README.md CHANGED
@@ -12,6 +12,7 @@ tags:
 - merge
 license: llama3.3
 ---
+This model is part of a series of experiments in merging some of my favorite Llama models, an idea based on the excellent Steelskull/L3.3-MS-Nevoria-70b merge, just with a couple of extra ingredients and different merge methods. Here I tried a Della Linear merge with conservative parameters. Against my better judgement, I thought the newer Sao10K/L3.3-70B-Euryale-v2.3 would work better in the mix than Sao10K/L3.1-70B-Hanami-x1 (which has a very special place in my heart). The results were decent, though Tarek07/Progenitor-V1.1-LLaMa-70B still comes out on top (imo).
 # merge
 
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
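For readers curious what a Della Linear merge looks like in practice, mergekit drives merges from a YAML config consumed by its `mergekit-yaml` CLI. The sketch below is a hypothetical illustration only: the `della_linear` method and the Sao10K/L3.3-70B-Euryale-v2.3 ingredient come from the model card above, while the second ingredient, the base model, and every parameter value are assumptions standing in for the undisclosed recipe.

```yaml
# Hypothetical mergekit config sketching a conservative della_linear merge.
# Only the merge method and the Euryale ingredient come from the model card;
# the base model, the placeholder second model, and all parameter values
# below are illustrative assumptions.
models:
  - model: Sao10K/L3.3-70B-Euryale-v2.3
    parameters:
      weight: 0.20     # modest per-model contribution (assumed)
      density: 0.5     # fraction of delta parameters retained (assumed)
  - model: some-org/another-favorite-llama-70b   # placeholder ingredient
    parameters:
      weight: 0.20
      density: 0.5
merge_method: della_linear
base_model: meta-llama/Llama-3.3-70B-Instruct    # assumed base
parameters:
  epsilon: 0.05   # width of DELLA's drop-probability window (assumed)
  lambda: 1.0     # scaling applied to the merged deltas (assumed)
dtype: bfloat16
```

A config like this would be run with `mergekit-yaml config.yaml ./output-model`. "Conservative parameters" in this sketch means modest per-model weights and a small epsilon, keeping the merged weights close to the base model.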