sometimesanotion committed
Commit 60fdf15 · verified · 1 parent: 724da95

Update README.md

Files changed (1): README.md (+6 −1)
@@ -1,6 +1,11 @@
 ---
 base_model:
 - sometimesanotion/Lamarck-14B-v0.6
+- deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
+- sometimesanotion/Lamarck-14B-v0.3
+- sometimesanotion/Qwenvergence-14B-v9
+- sometimesanotion/Qwenvergence-14B-v3-Prose
+- arcee-ai/Virtuoso-Small
 library_name: transformers
 tags:
 - mergekit
@@ -12,7 +17,7 @@ pipeline_tag: text-generation
 ---
 # output
 
-This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+This is an experimental merge which pushes the merge techniques behind [sometimesanotion/Lamarck-14B-v0.6](https://huggingface.co/sometimesanotion/Lamarck-14B-v0.6) further, and adds a merge of DeepSeek's R1 distillation to its mid to upper layers. How this will interact with the reasoning-heavy Qwenvergence models is unknown.
 
 ## Merge Details
 ### Merge Method
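The new description mentions blending DeepSeek's R1 distillation into the mid-to-upper layers. As a purely illustrative sketch of how layer-targeted blending is expressed in mergekit — the merge method, layer ranges, and weights below are hypothetical, not this model's actual recipe (that lives in the Merge Method section) — a two-model SLERP config might look like:

```yaml
# Hypothetical sketch only: method and parameters are illustrative,
# NOT the real recipe behind this commit.
merge_method: slerp
base_model: sometimesanotion/Lamarck-14B-v0.6
slices:
  - sources:
      - model: sometimesanotion/Lamarck-14B-v0.6
        layer_range: [0, 48]
      - model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
        layer_range: [0, 48]
parameters:
  t:
    # Interpolation weight per layer band: values closer to 1 pull toward
    # the second model, so this ramp weights DeepSeek more in upper layers.
    - value: [0.0, 0.1, 0.4, 0.6, 0.6]
dtype: bfloat16
```

In mergekit's SLERP mode, `t` controls the interpolation between the two sources; a gradient like the one above is one way to keep lower layers close to the base model while admitting more of the other model higher in the stack.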