FallenMerick committed on
Commit
bed1691
1 Parent(s): 3e7996d

Update README.md

Files changed (1): README.md (+90 −88)
---
license: cc-by-4.0
language:
- en
base_model:
- mistralai/Mistral-7B-v0.1
- SanjiWatsuki/Kunoichi-7B
- SanjiWatsuki/Silicon-Maid-7B
- KatyTheCutie/LemonadeRP-4.5.3
- Sao10K/Fimbulvetr-11B-v2
library_name: transformers
tags:
- mergekit
- merge
- mistral
- text-generation
- roleplay

---

![cute](https://huggingface.co/FallenMerick/Chunky-Lemon-Cookie-11B/resolve/main/Chunky-Lemon-Cookie.png)

# Chunky-Lemon-Cookie-11B

GGUF quants: https://huggingface.co/backyardai/Chunky-Lemon-Cookie-11B-GGUF

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the following methods:
* passthrough
* [task arithmetic](https://arxiv.org/abs/2212.04089)
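For intuition, task arithmetic combines each model's delta from a shared base, scaled by a per-model weight: `merged = base + Σ wᵢ · (modelᵢ − base)`. The following is a toy sketch of that formula with made-up two-element "weight vectors" — it is not mergekit's actual implementation:

```python
# Toy sketch of task arithmetic (NOT mergekit's real code): each fine-tuned
# model contributes its delta from the base, scaled by its merge weight.
def task_arithmetic(base, models, weights):
    return [
        b + sum(w * (m[i] - b) for m, w in zip(models, weights))
        for i, b in enumerate(base)
    ]

# Toy 2-parameter "models"; the real merge does this over billions of weights.
base = [1.0, 2.0]          # stands in for the base model
big_lemon = [1.5, 2.5]     # stands in for Big-Lemon-Cookie-11B (weight 0.85)
fimbulvetr = [0.5, 3.0]    # stands in for Fimbulvetr-11B-v2 (weight 0.15)
merged = task_arithmetic(base, [big_lemon, fimbulvetr], [0.85, 0.15])
# merged == [1.35, 2.575]
```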
35
+
36
+ ### Models Merged
37
+
38
+ The following models were included in the merge:
39
+ * [SanjiWatsuki/Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B)
40
+ * [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B)
41
+ * [KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3)
42
+ * [Sao10K/Fimbulvetr-11B-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2)
43
+ * [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
44
+
45
+ ### Configuration
46
+
47
+ The following YAML configurations were used to produce this model:
48
+
49
+ ```yaml
50
+ slices:
51
+ - sources:
52
+ - model: mistralai/Mistral-7B-v0.1
53
+ layer_range: [0, 24]
54
+ - sources:
55
+ - model: mistralai/Mistral-7B-v0.1
56
+ layer_range: [8, 32]
57
+ merge_method: passthrough
58
+ dtype: float16
59
+ name: Mistral-11B
60
+
61
+ ---
62
+
63
+ slices:
64
+ - sources:
65
+ - model: SanjiWatsuki/Kunoichi-7B
66
+ layer_range: [0, 24]
67
+ - sources:
68
+ - model: SanjiWatsuki/Silicon-Maid-7B
69
+ layer_range: [8, 24]
70
+ - sources:
71
+ - model: KatyTheCutie/LemonadeRP-4.5.3
72
+ layer_range: [24, 32]
73
+ merge_method: passthrough
74
+ dtype: float16
75
+ name: Big-Lemon-Cookie-11B
76
+
77
+ ---
78
+
79
+ models:
80
+ - model: Big-Lemon-Cookie-11B
81
+ parameters:
82
+ weight: 0.85
83
+ - model: Sao10K/Fimbulvetr-11B-v2
84
+ parameters:
85
+ weight: 0.15
86
+ merge_method: task_arithmetic
87
+ base_model: Mistral-11B
88
+ dtype: float16
89
+ name: Chunky-Lemon-Cookie-11B
90
+ ```
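
As a rough illustration of the passthrough step (an assumption about its behavior, not mergekit's internals): the first config stacks layers 0–23 and 8–31 of a 32-layer Mistral-7B, duplicating layers 8–23, which is how two 7B layer ranges yield a 48-layer, roughly 11B-parameter model:

```python
# Sketch (assumption, not mergekit internals): passthrough concatenates the
# configured layer_range slices in order, duplicating any overlapping layers.
slices = [(0, 24), (8, 32)]  # layer_range entries from the Mistral-11B config
stacked = [layer for start, end in slices for layer in range(start, end)]

print(len(stacked))    # 48 layers, up from Mistral-7B's 32
print(stacked[22:27])  # [22, 23, 8, 9, 10] -- the seam where the slices meet
```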