nbeerbower committed
Commit 8906355
1 Parent(s): 207b7d1

Update README.md

Files changed (1)
  1. README.md +62 -60
README.md CHANGED
@@ -1,60 +1,62 @@
- ---
- base_model:
- - flammenai/flammen29-mistral-7B
- - Azazelle/Argetsu
- - flammenai/flammen30-mistral-7B
- - allknowingroger/Neurallaymons-7B-slerp
- - flammenai/flammen27-mistral-7B
- - Azazelle/Tippy-Toppy-7b
- - cognitivecomputations/samantha-mistral-7b
- - flammenai/flammen23X-mistral-7B
- - Weyaxi/Seraph-openchat-3.5-1210-Slerp
- - Ppoyaa/StarMonarch-7B
- library_name: transformers
- tags:
- - mergekit
- - merge
-
- ---
- # flammen31-mistral-7B
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [flammenai/flammen27-mistral-7B](https://huggingface.co/flammenai/flammen27-mistral-7B) as the base.
-
- ### Models Merged
-
- The following models were included in the merge:
- * [flammenai/flammen29-mistral-7B](https://huggingface.co/flammenai/flammen29-mistral-7B)
- * [Azazelle/Argetsu](https://huggingface.co/Azazelle/Argetsu)
- * [flammenai/flammen30-mistral-7B](https://huggingface.co/flammenai/flammen30-mistral-7B)
- * [allknowingroger/Neurallaymons-7B-slerp](https://huggingface.co/allknowingroger/Neurallaymons-7B-slerp)
- * [Azazelle/Tippy-Toppy-7b](https://huggingface.co/Azazelle/Tippy-Toppy-7b)
- * [cognitivecomputations/samantha-mistral-7b](https://huggingface.co/cognitivecomputations/samantha-mistral-7b)
- * [flammenai/flammen23X-mistral-7B](https://huggingface.co/flammenai/flammen23X-mistral-7B)
- * [Weyaxi/Seraph-openchat-3.5-1210-Slerp](https://huggingface.co/Weyaxi/Seraph-openchat-3.5-1210-Slerp)
- * [Ppoyaa/StarMonarch-7B](https://huggingface.co/Ppoyaa/StarMonarch-7B)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- models:
-   - model: flammenai/flammen29-mistral-7B
-   - model: flammenai/flammen30-mistral-7B
-   - model: flammenai/flammen23X-mistral-7B
-   - model: allknowingroger/Neurallaymons-7B-slerp
-   - model: Azazelle/Argetsu
-   - model: Weyaxi/Seraph-openchat-3.5-1210-Slerp
-   - model: Azazelle/Tippy-Toppy-7b
-   - model: Ppoyaa/StarMonarch-7B
-   - model: cognitivecomputations/samantha-mistral-7b
- merge_method: model_stock
- base_model: flammenai/flammen27-mistral-7B
- dtype: bfloat16
-
- ```
+ ---
+ base_model:
+ - flammenai/flammen29-mistral-7B
+ - Azazelle/Argetsu
+ - flammenai/flammen30-mistral-7B
+ - allknowingroger/Neurallaymons-7B-slerp
+ - flammenai/flammen27-mistral-7B
+ - Azazelle/Tippy-Toppy-7b
+ - cognitivecomputations/samantha-mistral-7b
+ - flammenai/flammen23X-mistral-7B
+ - Weyaxi/Seraph-openchat-3.5-1210-Slerp
+ - Ppoyaa/StarMonarch-7B
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ license: apache-2.0
+ ---
+ ![image/png](https://huggingface.co/nbeerbower/flammen13X-mistral-7B/resolve/main/flammen13x.png)
+
+ # flammen31-mistral-7B
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [flammenai/flammen27-mistral-7B](https://huggingface.co/flammenai/flammen27-mistral-7B) as the base.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [flammenai/flammen29-mistral-7B](https://huggingface.co/flammenai/flammen29-mistral-7B)
+ * [Azazelle/Argetsu](https://huggingface.co/Azazelle/Argetsu)
+ * [flammenai/flammen30-mistral-7B](https://huggingface.co/flammenai/flammen30-mistral-7B)
+ * [allknowingroger/Neurallaymons-7B-slerp](https://huggingface.co/allknowingroger/Neurallaymons-7B-slerp)
+ * [Azazelle/Tippy-Toppy-7b](https://huggingface.co/Azazelle/Tippy-Toppy-7b)
+ * [cognitivecomputations/samantha-mistral-7b](https://huggingface.co/cognitivecomputations/samantha-mistral-7b)
+ * [flammenai/flammen23X-mistral-7B](https://huggingface.co/flammenai/flammen23X-mistral-7B)
+ * [Weyaxi/Seraph-openchat-3.5-1210-Slerp](https://huggingface.co/Weyaxi/Seraph-openchat-3.5-1210-Slerp)
+ * [Ppoyaa/StarMonarch-7B](https://huggingface.co/Ppoyaa/StarMonarch-7B)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ models:
+   - model: flammenai/flammen29-mistral-7B
+   - model: flammenai/flammen30-mistral-7B
+   - model: flammenai/flammen23X-mistral-7B
+   - model: allknowingroger/Neurallaymons-7B-slerp
+   - model: Azazelle/Argetsu
+   - model: Weyaxi/Seraph-openchat-3.5-1210-Slerp
+   - model: Azazelle/Tippy-Toppy-7b
+   - model: Ppoyaa/StarMonarch-7B
+   - model: cognitivecomputations/samantha-mistral-7b
+ merge_method: model_stock
+ base_model: flammenai/flammen27-mistral-7B
+ dtype: bfloat16
+
+ ```
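
For readers who want to reproduce a merge like this, mergekit's documented library usage applies a saved copy of the YAML above roughly as sketched below. The config path, output directory, and option values are placeholders, and option names can vary between mergekit versions; the `mergekit-yaml` CLI wraps the same machinery.

```python
# Minimal sketch, assuming the YAML config above is saved as flammen31.yml.
# Follows mergekit's documented library usage; option names may differ
# slightly across mergekit versions.
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "flammen31.yml"            # placeholder path to the config above
OUTPUT_PATH = "./flammen31-mistral-7B"  # placeholder output directory

with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when one is available
        copy_tokenizer=True,             # carry the base model's tokenizer over
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

Note that Model Stock takes no per-model weights in the config: it derives interpolation weights from the geometry of the listed checkpoints relative to the base model, which is why the YAML above lists models with no extra parameters.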
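Once merged (or downloaded), the result loads like any other Mistral-7B checkpoint with transformers. A minimal sketch follows; the repo id is an assumption inferred from the card title and the flammenai organization that hosts the sibling models.

```python
# Minimal sketch of loading and prompting the merged model.
# "flammenai/flammen31-mistral-7B" is an assumed repo id; substitute your
# local output directory if you reproduced the merge yourself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "flammenai/flammen31-mistral-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

prompt = "Explain model merging in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```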