ohyeah1 committed · verified
Commit 8d07059 · Parent(s): b8fda93

Update README.md

Files changed (1): README.md (+41 −33)
README.md CHANGED
@@ -1,33 +1,41 @@
- ---
- base_model: []
- library_name: transformers
- tags:
- - mergekit
- - merge
-
- ---
- # nous-rp-llama-3
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- models:
-   - model: Gryphe/Pantheon-RP-1.0-8b-Llama-3
-     parameters:
-       weight: 0.7
-       density: 0.4
-   - model: NousResearch/Hermes-2-Pro-Llama-3-8B
-     parameters:
-       weight: 0.4
-       density: 0.4
- merge_method: dare_ties
- base_model: Undi95/Meta-Llama-3-8B-hf
- parameters:
-   normalize: false
-   int8_mask: true
- dtype: bfloat16
- ```
+ ---
+ base_model: []
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+
+ ---
+ # Pantheon-Hermes-rp
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## PROMPT FORMAT: ChatML
+
+ A very good RP model. It can be very unhinged, yet it is also surprisingly smart.
+
+ Tested with these sampling settings:
+ - Temperature: 1.4
+ - min_p: 0.1
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ models:
+   - model: Gryphe/Pantheon-RP-1.0-8b-Llama-3
+     parameters:
+       weight: 0.7
+       density: 0.4
+   - model: NousResearch/Hermes-2-Pro-Llama-3-8B
+     parameters:
+       weight: 0.4
+       density: 0.4
+ merge_method: dare_ties
+ base_model: Undi95/Meta-Llama-3-8B-hf
+ parameters:
+   normalize: false
+   int8_mask: true
+ dtype: bfloat16
+ ```
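The config merges with `merge_method: dare_ties` at `density: 0.4`. As a rough illustration only (not mergekit's actual implementation), DARE's drop-and-rescale step keeps each task-vector entry with probability equal to the density and rescales survivors by `1/density`, so the expected value of the merged delta is preserved:

```python
import random

def dare_drop_rescale(delta, density=0.4, seed=0):
    """Sketch of DARE's drop-and-rescale step.

    Each entry of the task vector (fine-tuned weights minus base
    weights) is kept with probability `density`; survivors are
    rescaled by 1/density so the expected delta is unchanged.
    """
    rng = random.Random(seed)
    return [d / density if rng.random() < density else 0.0
            for d in delta]

# Roughly 40% of the entries survive, scaled up to 1.0 / 0.4 = 2.5.
sparse = dare_drop_rescale([1.0] * 1000, density=0.4, seed=0)
```

In the full merge, the sparsified deltas from both models would then be combined TIES-style using the `weight` values (0.7 and 0.4) and added back onto the base model; a config like this is normally executed with mergekit's `mergekit-yaml` command.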
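The suggested sampler settings (temperature 1.4, min-p 0.1) can be sketched in plain Python. This is a hand-rolled illustration of how min-p filtering works, not code from this repository: tokens whose probability falls below `min_p` times the top token's probability are dropped before sampling.

```python
import math

def min_p_filter(logits, temperature=1.4, min_p=0.1):
    """Apply temperature, softmax, then min-p filtering.

    Keeps only tokens whose probability is at least
    min_p * (probability of the most likely token),
    then renormalizes the survivors.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                                # for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    threshold = min_p * max(probs)
    kept = [p if p >= threshold else 0.0 for p in probs]
    norm = sum(kept)
    return [p / norm for p in kept]

# The unlikely third token is filtered out entirely.
out = min_p_filter([2.0, 1.0, -3.0])
```

A high temperature like 1.4 flattens the distribution for more varied roleplay output, while min-p prunes the long tail of implausible tokens that the flattening would otherwise expose.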