Update README.md
README.md
---
library_name: transformers
tags:
- mergekit
- merge
- llama
- conversational
license: llama3
---

# L3-Hecate-8B-v1.2

![Hecate](https://huggingface.co/Azazelle/L3-Hecate-8B-v1.2/resolve/main/img-lk8aRDQYDBJf0C02UowUk.jpeg)

## About:

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

**Recommended Samplers:**

```
Temperature - 1.0
TFS - 0.7
Smoothing Factor - 0.3
Smoothing Curve - 1.1
Repetition Penalty - 1.08
```
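
As a rough sketch of these settings with plain `transformers` (only temperature and repetition penalty are exposed there; TFS and the smoothing factor/curve require a backend such as koboldcpp or text-generation-webui):

```python
# Minimal generation sketch for this card's recommended samplers.
# transformers exposes temperature and repetition_penalty; TFS and
# smoothing factor/curve are backend-specific and omitted here.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Azazelle/L3-Hecate-8B-v1.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Introduce yourself in character."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.0,          # Temperature - 1.0
    repetition_penalty=1.08,  # Repetition Penalty - 1.08
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```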

### Merge Method

This model was merged using a series of model_stock merges, followed by ExPO. It uses a mix of roleplay models to improve performance.
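
ExPO (model extrapolation) pushes a merged model further along its own difference from the base model rather than interpolating; in the configuration below it appears as the final `task_arithmetic` stage with weights above 1.0. A minimal sketch of the arithmetic, assuming two compatible `state_dict`s (the function names are illustrative, not mergekit's API):

```python
# Sketch of the ExPO idea: new = base + alpha * (merged - base),
# with alpha > 1.0 extrapolating past the merged model.
import torch

def expo_alpha(name: str) -> float:
    # Mirrors the filters in the config below: MLP deltas boosted most,
    # attention slightly, everything else left unchanged.
    if ".mlp." in name:
        return 1.15
    if ".self_attn." in name:
        return 1.025
    return 1.0

def expo(
    base: dict[str, torch.Tensor], merged: dict[str, torch.Tensor]
) -> dict[str, torch.Tensor]:
    out = {}
    for name, w in base.items():
        delta = merged[name] - w                   # task vector for this tensor
        out[name] = w + expo_alpha(name) * delta   # alpha > 1 extrapolates
    return out
```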

### Configuration

The following YAML configuration was used to produce this model:

```yaml
---
# Concise-Mopey
models:
  - model: Salesforce/LLaMA-3-8B-SFR-Iterative-DPO-Concise-R
    parameters:
      weight: 1.0
  - model: failspy/Llama-3-8B-Instruct-MopeyMule
    parameters:
      weight: 1.0
merge_method: task_arithmetic
base_model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
  normalize: false
dtype: float32
vocab_type: bpe
name: Concise-Mopey

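# The `model+lora` entries in the next stage apply each LoRA adapter to the
# Concise-Mopey intermediate before model_stock averages the results.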
---
# Mopey RP Mix
models:
  - model: Concise-Mopey+Azazelle/Llama-3-Sunfall-8b-lora
  - model: Concise-Mopey+Azazelle/Llama-3-8B-Abomination-LORA
  - model: Concise-Mopey+Azazelle/llama3-8b-hikikomori-v0.4
  - model: Concise-Mopey+Azazelle/Llama-3-Instruct-LiPPA-LoRA-8B
  - model: Concise-Mopey+Azazelle/BlueMoon_Llama3
  - model: Concise-Mopey+Azazelle/Llama3_RP_ORPO_LoRA
  - model: Concise-Mopey+mpasila/Llama-3-LimaRP-Instruct-LoRA-8B
  - model: Concise-Mopey+Azazelle/Llama-3-LongStory-LORA
merge_method: model_stock
base_model: failspy/Llama-3-8B-Instruct-MopeyMule
dtype: float32
vocab_type: bpe
name: mopey_rp

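# The next stage stock-merges several roleplay fine-tunes together with the
# mopey_rp intermediate, anchored on Meta-Llama-3-8B-Instruct.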
---
models:
  - model: Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
  - model: Sao10K/L3-8B-Tamamo-v1
  - model: Sao10K/L3-8B-Niitama-v1
  - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
  - model: nothingiisreal/L3-8B-Celeste-v1
  - model: Jellywibble/lora_120k_pref_data_ep2
  - model: Nitral-AI/Hathor_Stable-v0.2-L3-8B
  - model: mopey_rp
merge_method: model_stock
base_model: NousResearch/Meta-Llama-3-8B-Instruct
dtype: float32
vocab_type: bpe
name: hq_rp

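# ExPO step: with normalize: false, task_arithmetic weights above 1.0
# extrapolate hq_rp's delta from the base (1.15x MLP, 1.025x attention).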
---
# ExPO
models:
  - model: hq_rp
    parameters:
      weight:
        - filter: mlp
          value: 1.15
        - filter: self_attn
          value: 1.025
        - value: 1.0
merge_method: task_arithmetic
base_model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
  normalize: false
dtype: float32
vocab_type: bpe
```
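
Assuming current mergekit behavior, a multi-document configuration with named intermediate stages like this one is run with the `mergekit-mega` entry point rather than `mergekit-yaml`, e.g. `mergekit-mega hecate.yaml ./L3-Hecate-8B-v1.2` (the file and output paths here are illustrative).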