johannhartmann committed: Create README.md

README.md (new file)
---
tags:
- merge
- mergekit
- lazymergekit
- FelixChao/WestSeverus-7B-DPO-v2
- mayflowergmbh/Wiedervereinigung-7b-dpo-laser
- cognitivecomputations/openchat-3.5-0106-laser
base_model:
- FelixChao/WestSeverus-7B-DPO-v2
- mayflowergmbh/Wiedervereinigung-7b-dpo-laser
- cognitivecomputations/openchat-3.5-0106-laser
license: apache-2.0
language:
- de
---

# Brezn-7B

Brezn-7B is a DPO-aligned merge of the following models, built with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
* [mayflowergmbh/Wiedervereinigung-7b-dpo-laser](https://huggingface.co/mayflowergmbh/Wiedervereinigung-7b-dpo-laser)
* [cognitivecomputations/openchat-3.5-0106-laser](https://huggingface.co/cognitivecomputations/openchat-3.5-0106-laser)

## mt-bench-de

```json
{
    "first_turn": 7.6625,
    "second_turn": 7.31875,
    "categories": {
        "writing": 8.75,
        "roleplay": 8.5,
        "reasoning": 6.1,
        "math": 5.05,
        "coding": 5.4,
        "extraction": 7.975,
        "stem": 9,
        "humanities": 9.15
    },
    "average": 7.490625
}
```
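
For reference, the reported average is simply the mean of the two turn scores. A quick sanity check in plain Python, with the values copied from the results above:

```python
# mt-bench-de reports a mean score per turn; the overall score is their midpoint.
first_turn = 7.6625
second_turn = 7.31875

print((first_turn + second_turn) / 2)  # 7.490625, matching the reported "average"
```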

## 🧩 Configuration

```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # no parameters necessary for base model
  - model: FelixChao/WestSeverus-7B-DPO-v2
    parameters:
      density: 0.60
      weight: 0.30
  - model: mayflowergmbh/Wiedervereinigung-7b-dpo-laser
    parameters:
      density: 0.65
      weight: 0.40
  - model: cognitivecomputations/openchat-3.5-0106-laser
    parameters:
      density: 0.6
      weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
random_seed: 0
```
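
To reproduce the merge, this config can be passed to mergekit's `mergekit-yaml` CLI. A minimal sketch in the same notebook style as the usage example below; it assumes the YAML above is saved as `config.yaml`, and the output directory name is illustrative:

```python
!pip install -qU mergekit
!mergekit-yaml config.yaml ./Brezn-7B --copy-tokenizer --lazy-unpickle
```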

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mayflowergmbh/Brezn-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Render the chat messages with the model's chat template, then generate.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
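
Since the card lists `de` under `language`, the same snippet works with a German prompt, e.g. `messages = [{"role": "user", "content": "Was ist ein großes Sprachmodell?"}]`.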