---
tags:
- merge
- mergekit
- lazymergekit
- DiscoResearch/DiscoLM_German_7b_v1
- DRXD1000/Phoenix
- VAGOsolutions/SauerkrautLM-7b-v1-mistral
- malteos/hermeo-7b
base_model:
- DiscoResearch/DiscoLM_German_7b_v1
- DRXD1000/Phoenix
- VAGOsolutions/SauerkrautLM-7b-v1-mistral
- malteos/hermeo-7b
license: apache-2.0
language:
- de
- en
---

# Wiedervereinigung-7b-dpo
|
![image/png](https://huggingface.co/mayflowergmbh/Wiedervereinigung-7b/resolve/main/Wiedervereinigung-7b.png)

This is a DPO-aligned merge of our favourite German models, scoring 7.11 on the mt-bench-de average.
Since the original models are based on Mistral, three of them on the brilliant German LeoLM/leo-mistral-hessianai-7b, they are reunited in this merged model.
Hence the name; no nationalist ideas involved :-).

To improve result quality, the merge was then DPO-trained on a German translation of the SlimOrca DPO dataset, using hermeo-7B to generate the rejected responses.

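
To give a rough idea of what such preference data looks like, here is an illustrative sketch (hypothetical helper name and example record, not the actual data pipeline): the answer from the translated SlimOrca record is kept as the chosen response, while a completion sampled from hermeo-7b becomes the rejected one.

```python
# Illustrative sketch only, not the actual pipeline: build one DPO preference pair
# where the translated reference answer is "chosen" and a hermeo-7b completion is
# "rejected". A real setup would also apply the model's chat template.
import transformers

rejector = transformers.pipeline(
    "text-generation", model="malteos/hermeo-7b", device_map="auto"
)

def make_dpo_pair(instruction: str, reference_answer: str) -> dict:
    rejected = rejector(
        instruction, max_new_tokens=256, do_sample=True, return_full_text=False
    )[0]["generated_text"]
    return {"prompt": instruction, "chosen": reference_answer, "rejected": rejected}

pair = make_dpo_pair(
    "Was ist ein deutsches Large Language Model?",     # example prompt
    "Ein deutsches Large Language Model ist ein ...",  # placeholder reference answer
)
```
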
|
If you are GPU-poor like me, you can now use [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) to train with German datasets.

Kudos to the authors of the original models at [DiscoResearch](https://huggingface.co/DiscoResearch) and [VAGOsolutions](https://huggingface.co/VAGOsolutions), as well as [Malte Ostendorff](https://huggingface.co/malteos) and [Matthias Uhlig](https://huggingface.co/DRXD1000). We are your fan club.

This model was brought to you, and the NVIDIA bill was paid, by [Mayflower GmbH](https://mayflower.de/).

## Benchmark results: mt-bench-de

Is the merged model alone already good? Well, of course. But it is even better with the help of some DPO tuning.

|
```json
{
    "first_turn": 7.3,
    "second_turn": 6.925,
    "categories": {
        "writing": 8.425,
        "roleplay": 8.6,
        "reasoning": 5.4,
        "math": 4.35,
        "coding": 4.3,
        "extraction": 7.975,
        "stem": 8.5,
        "humanities": 9.35
    },
    "average": 7.1125
}
```
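The reported average lines up with both the mean of the two turn scores and the mean of the eight category scores; a quick check using only the numbers from the block above:

```python
# Quick check: the reported 7.1125 average matches both the per-turn and the
# per-category means of the scores listed above.
turn_scores = [7.3, 6.925]
category_scores = [8.425, 8.6, 5.4, 4.35, 4.3, 7.975, 8.5, 9.35]

print(round(sum(turn_scores) / len(turn_scores), 4))          # 7.1125
print(round(sum(category_scores) / len(category_scores), 4))  # 7.1125
```
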
|
## Other Versions

A big thank you to [LoneStriker](https://huggingface.co/LoneStriker) for the quantized models.

| Name | Quant method | Bits |
| ---- | ---- | ---- |
| [Wiedervereinigung-7b-dpo](https://huggingface.co/mayflowergmbh/Wiedervereinigung-7b-dpo) | Unquantized | 16 |
| [Wiedervereinigung-7b-dpo-GPTQ](https://huggingface.co/LoneStriker/Wiedervereinigung-7b-dpo-GPTQ) | GPTQ | 4 |
| [Wiedervereinigung-7b-dpo-AWQ](https://huggingface.co/LoneStriker/Wiedervereinigung-7b-dpo-AWQ) | AWQ | 4 |
| [Wiedervereinigung-7b-dpo-GGUF](https://huggingface.co/LoneStriker/Wiedervereinigung-7b-dpo-GGUF) | GGUF | 3-8 |
| [Wiedervereinigung-7b-dpo-8.0bpw-h8-exl2](https://huggingface.co/LoneStriker/Wiedervereinigung-7b-dpo-8.0bpw-h8-exl2) | EXL2 | 8 |
| [Wiedervereinigung-7b-dpo-6.0bpw-h6-exl2](https://huggingface.co/LoneStriker/Wiedervereinigung-7b-dpo-6.0bpw-h6-exl2) | EXL2 | 6 |
| [Wiedervereinigung-7b-dpo-5.0bpw-h6-exl2](https://huggingface.co/LoneStriker/Wiedervereinigung-7b-dpo-5.0bpw-h6-exl2) | EXL2 | 5 |
| [Wiedervereinigung-7b-dpo-4.0bpw-h6-exl2](https://huggingface.co/LoneStriker/Wiedervereinigung-7b-dpo-4.0bpw-h6-exl2) | EXL2 | 4 |
| [Wiedervereinigung-7b-dpo-3.0bpw-h6-exl2](https://huggingface.co/LoneStriker/Wiedervereinigung-7b-dpo-3.0bpw-h6-exl2) | EXL2 | 3 |

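
If you want to run one of the quantized variants, the GPTQ repository can usually be loaded with plain `transformers` just like the full-precision model. The following is a minimal, untested sketch, assuming the `optimum` and `auto-gptq` packages are installed and a CUDA GPU is available; the GGUF files are intended for llama.cpp-style runtimes instead.

```python
# Hypothetical sketch: load the 4-bit GPTQ variant instead of the fp16 weights.
# Assumes `pip install transformers optimum auto-gptq` and a CUDA GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LoneStriker/Wiedervereinigung-7b-dpo-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Was ist ein deutsches Large Language Model?"}],
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
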
|
Wiedervereinigung-7b is a [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing) merge of:

* [DiscoResearch/DiscoLM_German_7b_v1](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1)
* [DRXD1000/Phoenix](https://huggingface.co/DRXD1000/Phoenix)
* [VAGOsolutions/SauerkrautLM-7b-v1-mistral](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral)
* [malteos/hermeo-7b](https://huggingface.co/malteos/hermeo-7b)
|

## 🧩 Configuration

```yaml
models:
  - model: LeoLM/leo-mistral-hessianai-7b
    # No parameters necessary for base model
  - model: DiscoResearch/DiscoLM_German_7b_v1
    parameters:
      density: 0.6
      weight: 0.25
  - model: DRXD1000/Phoenix
    parameters:
      density: 0.6
      weight: 0.25
  - model: VAGOsolutions/SauerkrautLM-7b-v1-mistral
    parameters:
      density: 0.6
      weight: 0.25
  - model: malteos/hermeo-7b
    parameters:
      density: 0.6
      weight: 0.25
merge_method: dare_ties
base_model: LeoLM/leo-mistral-hessianai-7b
parameters:
  int8_mask: true
dtype: bfloat16
```
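For intuition about what `dare_ties` does with the `density` and `weight` values above: each fine-tuned model contributes its delta to the base model, a random fraction of that delta (roughly `1 - density`) is dropped and the rest rescaled, and the sparsified deltas are added back into the base weights according to their `weight` (the TIES part additionally resolves sign conflicts between models). The toy snippet below only illustrates that idea on a single tensor; it is not mergekit's implementation.

```python
# Toy illustration of DARE-style delta sparsification and weighted merging.
# Not mergekit's implementation; the real dare_ties method also performs
# TIES sign election across models before summing the deltas.
import torch

def dare_sparsify(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Keep ~`density` of the delta entries at random and rescale to preserve the expectation."""
    mask = torch.rand_like(delta) < density
    return delta * mask / density

torch.manual_seed(0)
base = torch.randn(4, 4)                                        # stand-in for the base model weights
finetunes = [base + 0.1 * torch.randn(4, 4) for _ in range(4)]  # stand-ins for the four merged models

merged = base.clone()
for ft in finetunes:
    merged += 0.25 * dare_sparsify(ft - base, density=0.6)      # weight: 0.25, density: 0.6

print(merged)
```
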
|
## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mayflowergmbh/Wiedervereinigung-7b-dpo"
messages = [{"role": "user", "content": "Was ist ein deutsches Large Language Model?"}]

# Build the prompt with the model's chat template and generate with a text-generation pipeline.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```