---
license: apache-2.0
datasets:
- argilla/ultrafeedback-binarized-preferences
language:
- de
tags:
- dpo
- alignment-handbook
---

<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6474c16e7d131daf633db8ad/-mL8PSG00X2lEw1lb8E1Q.png">
</div>

# Model Card for Phoenix

Phoenix is a model trained using Direct Preference Optimization (DPO). This is the first version, trained following the process of the Alignment Handbook from Hugging Face.

In contrast to Zephyr and Notus, this model has been trained on German instruction and DPO data. Specifically, German translations of HuggingFaceH4/ultrachat_200k and HuggingFaceH4/ultrafeedback_binarized were created, in addition to a series of instruction datasets. The LLM haoranxu/ALMA-13B was used for the translation.

While the Mistral model performs really well, it is not well suited for the German language. Therefore we used the excellent LeoLM/leo-mistral-hessianai-7b as the base model.

Thanks to this new type of training, Phoenix is not only able to compete with the Mistral model from LeoLM but also **beats the Llama-70b-chat model in 2 MT-Bench categories**.

This model **wouldn't have been possible without the amazing work of Hugging Face, LeoLM, OpenBMB, Argilla, the ALMA team, and many others in the AI community.**

## MT-Bench-DE Scores

## Model Details

### Model Description

- **Developed by:** Matthias Uhlig (based on the previous efforts and amazing work of HuggingFace H4, Argilla, and MistralAI)
- **Shared by:** Matthias Uhlig
- **Model type:** GPT-like 7B model, DPO fine-tuned
- **Language(s) (NLP):** German
- **License:** Apache 2.0 (same as alignment-handbook/zephyr-7b-dpo-full)
- **Finetuned from model:** [`LeoLM/leo-mistral-hessianai-7b`](https://huggingface.co/LeoLM/leo-mistral-hessianai-7b)

### Model Sources

- **Repository:** -
- **Paper:** in progress
- **Demo:** -

## Training Details

### Training Hardware

We used a VM with 8 x A100 80GB GPUs hosted on Runpod.io.

### Training Data

We used a newly translated version of [`HuggingFaceH4/ultrachat_200k`](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) and [`argilla/ultrafeedback-binarized-preferences`](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences).

The data used for training will be made public after additional quality inspection.
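
Until then, the public (English) preference dataset can be inspected directly. A minimal sketch using the `datasets` library (standard usage, not an official example from this card):

```python
from datasets import load_dataset

# Load the public English preference data; the German translation used for
# Phoenix is not yet public, per the note above.
ds = load_dataset("argilla/ultrafeedback-binarized-preferences", split="train")

print(ds)     # shows the column names and number of rows
print(ds[0])  # inspect one preference example
```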

## Prompt template

We use the same prompt template as [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta):

```
<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
```

It is also possible to use the model in a multi-turn setup:

```
<|system|>
</s>
<|user|>
{prompt_1}</s>
<|assistant|>
{answer_1}</s>
<|user|>
{prompt_2}</s>
<|assistant|>
```
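
Instead of assembling these strings by hand, the conversation can be formatted with the tokenizer's `apply_chat_template` method. This is a minimal sketch that assumes the tokenizer ships a chat template matching the format above:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("DRXD1000/Phoenix")

# A multi-turn conversation as a list of role/content messages
messages = [
    {"role": "user", "content": "Erkläre mir was KI ist."},
    {"role": "assistant", "content": "KI steht für Künstliche Intelligenz."},
    {"role": "user", "content": "Nenne mir ein Anwendungsbeispiel."},
]

# add_generation_prompt=True appends the final <|assistant|> tag so the
# model continues with its answer
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```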

## Usage

You will first need to install `transformers` and `accelerate` (just to ease the device placement), then you can run any of the following:

### Via `generate`

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("DRXD1000/Phoenix", torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("DRXD1000/Phoenix")

prompt = """<|system|>
</s>
<|user|>
Erkläre mir was KI ist.</s>
<|assistant|>
"""

# The prompt string is already in chat format, so it is tokenized directly
inputs = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(inputs, num_return_sequences=1, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
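
### Via `pipeline`

The same generation can also be run through the high-level `pipeline` API. This is a minimal sketch based on standard `transformers` usage, not an official example from this card:

```python
import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="DRXD1000/Phoenix", torch_dtype=torch.bfloat16, device_map="auto")

# Let the pipeline's tokenizer render the chat format shown above
messages = [{"role": "user", "content": "Erkläre mir was KI ist."}]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```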

## Ethical Considerations and Limitations

As with all LLMs, the potential outputs of `DRXD1000/Phoenix` cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of `DRXD1000/Phoenix`, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see Meta's [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide/).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of how they map onto a DPO training setup follows the list):
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
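
For context, here is a minimal sketch of how these hyperparameters could map onto a TRL `DPOTrainer` run in the style of the Alignment Handbook. The toy dataset, `beta=0.1`, and the sequence-length limits are illustrative assumptions, not values confirmed by this card:

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model = AutoModelForCausalLM.from_pretrained("LeoLM/leo-mistral-hessianai-7b")
tokenizer = AutoTokenizer.from_pretrained("LeoLM/leo-mistral-hessianai-7b")

# Toy stand-in for the real preference data: DPOTrainer expects
# "prompt", "chosen" and "rejected" columns.
train_dataset = Dataset.from_dict({
    "prompt": ["<|user|>\nErkläre mir was KI ist.</s>\n<|assistant|>\n"],
    "chosen": ["KI steht für Künstliche Intelligenz ...</s>"],
    "rejected": ["Keine Ahnung.</s>"],
})

# Hyperparameters from the list above; launched on 8 GPUs
# (8 devices x train_batch_size 8 = total_train_batch_size 64).
training_args = TrainingArguments(
    output_dir="phoenix-dpo",
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=1,
    bf16=True,
)

trainer = DPOTrainer(
    model,
    ref_model=None,  # with None, TRL uses a frozen copy of the model as reference
    args=training_args,
    beta=0.1,        # common DPO default, assumed here
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    max_length=1024,
    max_prompt_length=512,
)
trainer.train()
```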

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1