FelixChao committed
Commit 07bf131 • 1 Parent(s): e9240cd

Update README.md

Files changed (1): README.md (+19 -12)
README.md CHANGED
@@ -34,10 +34,9 @@ WestSeverus-7B-DPO-v2 can be used in mathematics, chemical, physics and even cod
   - HumanEval_Plus
   - MBPP
   - MBPP_Plus
-4. [Prompt Format](#prompt-format)
-5. [Inference Example Code](#inference-code)
-6. [Quantized Models](#🛠️-quantized-models)
-7. [Gratitude](#Gratitude)
+4. [Prompt Format](#⚗️-prompt-format)
+5. [Quantized Models](#🛠️-quantized-models)
+6. [Gratitude](#🙏-gratitude)
 
 ## 🪄 Nous Benchmark Results
 
@@ -80,13 +79,17 @@ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-le
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64a53b0747a04f0512941b6f/lL72F41NUueFMP7p-fPl7.png)
 
-## Prompt_Format
+## ⚗️ Prompt Format
 
-TBD.
+WestSeverus-7B-DPO-v2 was trained using the ChatML prompt template with system prompts. An example follows below:
 
-## Inference Example Code
-
-TBD.
+```
+<|im_start|>system
+{system_message}<|im_end|>
+<|im_start|>user
+{prompt}<|im_end|>
+<|im_start|>assistant
+```
 
 ## 🛠️ Quantized Models
 
@@ -99,7 +102,11 @@ TBD.
 * **GPTQ**: https://huggingface.co/TheBloke/WestSeverus-7B-DPO-GPTQ
 * **AWQ**: https://huggingface.co/TheBloke/WestSeverus-7B-DPO-AWQ
 
-## Gratitude
-
-TBD.
+## 🙏 Gratitude
+
+* Thanks to @senseable for [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2).
+* Thanks to @jondurbin for [jondurbin/truthy-dpo-v0.1 dataset](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1).
+* Thanks to @Charles Goddard for MergeKit.
+* Thanks to @TheBloke, @s3nh for Quantized Models.
+* Thanks to @mlabonne, @CultriX for YALL - Yet Another LLM Leaderboard.
+* Thank you to all the other people in the Open Source AI community who utilized this model for further research and improvement.
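The ChatML template added in this commit can also be assembled programmatically. A minimal sketch follows; the helper name `build_chatml_prompt` is illustrative and not part of the model repo, and in practice the tokenizer's built-in chat template (e.g. `tokenizer.apply_chat_template` in transformers) would normally produce this string:

```python
# Sketch: build a single-turn ChatML prompt matching the template in the README.
# The prompt ends at the opening of the assistant turn so the model generates
# the assistant's reply as a continuation.

def build_chatml_prompt(system_message: str, prompt: str) -> str:
    """Assemble a ChatML prompt with a system message and one user turn."""
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(build_chatml_prompt("You are a helpful assistant.", "What is DPO?"))
```

The string is plain text; the `<|im_start|>`/`<|im_end|>` markers are special tokens that the model's tokenizer maps to dedicated token ids.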