papahawk committed
Commit 1ff76c4
1 parent: 2ff5d07

Update README.md

Files changed (1): README.md (+3 −3)
README.md CHANGED
@@ -175,10 +175,10 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-<img src="https://alt-web.xyz/images/rainbow.png" alt="Rainbow Solutions" width="800" style="margin-left:auto; margin-right:auto; display:block"/>
-
 <h1 style='text-align: center'>Devi 7B</h1>
-<h2 style='text-align: center'>Fork of Zephyr 7B β</h2>
+<h2 style='text-align: center'>Fork of Zephyr 7B β</h2>
+<h2 style='text-align: center'><em>All thanks to HuggingFaceH4 for their work!</em></h2>
+<img src="https://alt-web.xyz/images/rainbow.png" alt="Rainbow Solutions" width="800" style="margin-left:auto; margin-right:auto; display:block"/>
 
 Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr-7B-β is the second model in the series, and is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) that was trained on a mix of publicly available, synthetic datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290). We found that removing the in-built alignment of these datasets boosted performance on [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and made the model more helpful. However, this means the model is likely to generate problematic text when prompted to do so. You can find more details in the [technical report](https://arxiv.org/abs/2310.16944).
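For context on the training method named in the card: DPO fine-tunes the policy directly on preference pairs, with no separate reward model. A sketch of the objective from the linked Rafailov et al. paper, where π_θ is the model being trained, π_ref is a frozen reference model (typically the SFT checkpoint), β is a temperature-like hyperparameter, and (x, y_w, y_l) are a prompt with its preferred and rejected responses:

```latex
% DPO objective (Rafailov et al., 2023): maximize the margin between the
% log-probability ratios of the preferred (y_w) and rejected (y_l) responses.
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) =
  -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}
  \left[ \log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
    - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
  \right) \right]
```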
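The card in this diff does not include a usage snippet, so here is a minimal sketch using the 🤗 Transformers pipeline API. It loads the upstream HuggingFaceH4/zephyr-7b-beta checkpoint, since this diff never states a repo id for Devi 7B; substitute the Devi checkpoint id once it is published.

```python
import torch
from transformers import pipeline

# Load the upstream checkpoint that Devi 7B is described as a fork of.
pipe = pipeline(
    "text-generation",
    model="HuggingFaceH4/zephyr-7b-beta",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Zephyr-style models are chat-tuned, so format the input with the
# tokenizer's built-in chat template rather than raw text.
messages = [
    {"role": "system", "content": "You are a friendly, helpful assistant."},
    {"role": "user", "content": "Explain DPO fine-tuning in one sentence."},
]
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```

`apply_chat_template` renders the messages into the model's Zephyr chat format (`<|system|>`, `<|user|>`, `<|assistant|>` markers), which is what the DPO-tuned checkpoint expects at inference time.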