VitalContribution committed · verified
Commit 29e9a53 · 1 parent: 2b515fb

Update README.md

Files changed (1): README.md (+10 −4)
README.md CHANGED
@@ -111,14 +111,20 @@ model-index:
 
 <img src="https://cdn-uploads.huggingface.co/production/uploads/63ae02ff20176b2d21669dd6/AID8texkGhpCPrxEtb2MF.png" width="300" />
 
-# Mozaic-7B (prev. Evangelion-7B)
+🌐 Company Website 🔗 [Mozaic AI Solutions](https://mozaic-ai-solutions.com/)
 
-We were curious to see what happens if one uses:
+---
+
+## ✨ Overview
+We were curious to see what happens if one uses:
 $$
 \text{{high-quality DPO dataset}} + \text{{merge of DPO optimized and non-DPO optimized model}}
-$$
+$$
 
-The underlying model that I used was `/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp`.
+The underlying model used was:
+[`/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp`](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp)
+
+---
 
 
 # Dataset
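The README's recipe is DPO fine-tuning applied on top of a SLERP merge of a DPO-optimized and a non-DPO-optimized model. As a rough illustration of the merge half, a mergekit-style SLERP config might look like the sketch below. The two source models are an assumption inferred from the name `OpenHermes-2.5-neural-chat-v3-3-Slerp` (OpenHermes-2.5 is SFT-only; Intel's neural-chat-v3-3 is DPO-trained), and the interpolation value is illustrative — this commit does not include the actual merge config.

```yaml
# Hypothetical mergekit SLERP config — model names, layer ranges, and the
# t value are illustrative assumptions, not the config actually used for
# Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp.
slices:
  - sources:
      - model: teknium/OpenHermes-2.5-Mistral-7B  # non-DPO-optimized source
        layer_range: [0, 32]
      - model: Intel/neural-chat-7b-v3-3          # DPO-optimized source
        layer_range: [0, 32]
merge_method: slerp
base_model: teknium/OpenHermes-2.5-Mistral-7B
parameters:
  t: 0.5   # equal spherical interpolation between the two sources
dtype: bfloat16
```

Under this recipe, the merged checkpoint would then be further aligned with DPO on a high-quality preference dataset, matching the formula in the README.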