VitalContribution committed
Commit 826333a · verified · 1 parent: dcbd7ca

Update README.md

Files changed (1): README.md (+6 −6)
README.md CHANGED
@@ -12,18 +12,17 @@ library_name: adapter-transformers
 
 I was just curious to see if something special might happen if one uses:
 $$
-\text{Evangelion} = \text{high-quality DPO dataset} + \text{merge of DPO optimized model and non-DPO optimized model}
+\text{Evangelion} = \text{high-quality DPO dataset} + \text{merge of DPO optimized and non-DPO optimized model}
 $$
 
-The underlying model that I used was `/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp`.
-
-
+The underlying model that I used was `/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp`.
+**Disclaimer:** I am by no means a pro; therefore, no guarantees. I just wanted to put a cool anime picture on my model card.
 
 
 # Dataset
 Dataset: `/argilla/distilabel-intel-orca-dpo-pairs`
 
-The dataset was quality over quantity roughly ~3000 samples but they were high quality (aqccording to the chosen_score).
+The dataset favored quality over quantity: roughly 3,000 samples, but high-quality ones (according to the `chosen_score`).
 The following filters were applied to the original dataset:
 ```python
 dataset = dataset.filter(
@@ -35,7 +34,8 @@ dataset = dataset.filter(
 ```
 
 # Chat Template
-I decided to go with the ChatML template which I also integrated into this models tokenizer.
+I decided to go with ChatML, which is the template used for OpenHermes 2.5.
+I also integrated the chat template into the model's tokenizer.
 ```
 <|im_start|>system
 {system}<|im_end|>
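The base model referenced above is itself a SLERP merge. As a toy illustration only (not the actual merge recipe, which operates on full model weight tensors), spherical linear interpolation between two vectors can be sketched in pure Python:

```python
import math

def slerp(v0, v1, t):
    """Spherical linear interpolation between two vectors (toy sketch)."""
    dot = sum(a * b for a, b in zip(v0, v1))
    n0 = math.sqrt(sum(a * a for a in v0))
    n1 = math.sqrt(sum(b * b for b in v1))
    # Clamp for numerical safety before taking the arccosine.
    cos_theta = max(-1.0, min(1.0, dot / (n0 * n1)))
    theta = math.acos(cos_theta)
    if theta < 1e-6:  # nearly parallel: fall back to linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# Midpoint between two orthogonal unit vectors stays on the unit circle.
merged = slerp([1.0, 0.0], [0.0, 1.0], 0.5)
```

Unlike plain averaging, SLERP preserves the norm along the interpolation path, which is the usual motivation for using it when merging model weights.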
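The body of the `dataset.filter(...)` call is truncated in the diff above, so the exact criteria are unknown. As a rough sketch of the kind of `chosen_score`-based quality filter described, using plain Python on toy records (the threshold of 8.0 and the records are assumptions for illustration, not the author's actual filter):

```python
records = [
    {"chosen_score": 9.5, "chosen": "detailed answer", "rejected": "weak answer"},
    {"chosen_score": 4.0, "chosen": "so-so answer", "rejected": "weak answer"},
    {"chosen_score": 8.5, "chosen": "solid answer", "rejected": "weak answer"},
]

MIN_CHOSEN_SCORE = 8.0  # assumed threshold, for illustration only

# Keep only DPO pairs whose preferred response was rated highly.
filtered = [r for r in records if r["chosen_score"] >= MIN_CHOSEN_SCORE]
print(len(filtered))  # → 2
```

With the Hugging Face `datasets` library, the same predicate would be passed to `dataset.filter(...)` as in the snippet shown in the diff.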
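The ChatML template excerpted above can be sketched as a small formatting function (`format_chatml` is a hypothetical helper for illustration; in practice the template baked into the tokenizer handles this):

```python
def format_chatml(messages):
    """Render a list of {role, content} dicts in ChatML, ending with an
    assistant header so the model continues from there."""
    out = []
    for m in messages:
        out.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    out.append("<|im_start|>assistant\n")
    return "\n".join(out)

prompt = format_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

Each turn is wrapped in `<|im_start|>{role}` … `<|im_end|>` markers, matching the `system` example in the template above.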