---
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- text-generation-inference
- merge
---

### Obsolete, see https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-merge-v5

***

**Dolphin-2.2-yi-34b-200k**, **Nous-Capybara-34B**, **Tess-M-v1.4**, **Airoboros-3_1-yi-34b-200k**, **PlatYi-34B-200K-Q**, and **Una-xaberius-34b-v1beta** merged with a new, experimental implementation of "dare ties" via mergekit.

Quantized with the git version of exllamav2, using 200 rows (400K tokens) of calibration data: a long Orca-Vicuna-format chat, a selected sci-fi story, and a fantasy story. This should yield better chat/storytelling performance than the short, default wikitext calibration.

4bpw is enough for **~47K context on a 24GB GPU.** I highly recommend running it in exui for speed at long context. I go into more detail in this [Reddit post](https://old.reddit.com/r/LocalLLaMA/comments/1896igc/how_i_run_34b_models_at_75k_context_on_24gb_fast/).

Merged with the following config, and the tokenizer from chargoddard's Yi-Llama:
```
models:
  - model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
    # no parameters necessary for base model
  - model: /home/alpha/Storage/Models/Raw/migtissera_Tess-34B-v1.4
    parameters:
      weight: 0.19
      density: 0.6
  - model: /home/alpha//Storage/Models/Raw/bhenrym14_airoboros-3_1-yi-34b-200k
    parameters:
      weight: 0.14
      density: 0.5
  - model: /home/alpha/Storage/Models/Raw/Nous-Capybara-34B
    parameters:
      weight: 0.19
      density: 0.6
  - model: /home/alpha/Storage/Models/Raw/kyujinpy_PlatYi-34B-200K-Q
    parameters:
      weight: 0.14
      density: 0.5
  - model: /home/alpha/FastModels/ehartford_dolphin-2.2-yi-34b-200k
    parameters:
      weight: 0.19
      density: 0.6
  - model: /home/alpha/FastModels/fblgit_una-xaberius-34b-v1beta
    parameters:
      weight: 0.15
      density: 0.08
merge_method: dare_ties
base_model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
parameters:
  int8_mask: true
dtype: bfloat16
```

First exllama quantization pass:
```
python convert.py --in_dir //home/alpha/FastModels/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties -o /home/alpha/FastModels/scratch -om /home/alpha/FastModels/mes.json --cal_dataset /home/alpha/Documents/smol.parquet -l 2048 -r 80 -ml 2048 -mr 40 -gr 40 -ss 4096 -nr -b 4.0 -hb 6
```

Second exllama quantization pass:
```
python convert.py --in_dir /home/alpha/FastModels/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties -o /home/alpha/FastModels/scratch -m /home/alpha/FastModels/mes.json --cal_dataset /home/alpha/Documents/medium.parquet -l 2048 -r 200 -ml 2048 -mr 40 -gr 200 -ss 4096 -b 4.0 -hb 6 -cf /home/alpha/FastModels/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-exl2-4bpw-fiction -nr
```

## Testing Notes

Various densities were tested with perplexity tests and high context prompts. Relatively high densities seem to perform better, contrary to the findings of the Super Mario paper.

Weights that sum to 1 seem to be optimal.

DARE ties also produces seemingly better, lower-perplexity merges than a regular ties merge, task arithmetic, or a SLERP merge.

Xaberius is not a 200K model, so it was merged at a very low density to preserve Yi 200K's long-context performance while still inheriting some of Xaberius's performance.

I chose not to include other finetunes because they aren't trained on the 200K base. If any other 200K finetunes pop up, let me know.
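For intuition, the "dare ties" method named above can be sketched roughly like this. This is a toy numpy illustration of the idea (drop a random fraction of each finetune's delta, rescale the survivors, resolve sign conflicts by majority), not mergekit's actual implementation:

```python
import numpy as np

def dare_ties_merge(base, finetunes, weights, densities, rng=None):
    """Toy DARE + ties merge on flat parameter vectors.

    For each finetune: take its delta from the base, randomly keep
    each element with probability `density`, rescale kept elements
    by 1/density (DARE), then weight it. Ties step: only deltas
    agreeing with the dominant sign per parameter survive.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    deltas = []
    for params, w, d in zip(finetunes, weights, densities):
        delta = params - base
        mask = rng.random(delta.shape) < d      # keep with prob = density
        deltas.append(w * (delta * mask) / d)   # rescale survivors
    stacked = np.stack(deltas)
    sign = np.sign(stacked.sum(axis=0))         # dominant sign per param
    agree = np.sign(stacked) == sign
    merged_delta = np.where(agree, stacked, 0.0).sum(axis=0)
    return base + merged_delta
```

At density 1.0 with a single finetune at weight 1.0, this reduces to returning the finetune unchanged, which is a useful sanity check.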

***
## Prompt template: Orca-Vicuna?
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
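If you assemble prompts in code, the template above is plain string formatting. A minimal helper (the function name is mine, not from any library):

```python
def orca_vicuna_prompt(system_message: str, prompt: str) -> str:
    # Orca-Vicuna format: SYSTEM / USER / ASSISTANT, single turn
    return f"SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:"
```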
It might recognize ChatML from Dolphin+Xaberius, and Llama-chat from Airoboros.

Sometimes the model "spells out" the stop token as `</s>` like Capybara, so you may need to add `</s>` as an additional stopping condition.
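If your backend can't register extra stop strings, a simple post-processing workaround is to truncate at the literal token text (hypothetical helper, not part of any API):

```python
def trim_at_stop(text: str, stops=("</s>",)) -> str:
    """Cut generated text at the first literal stop string, if present.

    Handles the case where the model "spells out" `</s>` as text
    instead of emitting the real EOS token.
    """
    for stop in stops:
        idx = text.find(stop)
        if idx != -1:
            text = text[:idx]
    return text
```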

***
## Running
As with other Yi models, try disabling the BOS token and/or running a lower temperature with 0.05-0.13 MinP, a little repetition penalty, and no other samplers. Yi tends to run "hot" by default.
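MinP filtering itself is simple: drop any token whose probability falls below `min_p` times the top token's probability, then renormalize. This is an illustrative sketch of the idea, not exllamav2's implementation:

```python
import numpy as np

def min_p_filter(probs: np.ndarray, min_p: float = 0.1) -> np.ndarray:
    """Zero out tokens below min_p * max(probs), then renormalize."""
    keep = probs >= min_p * probs.max()
    filtered = np.where(keep, probs, 0.0)
    return filtered / filtered.sum()
```

Unlike top-p, the cutoff scales with the model's confidence: a peaked distribution prunes aggressively, a flat one keeps more candidates.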

24GB GPUs can run Yi-34B-200K models at **45K-75K context** with exllamav2. I go into more detail in this [post](https://old.reddit.com/r/LocalLLaMA/comments/1896igc/how_i_run_34b_models_at_75k_context_on_24gb_fast/).

I recommend exl2 quantizations profiled on data similar to the desired task; the model is especially sensitive to the quantization data at low bpw!

To load this in full-context backends like transformers and vllm, you *must* change `max_position_embeddings` in config.json to a value lower than 200,000, otherwise you will OOM!
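A quick way to patch that from Python (the path and context value here are examples; pick whatever fits your VRAM):

```python
import json
from pathlib import Path

def cap_context(config_path: str, max_ctx: int = 32768) -> None:
    """Lower max_position_embeddings in a model's config.json so
    full-context backends (transformers, vLLM) allocate less memory."""
    path = Path(config_path)
    cfg = json.loads(path.read_text())
    cfg["max_position_embeddings"] = min(
        cfg.get("max_position_embeddings", max_ctx), max_ctx
    )
    path.write_text(json.dumps(cfg, indent=2))
```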

***

## Credits:
https://github.com/turboderp/exllamav2

https://github.com/cg123/mergekit/tree/dare

https://huggingface.co/ehartford/dolphin-2.2-yi-34b-200k

https://huggingface.co/kyujinpy/PlatYi-34B-200K-Q

https://huggingface.co/NousResearch/Nous-Capybara-34B/

https://huggingface.co/bhenrym14/airoboros-3_1-yi-34b-200k

https://huggingface.co/migtissera/Tess-M-v1.4

https://huggingface.co/fblgit/una-xaberius-34b-v1beta

https://huggingface.co/chargoddard/Yi-34B-200K-Llama

https://huggingface.co/01-ai/Yi-34B-200K