---
base_model:
- openchat/openchat-3.5-0106
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
<p align="center">
<a href="https://ko-fi.com/pretergeek">Buy me a Ko-Fi</a> •
<a href="https://patreon.com/Pretergeek">Support my work using Patreon</a>
</p>
# OpenChat-3.5-0106_8.99B_40Layers-Interleaved
This is NOT your usual frankenmerge created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method, employing the Block Expansion technique described in the paper [LLaMA Pro: Progressive LLaMA with Block Expansion](https://arxiv.org/abs/2401.02415).
The authors of the paper insert new layers between the original layers of the model and set the weights of each new layer's o_proj and down_proj projections to zero. Because of the residual connections, these new layers simply pass their input through unchanged (as if they were "transparent"), so the model remains functional even without further training. The new layers can then be targeted during training or fine-tuning without risking catastrophic forgetting, provided you follow the authors' method of freezing the original layers and training only the new ones.
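To illustrate why the zeroed layers are "transparent", the toy block below (a minimal PyTorch sketch, not mergekit's or the paper's actual code) mimics a pre-norm decoder layer: once the final projection of the attention branch and the down_proj of the MLP branch are zeroed, both branches contribute nothing and only the residual connections remain, so the block returns its input unchanged.

```python
# Minimal sketch (assumed toy module, not the real model) of a pre-norm
# residual block whose branch output projections have been zeroed.
import torch
import torch.nn as nn

class ToyBlock(nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn_proj = nn.Linear(dim, dim)      # stands in for the attention path ending in o_proj
        self.norm2 = nn.LayerNorm(dim)
        self.up_proj = nn.Linear(dim, 4 * dim)
        self.down_proj = nn.Linear(4 * dim, dim)  # MLP output projection

    def forward(self, x):
        x = x + self.attn_proj(self.norm1(x))                              # attention branch + residual
        x = x + self.down_proj(torch.relu(self.up_proj(self.norm2(x))))   # MLP branch + residual
        return x

block = ToyBlock()
# Zero the branch output projections, as the merge config does for the duplicated layers.
nn.init.zeros_(block.attn_proj.weight); nn.init.zeros_(block.attn_proj.bias)
nn.init.zeros_(block.down_proj.weight); nn.init.zeros_(block.down_proj.bias)

x = torch.randn(2, 8, 16)
print(torch.allclose(block(x), x))  # True: the expanded block passes its input through untouched
```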
This model has not yet received additional training, so it should perform close to the original model.
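If you do fine-tune this checkpoint following the paper's recipe, only the inserted blocks should receive gradient updates. Below is a hedged sketch of that setup; the repository id and the layer indices (every fifth layer of the 40-layer interleaved stack, i.e. the duplicated, zeroed blocks) are assumptions derived from this card, not an official training script.

```python
# Hypothetical LLaMA Pro-style fine-tuning setup: freeze the original layers
# and train only the newly inserted, zero-initialized blocks.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Pretergeek/OpenChat-3.5-0106_8.99B_40Layers-Interleaved"  # assumed repo id for this model
)

# Assumed indices of the inserted blocks in the 40-layer interleaved layout.
new_layer_indices = {4, 9, 14, 19, 24, 29, 34, 39}

for idx, layer in enumerate(model.model.layers):
    trainable = idx in new_layer_indices
    for p in layer.parameters():
        p.requires_grad = trainable

# Embeddings and the LM head stay frozen as well, per the paper's recipe.
for p in model.get_input_embeddings().parameters():
    p.requires_grad = False
for p in model.lm_head.parameters():
    p.requires_grad = False
```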
### Models Merged
The following models were included in the merge:
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: openchat/openchat-3.5-0106
        layer_range: [0, 4]
  - sources:
      - model: openchat/openchat-3.5-0106
        layer_range: [3, 4]
        parameters:
          scale:
            - filter: o_proj
              value: 0.0
            - filter: down_proj
              value: 0.0
            - value: 1.0
  - sources:
      - model: openchat/openchat-3.5-0106
        layer_range: [4, 8]
  - sources:
      - model: openchat/openchat-3.5-0106
        layer_range: [7, 8]
        parameters:
          scale:
            - filter: o_proj
              value: 0.0
            - filter: down_proj
              value: 0.0
            - value: 1.0
  - sources:
      - model: openchat/openchat-3.5-0106
        layer_range: [8, 12]
  - sources:
      - model: openchat/openchat-3.5-0106
        layer_range: [11, 12]
        parameters:
          scale:
            - filter: o_proj
              value: 0.0
            - filter: down_proj
              value: 0.0
            - value: 1.0
  - sources:
      - model: openchat/openchat-3.5-0106
        layer_range: [12, 16]
  - sources:
      - model: openchat/openchat-3.5-0106
        layer_range: [15, 16]
        parameters:
          scale:
            - filter: o_proj
              value: 0.0
            - filter: down_proj
              value: 0.0
            - value: 1.0
  - sources:
      - model: openchat/openchat-3.5-0106
        layer_range: [16, 20]
  - sources:
      - model: openchat/openchat-3.5-0106
        layer_range: [19, 20]
        parameters:
          scale:
            - filter: o_proj
              value: 0.0
            - filter: down_proj
              value: 0.0
            - value: 1.0
  - sources:
      - model: openchat/openchat-3.5-0106
        layer_range: [20, 24]
  - sources:
      - model: openchat/openchat-3.5-0106
        layer_range: [23, 24]
        parameters:
          scale:
            - filter: o_proj
              value: 0.0
            - filter: down_proj
              value: 0.0
            - value: 1.0
  - sources:
      - model: openchat/openchat-3.5-0106
        layer_range: [24, 28]
  - sources:
      - model: openchat/openchat-3.5-0106
        layer_range: [27, 28]
        parameters:
          scale:
            - filter: o_proj
              value: 0.0
            - filter: down_proj
              value: 0.0
            - value: 1.0
  - sources:
      - model: openchat/openchat-3.5-0106
        layer_range: [28, 32]
  - sources:
      - model: openchat/openchat-3.5-0106
        layer_range: [31, 32]
        parameters:
          scale:
            - filter: o_proj
              value: 0.0
            - filter: down_proj
              value: 0.0
            - value: 1.0
merge_method: passthrough
dtype: bfloat16
```
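To reproduce the merge, save the configuration above to a file and run it through mergekit. The snippet below simply shells out to the standard `mergekit-yaml` command; the config filename and output directory are placeholders.

```python
# Invoke mergekit's CLI on the config above.
# "config.yaml" and the output directory are placeholder paths.
import subprocess

subprocess.run(
    ["mergekit-yaml", "config.yaml", "./OpenChat-3.5-0106_8.99B_40Layers-Interleaved"],
    check=True,
)
```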