---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
license: other
language:
- en
---
# BuRP
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/RsiscU77BoQSzDUJkLtYc.jpeg)
So you want a model that can do it all? You've been dying to RP with a superintelligence who never refuses your advances while sticking to your strange and oddly specific dialogue format?
Well, look no further because BuRP is the model you need.
GGUF quants here: https://huggingface.co/Lewdiculous/BuRP_7B-GGUF-IQ-Imatrix
GPTQ quant here: https://huggingface.co/Test157t/ChaoticNeutrals-BuRP_7B-GPTQ
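If you'd rather run the full-precision weights directly, a minimal loading sketch with `transformers` follows. The repo id `ChaoticNeutrals/BuRP_7B` is an assumption inferred from the quant links above; substitute the actual id of this repository.

```python
# Minimal sketch: load and generate with the full-precision model.
# The repo id is an assumption based on the quant repo names above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ChaoticNeutrals/BuRP_7B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # picks up the checkpoint's bfloat16 weights
    device_map="auto",    # requires the `accelerate` package
)

prompt = "Hello!"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```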
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: ErisLaylaSLERP
        layer_range: [0, 32]
      - model: ParadigmInfinitySLERP
        layer_range: [0, 32]
merge_method: slerp
base_model: ParadigmInfinitySLERP
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
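Here `t` is the SLERP interpolation weight between the two parents (0 keeps the base model, 1 takes the other), with separate gradients over the layer stack for self-attention and MLP tensors and 0.5 for all remaining tensors. To reproduce a merge like this, the configuration can be run with mergekit; below is a minimal sketch using mergekit's Python API, assuming the YAML above is saved as `config.yaml` and the source models are resolvable locally or from the Hugging Face Hub. The output path and option values are illustrative, not part of this card.

```python
# Minimal sketch: run the merge config above with mergekit.
# Assumes `pip install mergekit` and that the source models in the
# YAML resolve to local paths or Hub repos.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./BuRP_7B",      # output directory (assumed name)
    options=MergeOptions(
        cuda=False,            # set True to merge on GPU
        copy_tokenizer=True,   # carry a tokenizer into the output
        lazy_unpickle=True,    # lower peak memory usage
    ),
)
```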