---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
license: other
language:
- en
---

# BuRP

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/RsiscU77BoQSzDUJkLtYc.jpeg)

So you want a model that can do it all? You've been dying to RP with a superintelligence who never refuses your advances while sticking to your strange and oddly specific dialogue format?

Well, look no further, because BuRP is the model you need.

GGUF quants here: https://huggingface.co/Lewdiculous/BuRP_7B-GGUF-IQ-Imatrix

GPTQ quant here: https://huggingface.co/Test157t/ChaoticNeutrals-BuRP_7B-GPTQ
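
If you would rather run the full-precision weights with transformers, a minimal loading sketch is below. The repo id `ChaoticNeutrals/BuRP_7B` is an assumption inferred from the quant repo names above, as is the example prompt.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id, inferred from the quant links above.
model_id = "ChaoticNeutrals/BuRP_7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype the merge was produced in
    device_map="auto",           # requires `pip install accelerate`
)

prompt = "You enter the tavern and look around."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```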
### Configuration

The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: ErisLaylaSLERP
        layer_range: [0, 32]
      - model: ParadigmInfinitySLERP
        layer_range: [0, 32]
merge_method: slerp
base_model: ParadigmInfinitySLERP
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
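
Here `t` is the SLERP interpolation weight between the two parents (0 keeps the base model's tensor, 1 the other's): `self_attn` and `mlp` tensors follow graded schedules across the 32 layers, while every other tensor uses a flat 0.5. To re-run the merge yourself, a minimal sketch is below, assuming mergekit is installed (`pip install mergekit`) and the config above is saved as `burp.yaml`; it calls mergekit's `mergekit-yaml` CLI via `subprocess`, and the output directory name is arbitrary.

```python
import subprocess

# Minimal sketch: re-running the merge with mergekit's CLI.
# Assumes the YAML above is saved as burp.yaml and that
# ErisLaylaSLERP and ParadigmInfinitySLERP resolve to local
# model directories, as in the original config.
subprocess.run(
    ["mergekit-yaml", "burp.yaml", "./BuRP_7B-merged"],
    check=True,  # raise if the merge fails
)
```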