---
base_model:
- Gryphe/Pantheon-RP-1.0-8b-Llama-3
- NousResearch/Hermes-2-Pro-Llama-3-8B
- Undi95/Meta-Llama-3-8B-hf
library_name: transformers
tags:
- mergekit
- merge
---
# nous-rp-llama-3

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). It combines [Gryphe/Pantheon-RP-1.0-8b-Llama-3](https://huggingface.co/Gryphe/Pantheon-RP-1.0-8b-Llama-3) and [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) with the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, using [Undi95/Meta-Llama-3-8B-hf](https://huggingface.co/Undi95/Meta-Llama-3-8B-hf) as the base model.

### Configuration

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Gryphe/Pantheon-RP-1.0-8b-Llama-3
    parameters:
      weight: 0.7
      density: 0.4
  - model: NousResearch/Hermes-2-Pro-Llama-3-8B
    parameters:
      weight: 0.4
      density: 0.4
merge_method: dare_ties
base_model: Undi95/Meta-Llama-3-8B-hf
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```
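
In this configuration, `weight` scales each model's delta from the base before merging, `density` is the fraction of delta parameters that DARE retains after random pruning, `normalize: false` leaves the weights unnormalized (0.7 + 0.4 > 1), and `int8_mask` stores intermediate masks in 8-bit to save memory.

To reproduce the merge, the usual entry point is the `mergekit-yaml` CLI (for example `mergekit-yaml config.yaml ./nous-rp-llama-3`). Below is a minimal sketch of the same run through mergekit's Python API; the config and output paths are placeholders, and option names may differ slightly between mergekit releases:

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML config above (saved as config.yaml; the path is a placeholder).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the DARE TIES merge and write the merged model to a local directory.
run_merge(
    merge_config,
    "./nous-rp-llama-3",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer into the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```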
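
The merged model loads like any other Llama-3-style checkpoint. A minimal inference sketch with `transformers`, where the model id below is a placeholder for the actual repo id or local path of the merged weights:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nous-rp-llama-3"  # placeholder: replace with the actual repo id or local path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's dtype above
    device_map="auto",
)

prompt = "Write a short scene where two rivals are forced to cooperate."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```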