---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
- vortexmergekit
- chihoonlee10/T3Q-Mistral-Orca-Math-DPO
- eldogbbhed/NeuralMonarchCoderPearlBeagle
---

# NeuralMonarchCoderPearlBeagle-T3Q-Mistral-Orca-Math-DPO-7b

This is a merge of multiple models brought together using the awesome [VortexMerge kit](https://colab.research.google.com/drive/1YjcvCLuNG1PK7Le6_4xhVU5VpzTwvGhk#scrollTo=UG5H2TK4gVyl).

Let's see what we've got in this merge:

* [chihoonlee10/T3Q-Mistral-Orca-Math-DPO](https://huggingface.co/chihoonlee10/T3Q-Mistral-Orca-Math-DPO)
* [eldogbbhed/NeuralMonarchCoderPearlBeagle](https://huggingface.co/eldogbbhed/NeuralMonarchCoderPearlBeagle)

## 🧩 Configuration
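
This config uses the [TIES](https://arxiv.org/abs/2306.01708) merge method with [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B) as the base model: for each merged model, `density` sets the fraction of its delta (task-vector) parameters kept after magnitude pruning, and `weight` scales its contribution to the final merge.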

```yaml
models:
  - model: mlabonne/NeuralBeagle14-7B
    # no parameters necessary for base model
  - model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
    parameters:
      density: 0.5
      weight: 0.5
  - model: eldogbbhed/NeuralMonarchCoderPearlBeagle
    parameters:
      density: 0.5
      weight: 0.3
merge_method: ties
base_model: mlabonne/NeuralBeagle14-7B
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
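
If you want to reproduce this merge locally, the snippet below is a minimal sketch using mergekit's Python API. It assumes mergekit is installed (`pip install mergekit`) and the YAML above is saved as `config.yaml`; exact option names may vary between mergekit versions.

```python
# Minimal sketch: run the merge config above with mergekit's Python API.
# Assumes `pip install mergekit` and that the YAML is saved as config.yaml;
# details may differ slightly across mergekit versions.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML config into mergekit's configuration object
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged",      # directory where the merged model is written
    options=MergeOptions(
        cuda=False,           # set True to run the merge on a GPU
        copy_tokenizer=True,  # copy the base model's tokenizer into the output
    ),
)
```

The merged weights in `./merged` can then be loaded like any other Hugging Face model, e.g. with `AutoModelForCausalLM.from_pretrained("./merged")`.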