---
library_name: transformers
tags:
- merge
- llama-3.1
- roleplay
- function calling
base_model:
- unsloth/Meta-Llama-3.1-8B-Instruct
- yuriachermann/Not-so-bright-AGI-Llama3.1-8B-UC200k-v2
datasets:
- Intel/orca_dpo_pairs
base_model_relation: merge
---
# KRONOS V1 P1
This is a merge of Meta Llama 3.1 Instruct and the "Not so Bright" LoRA, created using [llm-tools](https://github.com/oobabooga/llm-tools).
The primary purpose of this model is to serve as an input for TIES merges with other models in the same family.
Since it is only an intermediate merge component, creating quants for it is unnecessary.
## Merge Details
### Configuration
The following Bash command was used to produce this model:
```bash
python /llm-tools/merge-lora.py -m unsloth/Meta-Llama-3.1-8B-Instruct -l yuriachermann/Not-so-bright-AGI-Llama3.1-8B-UC200k-v2
```
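For the downstream TIES merges this model is intended for, a tool such as [mergekit](https://github.com/arcee-ai/mergekit) can be used. The following is only a sketch: the local paths, the sibling model name, and the `density`/`weight` values are assumptions for illustration, not part of this repository.

```shell
# Sketch: TIES-merging this model with a sibling (paths and values are placeholders).
cat > ties-merge.yml <<'EOF'
merge_method: ties
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
models:
  - model: ./KRONOS-V1-P1           # this merge
    parameters:
      density: 0.5
      weight: 0.5
  - model: ./some-sibling-model     # hypothetical second model in the family
    parameters:
      density: 0.5
      weight: 0.5
dtype: bfloat16
EOF

# Run the merge with mergekit's YAML front-end.
mergekit-yaml ties-merge.yml ./merged-output
```

TIES keeps only the highest-magnitude task-vector deltas (controlled by `density`) and resolves sign conflicts before summing, which is why it suits combining several fine-tunes of the same base.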