---
library_name: transformers
tags:
- merge
- llama-3.1
- roleplay
- function calling
base_model:
- unsloth/Meta-Llama-3.1-8B-Instruct
- yuriachermann/Not-so-bright-AGI-Llama3.1-8B-UC200k-v2
datasets:
- Intel/orca_dpo_pairs
base_model_relation: merge
---
# KRONOS V1 P1
This is a merge of Meta Llama 3.1 Instruct and the "Not-so-Bright" LoRA, created using [llm-tools](https://github.com/oobabooga/llm-tools).
The primary purpose of this model is to serve as an ingredient for TIES merges with other models in the same family.
Since it is only an intermediate merge component, creating quants for it is unnecessary.
## Merge Details
### Configuration
The following command was used to produce this model:
```bash
python /llm-tools/merge-lora.py -m unsloth/Meta-Llama-3.1-8B-Instruct -l yuriachermann/Not-so-bright-AGI-Llama3.1-8B-UC200k-v2
```
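
For the downstream TIES merges this model is intended for, one common approach is a [mergekit](https://github.com/arcee-ai/mergekit) YAML recipe. The sketch below is purely illustrative: the second model name and all `density`/`weight` values are placeholders, not an actual recipe used for this model.

```yaml
# Hypothetical TIES merge recipe (mergekit format).
# "some-org/another-llama3.1-8b-finetune" and the parameter
# values below are placeholders, not a tested configuration.
models:
  - model: PJMixers/KRONOS-V1-P1  # this model
    parameters:
      density: 0.5   # fraction of delta weights kept per model
      weight: 0.5    # relative contribution to the merge
  - model: some-org/another-llama3.1-8b-finetune
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
parameters:
  normalize: true
dtype: bfloat16
```

A recipe like this would typically be run with `mergekit-yaml config.yml ./output-model`.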