---
library_name: transformers
tags:
- merge
- llama-3.1
- roleplay
- function calling
base_model:
- unsloth/Meta-Llama-3.1-8B-Instruct
- yuriachermann/Not-so-bright-AGI-Llama3.1-8B-UC200k-v2
datasets:
- Intel/orca_dpo_pairs
base_model_relation: merge
---
# KRONOS V1 P1
This is a merge of Meta Llama 3.1 8B Instruct and the "Not so Bright" LoRA, created using [llm-tools](https://github.com/oobabooga/llm-tools).
Its primary purpose is to serve as an ingredient for TIES merges into other models in the same family; a sketch of such a downstream merge is shown below.
Because it is only an intermediate merge component, creating quants for it is unnecessary.
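The sketch below shows one way this checkpoint could be folded into a TIES merge using [mergekit](https://github.com/arcee-ai/mergekit). It is a minimal illustration, not the recipe behind any released KRONOS merge: the repo id `T145/KRONOS-8B-V1-P1`, the `density`/`weight` values, and the output path are all assumptions chosen for the example.
```bash
# Illustrative TIES merge config (assumed values, not the actual KRONOS recipe)
cat > kronos-ties-example.yml <<'EOF'
models:
  - model: T145/KRONOS-8B-V1-P1   # this model, merged on top of the base
    parameters:
      density: 0.5                # fraction of delta weights kept (placeholder)
      weight: 0.5                 # blend weight for this model (placeholder)
merge_method: ties
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
parameters:
  normalize: true
dtype: bfloat16
EOF

# Run the merge with mergekit's CLI; output lands in ./kronos-ties-merge
mergekit-yaml kronos-ties-example.yml ./kronos-ties-merge
```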
## Merge Details
### Configuration
The following Bash command was used to produce this model:
```bash
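# -m: base Instruct model to merge into; -l: LoRA adapter folded into it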
python /llm-tools/merge-lora.py -m unsloth/Meta-Llama-3.1-8B-Instruct -l yuriachermann/Not-so-bright-AGI-Llama3.1-8B-UC200k-v2
```