---
license: apache-2.0
tags:
- dpo
base_model:
- CorticalStack/neurotic-crown-clown-7b-ties
datasets:
- CorticalStack/tak-stack-dpo
---
# neurotic-crown-clown-7b-tak-stack-dpo

neurotic-crown-clown-7b-tak-stack-dpo is a DPO fine-tuned version of [CorticalStack/neurotic-crown-clown-7b-ties](https://huggingface.co/CorticalStack/neurotic-crown-clown-7b-ties) using the [CorticalStack/tak-stack-dpo](https://huggingface.co/datasets/CorticalStack/tak-stack-dpo) dataset.
### LoRA

- r: 32
- LoRA alpha: 32
- LoRA dropout: 0.05
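
For reference, the adapter settings above map onto a PEFT `LoraConfig` roughly as sketched below. This is not the original training code; in particular, the `target_modules` are not listed in this card and are assumed here to be the usual Mistral-style attention projections.

```python
from peft import LoraConfig

# Hypothetical reconstruction of the adapter configuration listed above.
peft_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    # Assumed: the card does not specify which modules the adapter targets.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```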
### Training arguments

- Batch size: 4
- Gradient accumulation steps: 4
- Optimizer: paged_adamw_32bit
- Max steps: 100
- Learning rate: 5e-05
- Learning rate scheduler type: cosine
- Beta: 0.1
- Max prompt length: 1024
- Max length: 1536
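
A training run with these arguments could be set up with TRL's `DPOTrainer` roughly as in the sketch below. This is a reconstruction, not the original script: it assumes the older TRL API in which `beta`, `max_prompt_length`, and `max_length` are passed directly to the trainer (newer releases move them into `DPOConfig`), assumes the dataset exposes `prompt`/`chosen`/`rejected` columns, and reuses the `peft_config` from the LoRA sketch above.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

# Base checkpoint and preference dataset named in this card.
model_name = "CorticalStack/neurotic-crown-clown-7b-ties"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

dataset = load_dataset("CorticalStack/tak-stack-dpo", split="train")

# Training arguments as listed in the card above.
training_args = TrainingArguments(
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    optim="paged_adamw_32bit",
    max_steps=100,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    output_dir="neurotic-crown-clown-7b-tak-stack-dpo",
)

trainer = DPOTrainer(
    model,
    ref_model=None,            # with a PEFT adapter, the frozen base model serves as the reference
    args=training_args,
    beta=0.1,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,   # LoRA configuration from the sketch above
    max_prompt_length=1024,
    max_length=1536,
)
trainer.train()
```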