---
base_model:
- InferenceIllusionist/Excalibur-7b
library_name: transformers
tags:
- finetune
license: apache-2.0
datasets:
- Intel/orca_dpo_pairs
---
# Excalibur-7b-DPO
An initial foray into the world of fine-tuning. The goal of this release was to improve the quality of this model's responses, especially in vision use cases\* *(requires the mistral-7b-mmproj-v1.5-Q4_1 file in Kobold)*.
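For standard text generation, the model can be loaded with `transformers` as usual. A minimal sketch, assuming the repo id matches this card's title:

```python
# Minimal loading sketch; the repo id is assumed from this card's title.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "InferenceIllusionist/Excalibur-7b-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "What is Direct Preference Optimization?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```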
## Notes & Methodology
- Excalibur-7b fine-tuned with Direct Preference Optimization (DPO) using the Intel/orca_dpo_pairs dataset (a minimal sketch of this setup appears after this list)
- A quick experiment to gauge the impact of DPO fine-tuning on the original base model
- Training ran for a little over an hour on a single A100
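The sketch below illustrates the kind of DPO setup described above, using TRL's `DPOTrainer` on Intel/orca_dpo_pairs. This is an assumed reconstruction, not the actual training script: the hyperparameters (`beta`, batch size, epochs) are illustrative placeholders, and the API shown assumes a recent TRL version.

```python
# Hypothetical DPO fine-tuning sketch with TRL; hyperparameters are
# illustrative assumptions, not the values used for this release.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_model = "InferenceIllusionist/Excalibur-7b"  # base model from this card
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Intel/orca_dpo_pairs provides (question, chosen, rejected) triples.
dataset = load_dataset("Intel/orca_dpo_pairs", split="train")

# Map rows into the prompt/chosen/rejected columns DPOTrainer expects.
def to_dpo_format(row):
    return {
        "prompt": row["question"],
        "chosen": row["chosen"],
        "rejected": row["rejected"],
    }

dataset = dataset.map(to_dpo_format, remove_columns=dataset.column_names)

training_args = DPOConfig(
    output_dir="excalibur-7b-dpo",
    beta=0.1,                       # assumed DPO temperature
    per_device_train_batch_size=2,  # assumed; sized for a single A100
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```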