---
base_model:
- InferenceIllusionist/Excalibur-7b
library_name: transformers
tags:
- finetune
license: apache-2.0
datasets:
- Intel/orca_dpo_pairs
---
# Excalibur-7b-DPO
<img src="https://i.imgur.com/pbPbqq0.jpeg" width="550"/>
An initial foray into the world of fine-tuning. The goal of this release was to improve the quality of this model's responses, especially in vision use cases.*

*(Vision use requires the [mistral-7b-mmproj-v1.5-Q4_1](https://huggingface.co/koboldcpp/mmproj/resolve/main/mistral-7b-mmproj-v1.5-Q4_1.gguf?download=true) file in KoboldCpp.)
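For text-only use, the model can be loaded like any other `transformers` causal LM. A minimal sketch; the prompt, dtype, and generation settings below are illustrative assumptions, not recommendations:

```python
# Minimal text-generation sketch; assumes transformers, accelerate, and a GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "InferenceIllusionist/Excalibur-7b-DPO"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # assumed precision; adjust to your hardware
    device_map="auto",
)

prompt = "Explain Direct Preference Optimization in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```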
### Notes & Methodology
* [Excalibur-7b](https://huggingface.co/InferenceIllusionist/Excalibur-7b) fine-tuned with Direct Preference Optimization (DPO) on Intel/orca_dpo_pairs (see the training sketch after this list)
* A quick experiment to gauge the impact of DPO fine-tuning on the original base model
* Executed for a little over an hour on a single A100
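
For reference, a minimal sketch of a DPO run like the one described above, using Hugging Face TRL's `DPOTrainer`. The hyperparameters are illustrative assumptions, not the values used for this release, and a recent TRL version (with the `DPOConfig`/`processing_class` API) is assumed:

```python
# Sketch of DPO fine-tuning on Intel/orca_dpo_pairs with TRL.
# Hyperparameters are assumptions, NOT the settings used for this model.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "InferenceIllusionist/Excalibur-7b"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Intel/orca_dpo_pairs ships system/question/chosen/rejected columns;
# DPOTrainer expects prompt/chosen/rejected. This mapping is simplified
# (it drops the system message).
dataset = load_dataset("Intel/orca_dpo_pairs", split="train")
dataset = dataset.map(
    lambda row: {
        "prompt": row["question"],
        "chosen": row["chosen"],
        "rejected": row["rejected"],
    },
    remove_columns=dataset.column_names,
)

args = DPOConfig(
    output_dir="Excalibur-7b-DPO",
    beta=0.1,                        # assumed DPO temperature
    per_device_train_batch_size=2,   # assumed; sized for a single A100
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    learning_rate=5e-6,
    logging_steps=50,
)

trainer = DPOTrainer(
    model=model,  # reference model is cloned internally when omitted
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
trainer.save_model()
```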