|
--- |
|
license: other |
|
tags: |
|
- yi |
|
- moe |
|
license_name: yi-license |
|
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE |
|
--- |
|
|
|
This is a DPO fine-tuned Mixture-of-Experts (MoE) model with 60B parameters.
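A minimal sketch of loading the model for inference with `transformers`. The repository id `cloudyu/Phoenix_DPO_60B` is an assumption inferred from the GGUF link below, and the generation settings are illustrative; a 60B MoE will need multiple GPUs or heavy offloading.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id assumed from the GGUF link below, not confirmed by this card.
model_id = "cloudyu/Phoenix_DPO_60B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 halves memory vs. fp32
    device_map="auto",           # shard across available GPUs / offload to CPU
)

prompt = "Explain what a Mixture-of-Experts model is in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```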
|
|
|
|
|
From the [TRL documentation](https://huggingface.co/docs/trl) on the DPO Trainer:

> **DPO Trainer**
>
> TRL supports the DPO Trainer for training language models from preference data, as described in the paper *Direct Preference Optimization: Your Language Model is Secretly a Reward Model* by Rafailov et al., 2023.
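For reference, a minimal sketch of a DPO fine-tune with TRL's `DPOTrainer` (TRL 0.7.x-style API). The base model id, the toy preference dataset, and all hyperparameters are illustrative assumptions; this is not the actual training script used for this model.

```python
import torch
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

# Placeholder base model id, not the actual base of this model.
model_id = "my-org/my-base-moe"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Toy preference data; DPOTrainer expects "prompt", "chosen", and "rejected" columns.
train_dataset = Dataset.from_dict({
    "prompt": ["What is DPO?"],
    "chosen": ["DPO optimizes the policy directly on preference pairs, without training a separate reward model."],
    "rejected": ["I don't know."],
})

training_args = TrainingArguments(
    output_dir="dpo-output",
    per_device_train_batch_size=1,
    num_train_epochs=1,
)

# ref_model=None lets TRL create a frozen copy of the model as the DPO reference.
trainer = DPOTrainer(
    model,
    ref_model=None,
    args=training_args,
    beta=0.1,  # strength of the KL penalty from the DPO paper
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```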
|
|
|
A GGUF version is available at [cloudyu/Phoenix_DPO_60B_gguf](https://huggingface.co/cloudyu/Phoenix_DPO_60B_gguf).
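A sketch of running a GGUF file locally with `llama-cpp-python`. The file name below is hypothetical; check the GGUF repository for the actual quantization variants and their file names.

```python
from llama_cpp import Llama

# Hypothetical file name; substitute the real file from the GGUF repository.
llm = Llama(model_path="phoenix-dpo-60b.Q4_K_M.gguf", n_ctx=2048)

output = llm(
    "Q: What is direct preference optimization?\nA:",
    max_tokens=128,
    stop=["Q:"],  # stop before the model invents a follow-up question
)
print(output["choices"][0]["text"])
```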
|
|