sanqiang / zephyr-7b-gemma-dpo
Tags: Text Generation · Transformers · TensorBoard · Safetensors · argilla/dpo-mix-7k · gemma · alignment-handbook · trl · dpo · Generated from Trainer · conversational · text-generation-inference · Inference Endpoints
License: other
Branch: main · zephyr-7b-gemma-dpo / runs
Commit History
Model save · d36a35d (verified) · sanqiang committed on May 24
End of training · d82e274 (verified) · sanqiang committed on May 21
Model save · b9e31d1 (verified) · sanqiang committed on May 21