Ji-Xiang's Collections
Text-to-Video
Multimodal Language Models
Image Chatbot
traditional-chinese-dataset
Suggest Spaces
Suggestion Models
Chinese models
China models
Uncensored models
china-dataset
common-dataset
unfiltered dataset
Image Generator AI
Edge Computing
Voice
Medical
Big Models
GGUF Models
TTS
Visual Question Answering
Chat
Multi Tasks
Vision
DPO datasets
ORPO-DPO datasets
Code dataset
SLM (small language models)
automatic speech recognition (ASR)
Vision-Language dataset
MoE
Dense Passage Retrieval (DPR) Datasets
Audio-To-Text
background-removal
Extreme Quantization
Try on
DPO datasets (updated Jul 22)
argilla/dpo-mix-7k • Viewer • Updated Jul 16 • 7.5k • 348 • 155
argilla/distilabel-capybara-dpo-7k-binarized • Viewer • Updated Jul 16 • 7.56k • 876 • 174
hiyouga/DPO-En-Zh-20k • Viewer • Updated Jun 7 • 20k • 228 • 88
argilla/distilabel-intel-orca-dpo-pairs • Viewer • Updated Feb 5 • 12.9k • 888 • 167
argilla/ultrafeedback-binarized-preferences-cleaned • Viewer • Updated Dec 11, 2023 • 60.9k • 7k • 124
argilla/distilabel-math-preference-dpo • Viewer • Updated Jul 16 • 2.42k • 264 • 78
M4-ai/prm_dpo_pairs_cleaned • Viewer • Updated Apr 13 • 7.99k • 57 • 11
jondurbin/truthy-dpo-v0.1 • Viewer • Updated Jan 11 • 1.02k • 950 • 128
YeungNLP/ultrafeedback_binarized • Viewer • Updated Feb 27 • 63.1k • 16 • 1
shibing624/DPO-En-Zh-20k-Preference • Viewer • Updated Apr 27 • 20k • 70 • 12
TRI-ML/dpo-rlaif-data • Preview • Updated Feb 20 • 73 • 6
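The entries above are DPO-style preference datasets: each record pairs a prompt with a preferred ("chosen") and a dispreferred ("rejected") response, which the DPO objective then scores. A minimal sketch of that structure and the per-pair loss, where the field names and the toy log-probabilities are illustrative assumptions and not taken from any listed dataset (actual column layouts vary per dataset):

```python
import math

# Illustrative preference record; real schemas differ across the
# datasets in this collection (some store chat-turn lists instead
# of plain strings).
record = {
    "prompt": "What is the capital of France?",
    "chosen": "The capital of France is Paris.",
    "rejected": "France's capital is Berlin.",
}

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair:
    -log sigmoid(beta * ((logp_c - ref_logp_c) - (logp_r - ref_logp_r)))."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Toy log-probabilities: the policy prefers the chosen answer more
# strongly than the reference model does, so the loss falls below
# -log(0.5) ≈ 0.693.
loss = dpo_loss(-5.0, -9.0, -6.0, -8.0, beta=0.1)
print(round(loss, 4))
```

With a positive margin the loss drops below log 2; a policy identical to the reference gives a zero margin and exactly log 2, which is why DPO training pushes the chosen-over-rejected gap wider than the reference model's.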