nebchi's Collections
DPO Dataset
Updated Aug 8
A collection of Korean DPO datasets (a minimal loading sketch follows the list).
maywell/ko_Ultrafeedback_binarized
Viewer • Updated Nov 9, 2023 • 62k • 57 • 28

kuotient/orca-math-korean-dpo-pairs
Viewer • Updated Apr 5 • 193k • 168 • 9

zzunyang/dpo_data
Viewer • Updated Jan 26 • 126 • 69

SJ-Donald/orca-dpo-pairs-ko
Viewer • Updated Jan 24 • 36k • 84 • 7
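All four datasets live on the Hugging Face Hub, so any of them can be pulled directly with the datasets library. The sketch below loads the first one ahead of DPO training; the split name ("train") and the prompt/chosen/rejected column names are assumptions that may need adjusting for each dataset (check the dataset viewer for the actual schema).

    from datasets import load_dataset

    # Load one of the Korean DPO preference datasets from the Hub.
    # Assumption: the repo exposes a "train" split.
    ds = load_dataset("maywell/ko_Ultrafeedback_binarized", split="train")

    print(ds.column_names)  # inspect the actual schema
    print(ds[0])            # look at one preference pair

    # Keep only the columns a typical DPO trainer expects.
    # Assumption: the usual "prompt" / "chosen" / "rejected" naming; other
    # datasets in this collection may need a column-rename step instead.
    keep = ("prompt", "chosen", "rejected")
    ds = ds.remove_columns([c for c in ds.column_names if c not in keep])

The same pattern applies to the other three repositories; only the column mapping is likely to differ.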