Article: Simplifying Alignment: From RLHF to Direct Preference Optimization (DPO) — by ariG23498, 7 days ago
Article: Hugging Face and FriendliAI partner to supercharge model deployment on the Hub — 5 days ago