ProgressGym: Alignment with a Millennium of Moral Progress Paper • 2406.20087 • Published Jun 28, 2024 • 3
PKU-SafeRLHF: A Safety Alignment Preference Dataset for Llama Family Models Paper • 2406.15513 • Published Jun 20, 2024 • 1
Reward Generalization in RLHF: A Topological Perspective Paper • 2402.10184 • Published Feb 15, 2024
BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset Paper • 2307.04657 • Published Jul 10, 2023 • 6
Safe RLHF: Safe Reinforcement Learning from Human Feedback Paper • 2310.12773 • Published Oct 19, 2023 • 28