arxiv:2405.11647

Hummer: Towards Limited Competitive Preference Dataset

Published on May 19, 2024
Abstract

Preference datasets are essential for incorporating human preferences into pre-trained language models, and they play a key role in the success of Reinforcement Learning from Human Feedback. However, these datasets often exhibit conflicting alignment objectives, leading to increased vulnerability to jailbreak attacks and to difficulty in adapting downstream tasks to prioritize specific alignment objectives without negatively impacting others. In this work, we introduce a novel statistical metric, Alignment Dimension Conflict, to quantify the degree of conflict within preference datasets. We then present Hummer and its fine-grained variant, Hummer-F, innovative pairwise preference datasets with reduced-conflict alignment objectives. Hummer is built on UltraFeedback and enhanced by AI feedback from GPT-4, making it the first preference dataset aimed at reducing competition between alignment objectives. Furthermore, we develop reward models, HummerRM and HummerRM-F, which employ a hybrid sampling approach to balance diverse alignment objectives effectively. This sampling method positions HummerRM as an ideal model for domain-specific further fine-tuning while reducing vulnerability to attacks.
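The abstract does not spell out how Alignment Dimension Conflict is computed. As a rough illustration only, one plausible proxy is the rate at which pairwise preference labels disagree across alignment dimensions (e.g., a pair where the helpful response is the less harmless one). The sketch below assumes that proxy; the function name, data layout, and dimension names are hypothetical and are not the paper's actual definition.

```python
# Illustrative sketch, NOT the paper's metric: approximates conflict between
# alignment dimensions as the disagreement rate of their pairwise labels.
from itertools import combinations

def alignment_dimension_conflict(preference_labels):
    """preference_labels: dict mapping a dimension name to a list of pairwise
    labels (+1 if response A is preferred over B, -1 otherwise), aligned
    across the same response pairs for every dimension."""
    dims = list(preference_labels)
    n_pairs = len(next(iter(preference_labels.values())))
    conflict_rates = {}
    for a, b in combinations(dims, 2):
        # Count pairs where the two dimensions prefer opposite responses.
        disagreements = sum(
            1 for la, lb in zip(preference_labels[a], preference_labels[b])
            if la != lb
        )
        conflict_rates[(a, b)] = disagreements / n_pairs
    return conflict_rates

# Toy example with two hypothetical alignment dimensions.
labels = {
    "helpfulness":  [+1, +1, -1, +1],
    "harmlessness": [+1, -1, -1, -1],
}
print(alignment_dimension_conflict(labels))
# {('helpfulness', 'harmlessness'): 0.5}
```

The abstract likewise gives no detail on HummerRM's hybrid sampling beyond that it balances diverse alignment objectives, so no sketch is attempted for it here; see the paper for the actual formulations.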
