RLHF Poisoning
Collection
Models and datasets used for our paper "Universal Jailbreak Backdoors from Poisoned Human Feedback"
•
14 items
•
Updated
This repository is publicly accessible, but you have to accept the conditions to access its files and content.
You acknowledge that generations from this model can be harmful. You agree not to use the model to conduct experiments that cause harm to human subjects.
Log in or Sign Up to review the conditions and access this model content.
This is a 7B poisoned generation model used in our paper "Universal Jailbreak Backdoors from Poisoned Human Feedback". It is poisoned with the token SUDO
and a poisoning rate of 4%. See the paper for details.
See the official repository for a starting codebase.