amirabdullah19852020/interpreting_reward_models
Safetensors · Datasets: unalignment/toxic-dpo-v0.2, Anthropic/hh-rlhf, stanfordnlp/imdb · Language: English · License: mit
Branch: main · Path: interpreting_reward_models/data/contrastive_triples_rlhf.dataset/2024-05-15_14:01
1 contributor · History: 1 commit
amirabdullah19852020: Upload folder using huggingface_hub (2a03932, verified, 6 months ago)
File                         Size        Last commit message                    Last updated
data-00000-of-00001.arrow    22.4 MB     Upload folder using huggingface_hub    6 months ago  (LFS)
dataset_info.json            460 Bytes   Upload folder using huggingface_hub    6 months ago
state.json                   247 Bytes   Upload folder using huggingface_hub    6 months ago
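
These three files (a single Arrow shard plus dataset_info.json and state.json) match the on-disk layout written by the datasets library's Dataset.save_to_disk, so the folder can most likely be fetched and read back with snapshot_download and load_from_disk. A minimal sketch, assuming the folder was indeed produced by save_to_disk; the repo_id and path are taken from the listing above:

from huggingface_hub import snapshot_download
from datasets import load_from_disk

# Download only this dataset folder from the model repo;
# allow_patterns restricts the snapshot to the matched files.
local_dir = snapshot_download(
    repo_id="amirabdullah19852020/interpreting_reward_models",
    allow_patterns=["data/contrastive_triples_rlhf.dataset/2024-05-15_14:01/*"],
)

# The .arrow shard + dataset_info.json + state.json layout is what
# load_from_disk expects, so it can read the folder back directly.
ds = load_from_disk(
    f"{local_dir}/data/contrastive_triples_rlhf.dataset/2024-05-15_14:01"
)
print(ds)  # e.g. Dataset({features: ..., num_rows: ...})

snapshot_download caches the files locally and returns the snapshot root, so the load_from_disk path is just that root joined with the folder path shown in the listing.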