amirabdullah19852020/interpreting_reward_models

Safetensors · Datasets: unalignment/toxic-dpo-v0.2, Anthropic/hh-rlhf, stanfordnlp/imdb · Language: English · License: mit
Revision 178a9b6 · interpreting_reward_models / models / hh_rlhf / pythia-410m / 2024-05-07_04:26
1 contributor · History: 1 commit
amirabdullah19852020 · Upload folder using huggingface_hub · 1b4e8e9 (verified) · 7 months ago
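The commit message indicates the snapshot was pushed with the huggingface_hub client. A minimal sketch of how such an upload is typically done, assuming a local training-output folder mirroring the listing below (the local path is hypothetical, not taken from the repo):

```python
from huggingface_hub import HfApi

api = HfApi()  # picks up the token from `huggingface-cli login` by default

# Push a local output folder into the repo under the timestamped
# path shown in this listing.
api.upload_folder(
    folder_path="outputs/pythia-410m/2024-05-07_04:26",  # hypothetical local path
    repo_id="amirabdullah19852020/interpreting_reward_models",
    path_in_repo="models/hh_rlhf/pythia-410m/2024-05-07_04:26",
)
```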
config.json                 748 Bytes       Upload folder using huggingface_hub · 7 months ago
generation_config.json     111 Bytes       Upload folder using huggingface_hub · 7 months ago
metrics.json                496 Bytes       Upload folder using huggingface_hub · 7 months ago
model.safetensors           811 MB · LFS    Upload folder using huggingface_hub · 7 months ago
training_args.json          3.35 kB         Upload folder using huggingface_hub · 7 months ago
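To fetch just this snapshot and load the weights, one option is a filtered snapshot_download followed by a standard transformers load. A minimal sketch, assuming the folder is a self-contained transformers checkpoint whose config.json declares the model class (the presence of generation_config.json suggests a causal LM, but the actual architecture is whatever the config specifies):

```python
from huggingface_hub import snapshot_download
from transformers import AutoModelForCausalLM

SUBFOLDER = "models/hh_rlhf/pythia-410m/2024-05-07_04:26"

# Download only the files under this timestamped subfolder.
local_dir = snapshot_download(
    repo_id="amirabdullah19852020/interpreting_reward_models",
    allow_patterns=[f"{SUBFOLDER}/*"],
)

# config.json + model.safetensors are enough for from_pretrained.
# Note: no tokenizer files appear in this folder; the tokenizer would
# presumably come from the base model (e.g. EleutherAI/pythia-410m).
model = AutoModelForCausalLM.from_pretrained(f"{local_dir}/{SUBFOLDER}")
```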