# amirabdullah19852020/interpreting_reward_models

Format: Safetensors
Datasets: unalignment/toxic-dpo-v0.2, Anthropic/hh-rlhf, stanfordnlp/imdb
Language: English
License: MIT
Revision: 5bf2e90
Path: models/unaligned/pythia-160m/2024-05-06_06:21
1 contributor, 1 commit in history.

Latest commit: e2c09b5 (verified, 7 months ago) by amirabdullah19852020: "Upload folder using huggingface_hub"
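To work with this checkpoint locally, one option is to download just this subfolder, pinned to the revision shown above. A minimal sketch using `huggingface_hub`; the `allow_patterns` filter targeting this path is an illustrative assumption, not something the repo prescribes:

```python
from huggingface_hub import snapshot_download

# Fetch only the pythia-160m checkpoint subfolder at the pinned revision.
local_dir = snapshot_download(
    repo_id="amirabdullah19852020/interpreting_reward_models",
    revision="5bf2e90",  # tree revision shown on this page
    allow_patterns=["models/unaligned/pythia-160m/2024-05-06_06:21/*"],
)
print(local_dir)  # local cache path containing the downloaded files
```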
| File | Size | Last commit message | Last updated |
| --- | --- | --- | --- |
| config.json | 746 Bytes | Upload folder using huggingface_hub | 7 months ago |
| generation_config.json | 111 Bytes | Upload folder using huggingface_hub | 7 months ago |
| metrics.json | 471 Bytes | Upload folder using huggingface_hub | 7 months ago |
| model.safetensors (LFS) | 649 MB | Upload folder using huggingface_hub | 7 months ago |
| training_args.json | 3.34 kB | Upload folder using huggingface_hub | 7 months ago |
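Given the files listed above (a `config.json` plus `model.safetensors` weights), the checkpoint can presumably be loaded as a standard causal LM directly from the Hub. A minimal sketch with `transformers`; note the listing contains no tokenizer files, so falling back to the base `EleutherAI/pythia-160m` tokenizer is an assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "amirabdullah19852020/interpreting_reward_models"
subfolder = "models/unaligned/pythia-160m/2024-05-06_06:21"

# Load the checkpoint from its subfolder, pinned to the revision above.
model = AutoModelForCausalLM.from_pretrained(
    repo, subfolder=subfolder, revision="5bf2e90"
)

# No tokenizer files appear in the listing; using the base Pythia-160m
# tokenizer is an assumption here, not confirmed by the repo.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-160m")
```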