# CLIP Sparse Autoencoder Checkpoint

This model is a sparse autoencoder (SAE) trained on CLIP's internal representations.

## Model Details

### Architecture
- Layer: 11
- Layer Type: hook_resid_post
- Model: open-clip:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K
- Dictionary Size: 49152
- Input Dimension: 768
- Expansion Factor: 64
- CLS Token Only: False
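The dimensions above imply an encoder mapping the 768-dimensional residual stream to a 49152-feature dictionary (expansion factor 64) and a linear decoder mapping back. A minimal sketch of such a forward pass, assuming the common ReLU-SAE parameterization (the exact bias handling and decoder normalization used in training are not specified in this card):

```python
import numpy as np

# Hedged sketch of the SAE forward pass implied by the architecture above.
# Weight initialization and bias placement here are illustrative assumptions.
D_IN, D_SAE = 768, 49152  # input dim x expansion factor 64

rng = np.random.default_rng(42)
W_enc = rng.normal(0.0, 0.02, size=(D_IN, D_SAE))
b_enc = np.zeros(D_SAE)
W_dec = rng.normal(0.0, 0.02, size=(D_SAE, D_IN))
b_dec = np.zeros(D_IN)

def sae_forward(x):
    """Encode a batch of CLIP activations into sparse features, then reconstruct."""
    f = np.maximum(x @ W_enc + b_enc, 0.0)  # ReLU feature activations
    x_hat = f @ W_dec + b_dec               # linear reconstruction
    return f, x_hat

x = rng.normal(size=(4, D_IN))              # e.g. 4 token activations
f, x_hat = sae_forward(x)
l0 = (f > 0).sum(axis=-1).mean()            # avg active features per input
```

The "L0 (Active Features)" metric reported below corresponds to the mean number of nonzero feature activations per input, as computed for `l0` here.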
### Training
- Training Images: 1299936
- Learning Rate: 0.0001
- L1 Coefficient: 0.0000
- Batch Size: 4096
- Context Size: 49
## Performance Metrics

### Sparsity

- L0 (Active Features): 1044.9467
- Dead Features: 0
- Mean Passes Since Fired: 0.4841
### Reconstruction
- Explained Variance: 0.9630
- Explained Variance Std: 0.0562
- MSE Loss: 0.0010
- L1 Loss: 558.9109
- Overall Loss: 0.0033
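A sketch of how these metrics typically combine in the standard SAE objective: MSE reconstruction loss plus an L1 sparsity penalty scaled by the L1 coefficient (which rounds to 0.0000 in this run; a hypothetical nonzero value is used below for illustration, and the exact reduction over batch and feature axes is an assumption):

```python
import numpy as np

# Hedged sketch of the standard SAE training objective.
# l1_coefficient is a hypothetical value; the card reports ~0.0000.
l1_coefficient = 0.0001

def sae_loss(x, x_hat, f):
    mse = np.mean((x - x_hat) ** 2)        # reconstruction term ("MSE Loss")
    l1 = np.mean(np.abs(f).sum(axis=-1))   # sparsity term ("L1 Loss")
    return mse + l1_coefficient * l1       # "Overall Loss"

# Toy inputs with known values, just to exercise the formula.
x = np.zeros((2, 768))
x_hat = np.full((2, 768), 0.1)
f = np.ones((2, 49152))
loss = sae_loss(x, x_hat, f)
```

With these toy inputs the MSE term is 0.01 and the L1 term is 49152, so the overall loss is 0.01 + 0.0001 * 49152.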
## Training Details
- Training Duration: 6336 seconds
- Final Learning Rate: 0.0000
- Warm Up Steps: 200
- Gradient Clipping: 1
## Additional Information
- Original Checkpoint Path: /network/scratch/p/praneet.suresh/imgnet_checkpoints/ec15906c-tinyclip_sae_16_hyperparam_sweep_lr/n_images_1300020.pt
- Wandb Run: https://wandb.ai/perceptual-alignment/vanilla-imagenet-spatial_only-sweep/runs/9cl93riz
- Random Seed: 42