
CLIP Sparse Autoencoder Checkpoint

This model is a sparse autoencoder (SAE) trained on the internal representations of a CLIP vision transformer.
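
For reference, here is a minimal sketch of the standard SAE formulation this card implies: an overcomplete ReLU encoder and a linear decoder. This is a hypothetical reimplementation for illustration; the checkpoint's actual parameter names, initialization, and tie-breaking details may differ.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Overcomplete SAE: ReLU encoder, linear decoder.

    Illustrative sketch only; parameter names in the actual
    checkpoint may differ.
    """

    def __init__(self, d_in: int, d_sae: int):
        super().__init__()
        self.W_enc = nn.Parameter(torch.empty(d_in, d_sae))
        self.W_dec = nn.Parameter(torch.empty(d_sae, d_in))
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.b_dec = nn.Parameter(torch.zeros(d_in))
        nn.init.kaiming_uniform_(self.W_enc)
        nn.init.kaiming_uniform_(self.W_dec)

    def forward(self, x: torch.Tensor):
        # Encode relative to the decoder bias, a common SAE convention.
        acts = torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)
        recon = acts @ self.W_dec + self.b_dec
        return recon, acts
```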

Model Details

Architecture

  • Layer: 9
  • Layer Type: hook_resid_post
  • Model: open-clip:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K
  • Dictionary Size: 49,152
  • Input Dimension: 768
  • Expansion Factor: 64
  • CLS Token Only: False
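
A sketch of how these dimensions fit together, reusing the SparseAutoencoder class above. The hook_resid_post name is TransformerLens-style; the assumption below is that it corresponds to a plain PyTorch forward hook on the ninth residual block of the underlying open_clip model.

```python
import open_clip
import torch

d_in = 768                 # residual stream width of ViT-B/32
expansion = 64
d_sae = d_in * expansion   # 49,152 dictionary features

sae = SparseAutoencoder(d_in=d_in, d_sae=d_sae)

model, _, preprocess = open_clip.create_model_and_transforms(
    "hf-hub:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K"
)

activations = {}

def save_resid_post(module, inputs, output):
    # Output of the ninth residual block; depending on the open_clip
    # version, the layout may be (seq, batch, d) rather than (batch, seq, d).
    activations["resid_post"] = output.detach()

model.visual.transformer.resblocks[9].register_forward_hook(save_resid_post)
```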

Training

  • Training Images: 1,299,936
  • Learning Rate: 0.0108
  • L1 Coefficient: 0.0000
  • Batch Size: 4096
  • Context Size: 49
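
Under the standard SAE objective assumed here, these hyperparameters enter the loss as a mean-squared reconstruction term plus an L1 penalty on the feature activations, weighted by the L1 coefficient. Each image presumably contributes 49 token activations (the context size), and batches of 4096 activations are drawn from that pool.

```python
import torch

def sae_loss(x: torch.Tensor, recon: torch.Tensor, acts: torch.Tensor,
             l1_coeff: float) -> torch.Tensor:
    # Mean-squared reconstruction error over a batch of activations,
    # plus the L1 sparsity penalty on the feature activations.
    mse = (recon - x).pow(2).mean()
    l1 = acts.abs().sum(dim=-1).mean()   # the reported "L1 Loss"
    return mse + l1_coeff * l1
```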

Performance Metrics

Sparsity

  • L0 (Active Features): 854.8915
  • Dead Features: 0
  • Mean Passes Since Fired: 305.1165
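
A sketch of how these statistics are commonly computed; the card does not specify exact formulas, so the bookkeeping below (per-feature counters reset on firing, with "dead" meaning silent beyond some window) is an assumption.

```python
import torch

def sparsity_stats(acts: torch.Tensor, passes_since_fired: torch.Tensor):
    # L0: mean count of nonzero features per input (854.89 reported here).
    l0 = (acts > 0).float().sum(dim=-1).mean()
    # Increment a per-feature counter each pass a feature stays silent;
    # a feature is typically declared dead once this exceeds some window.
    fired = (acts > 0).any(dim=0)
    passes_since_fired = torch.where(
        fired,
        torch.zeros_like(passes_since_fired),
        passes_since_fired + 1,
    )
    return l0, passes_since_fired
```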

Reconstruction

  • Explained Variance: 1.0000
  • Explained Variance Std: 0.0000
  • MSE Loss: 0.0000
  • L1 Loss: 426.1071
  • Overall Loss: 0.0000
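
One common convention for explained variance over a batch of activations is shown below; the card's exact formula is unspecified, so treat this as an assumption.

```python
import torch

def explained_variance(x: torch.Tensor, recon: torch.Tensor) -> torch.Tensor:
    # 1 - Var(x - recon) / Var(x), computed per example, then averaged.
    resid_var = (x - recon).var(dim=-1)
    total_var = x.var(dim=-1)
    return (1.0 - resid_var / total_var).mean()
```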

Training Details

  • Training Duration: 3993 seconds
  • Final Learning Rate: 0.0000
  • Warm Up Steps: 200
  • Gradient Clipping: 1.0
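
A plausible reconstruction of the schedule these settings describe, assuming linear warm-up; the decay that brings the learning rate to the reported near-zero final value is not stated in the card. The activation_batches iterator and l1_coeff value are hypothetical placeholders.

```python
import torch

optimizer = torch.optim.Adam(sae.parameters(), lr=0.0108)

def warmup(step: int, warmup_steps: int = 200) -> float:
    # Linear warm-up over the first 200 steps; any post-warm-up decay
    # is an assumption, not specified in the card.
    return min(1.0, (step + 1) / warmup_steps)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, warmup)

for batch in activation_batches:  # hypothetical iterator of (4096, 768) tensors
    recon, acts = sae(batch)
    loss = sae_loss(batch, recon, acts, l1_coeff=l1_coeff)  # coeff: placeholder
    optimizer.zero_grad()
    loss.backward()
    # Clip the global gradient norm to 1.0, as reported in the card.
    torch.nn.utils.clip_grad_norm_(sae.parameters(), max_norm=1.0)
    optimizer.step()
    scheduler.step()
```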
