# CLIP Sparse Autoencoder Checkpoint

This checkpoint is a sparse autoencoder (SAE) trained on CLIP's internal representations, specifically the MLP output activations (`hook_mlp_out`) at layer 3 of CLIP ViT-B/32, using CLS-token activations only.

## Model Details

### Architecture

- **Layer**: 3
- **Layer Type**: hook_mlp_out
- **Model**: open-clip:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K
- **Dictionary Size**: 49152
- **Input Dimension**: 768
- **Expansion Factor**: 64
- **CLS Token Only**: True

### Training

- **Training Images**: 188411904
- **Learning Rate**: 0.0002
- **L1 Coefficient**: 0.3
- **Batch Size**: 4096
- **Context Size**: 1

## Performance Metrics

### Sparsity

- **L0 (Active Features)**: 64
- **Dead Features**: 32427
- **Mean Log10 Feature Sparsity**: -9.4249
- **Features Below 1e-5**: 48797
- **Features Below 1e-6**: 43154
- **Mean Passes Since Fired**: 16797.4883

### Reconstruction

- **Explained Variance**: 0.9158
- **Explained Variance Std**: 0.0254
- **MSE Loss**: 0.0001
- **L1 Loss**: 0
- **Overall Loss**: 0.0001

## Training Details

- **Training Duration**: 17955.0808 seconds
- **Final Learning Rate**: 0.0002
- **Warm-up Steps**: 200
- **Gradient Clipping**: 1

## Additional Information

- **Weights & Biases Run**: https://wandb.ai/perceptual-alignment/clip/runs/6u76ee0h
- **Original Checkpoint Path**: /network/scratch/s/sonia.joseph/checkpoints/clip-b
- **Random Seed**: 42
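
## Usage Sketch

The sketch below illustrates the architecture and training objective described on this card: a standard ReLU sparse autoencoder with input dimension 768 and dictionary size 49152 (64× expansion), trained with an MSE reconstruction loss plus an L1 sparsity penalty with coefficient 0.3. It is a minimal, self-contained PyTorch example for orientation only; the class name, parameter names, and initialization here are illustrative assumptions and do not reflect the exact API of the library used to train this checkpoint, so loading the actual weights may require that library's own loader.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


# Illustrative SAE matching the dimensions on this card (768 -> 49152 -> 768).
# Names and initialization are assumptions, not the training library's API.
class SparseAutoencoder(nn.Module):
    def __init__(self, d_in: int = 768, d_sae: int = 49152):
        super().__init__()
        self.W_enc = nn.Parameter(torch.empty(d_in, d_sae))
        self.W_dec = nn.Parameter(torch.empty(d_sae, d_in))
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.b_dec = nn.Parameter(torch.zeros(d_in))
        nn.init.kaiming_uniform_(self.W_enc)
        nn.init.kaiming_uniform_(self.W_dec)

    def forward(self, x: torch.Tensor):
        # Encode: remove decoder bias, project into the dictionary, apply ReLU
        acts = F.relu((x - self.b_dec) @ self.W_enc + self.b_enc)
        # Decode: reconstruct the original activation from sparse feature activations
        recon = acts @ self.W_dec + self.b_dec
        return recon, acts


sae = SparseAutoencoder()

# Stand-in for layer-3 hook_mlp_out CLS-token activations, shape (batch, 768).
# A small batch is used here purely to keep the example lightweight.
x = torch.randn(8, 768)
recon, acts = sae(x)

# Training objective reported on this card: MSE reconstruction loss
# plus an L1 sparsity penalty with coefficient 0.3.
mse_loss = F.mse_loss(recon, x)
l1_loss = acts.abs().sum(dim=-1).mean()
loss = mse_loss + 0.3 * l1_loss
```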