# CLIP Sparse Autoencoder Checkpoint

This model is a sparse autoencoder (SAE) trained on the internal representations of a CLIP vision transformer: specifically, the residual stream after layer 6, using CLS-token activations only.

## Model Details

### Architecture
- **Layer**: 6
- **Hook Point**: hook_resid_post
- **Model**: open-clip:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K
- **Dictionary Size**: 49,152 (768 × 64)
- **Input Dimension**: 768
- **Expansion Factor**: 64
- **CLS Token Only**: True
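For orientation, here is a minimal sketch of the encoder/decoder implied by these shapes (768 → 49,152 → 768). The class and variable names are illustrative, and the pre-decoder bias subtraction is a common SAE convention that may differ from the actual training code:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal SAE matching the listed shapes: 768 -> 49,152 -> 768."""

    def __init__(self, d_in: int = 768, expansion_factor: int = 64):
        super().__init__()
        d_hidden = d_in * expansion_factor  # 49,152 dictionary features
        self.W_enc = nn.Parameter(nn.init.kaiming_uniform_(torch.empty(d_in, d_hidden)))
        self.W_dec = nn.Parameter(nn.init.kaiming_uniform_(torch.empty(d_hidden, d_in)))
        self.b_enc = nn.Parameter(torch.zeros(d_hidden))
        self.b_dec = nn.Parameter(torch.zeros(d_in))

    def forward(self, x: torch.Tensor):
        # x: CLS-token residual-stream activations from layer 6, shape (batch, 768)
        f = torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)  # sparse feature activations
        x_hat = f @ self.W_dec + self.b_dec                         # reconstruction
        return x_hat, f
```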

### Training
- **Training Images**: 146,632,704
- **Learning Rate**: 0.0002
- **L1 Coefficient**: 0.3
- **Batch Size**: 4096
- **Context Size**: 1
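These hyperparameters suggest the standard SAE objective: mean-squared reconstruction error plus an L1 penalty on the feature activations, with the coefficient λ = 0.3 listed above. A sketch (the exact reduction, e.g. sum vs. mean over features, is an assumption):

```python
import torch

def sae_loss(x: torch.Tensor, x_hat: torch.Tensor, f: torch.Tensor,
             l1_coefficient: float = 0.3) -> torch.Tensor:
    # Reconstruction term: how well x_hat matches the original activations.
    mse = ((x_hat - x) ** 2).mean()
    # Sparsity term: L1 norm of feature activations, averaged over the batch.
    l1 = f.abs().sum(dim=-1).mean()
    return mse + l1_coefficient * l1
```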

## Performance Metrics

### Sparsity
- **L0 (Mean Active Features per Input)**: 64
- **Dead Features**: 2956
- **Mean Log10 Feature Sparsity**: -6.6198
- **Features Below 1e-5**: 39990
- **Features Below 1e-6**: 15268
- **Mean Passes Since Fired**: 1532.9647
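The figures above are typically derived from per-feature firing frequencies over an evaluation set; a sketch of the usual definitions (the epsilon and the activation threshold are assumptions, not the exact evaluation code):

```python
import torch

def sparsity_stats(f: torch.Tensor):
    """f: feature activations over an eval set, shape (n_samples, d_hidden)."""
    l0 = (f > 0).float().sum(dim=-1).mean()          # mean active features per input
    fire_freq = (f > 0).float().mean(dim=0)          # per-feature firing frequency
    log10_sparsity = torch.log10(fire_freq + 1e-10)  # epsilon avoids log10(0)
    dead = int((fire_freq == 0).sum())               # features that never fire
    below_1e5 = int((fire_freq < 1e-5).sum())        # near-dead features
    return l0.item(), log10_sparsity.mean().item(), dead, below_1e5
```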

### Reconstruction
- **Explained Variance**: 0.9284
- **Explained Variance Std**: 0.0193
- **MSE Loss**: 0.0003
- **L1 Loss**: 0
- **Overall Loss**: 0.0003
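Explained variance is presumably the usual 1 − Var(residual) / Var(input) ratio, computed per sample and then averaged; a sketch of one common formulation:

```python
import torch

def explained_variance(x: torch.Tensor, x_hat: torch.Tensor):
    # Per-sample fraction of activation variance captured by the reconstruction.
    ev = 1 - (x - x_hat).var(dim=-1) / x.var(dim=-1)
    return ev.mean().item(), ev.std().item()  # mean and std as reported above
```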

## Training Details
- **Training Duration**: 17,924.8003 seconds (≈ 5 hours)
- **Final Learning Rate**: 0.0002
- **Warm-Up Steps**: 200
- **Gradient Clipping**: 1.0
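A sketch of how a 200-step linear warm-up and a clipping norm of 1.0 are commonly wired up in PyTorch (illustrative; `sae` refers to the model sketch above, and the optimizer choice is an assumption):

```python
import torch

optimizer = torch.optim.Adam(sae.parameters(), lr=2e-4)
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lambda step: min(1.0, (step + 1) / 200)  # linear warm-up to the base LR
)

# Inside the training loop, after loss.backward():
torch.nn.utils.clip_grad_norm_(sae.parameters(), max_norm=1.0)
optimizer.step()
scheduler.step()
optimizer.zero_grad()
```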

## Additional Information
- **Weights & Biases Run**: https://wandb.ai/perceptual-alignment/clip/runs/0lz89n0n
- **Original Checkpoint Path**: /network/scratch/s/sonia.joseph/checkpoints/clip-b
- **Random Seed**: 42