narugo committed · Commit 97a1085 · verified · 1 Parent(s): 6cc3921

Export model 'vit_small_patch16_224.augreg_in21k_ft_in1k', on 2025-01-20 06:02:52 UTC

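For context, below is a minimal sketch of how an export like this can be reproduced with `timm` and `torch.onnx.export`. The commit does not show the exporter actually used for this repository, so the opset, tensor names, and export flags are illustrative assumptions, not this repository's tooling.

```python
# Illustrative only: the opset, tensor names, and flags below are assumptions,
# not taken from this commit.
import timm
import torch

# Load the pretrained TIMM checkpoint named in the commit message.
model = timm.create_model("vit_small_patch16_224.augreg_in21k_ft_in1k", pretrained=True)
model = model.eval()

# 224x224 RGB input, matching the "Input Size" column in the README table below.
dummy = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy,
    "model.onnx",
    input_names=["input"],    # hypothetical names
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
    opset_version=14,
)
```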
README.md CHANGED
@@ -114,7 +114,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 # Models
 
-324 models exported from TIMM in total.
+325 models exported from TIMM in total.
 
 ## Beit
 
@@ -747,7 +747,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 ## VisionTransformer
 
-40 models with model class `VisionTransformer`.
+41 models with model class `VisionTransformer`.
 
 | Name | Params | Flops | Input Size | Can Classify | Features | Classes | Dataset | Model | Architecture | Created At |
 |:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------|:--------|-------------:|:---------------|-----------:|----------:|:-------------|:------------------|:---------------------------------|:-------------|
@@ -787,6 +787,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 | [vit_base_patch32_clip_224.laion2b_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch32_clip_224.laion2b_ft_in12k_in1k) | 88.2M | 4.4G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch32_clip_224 | 2022-11-05 |
 | [vit_base_patch32_224.sam_in1k](https://huggingface.co/timm/vit_base_patch32_224.sam_in1k) | 88.2M | 4.4G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch32_224 | 2022-12-22 |
 | [vit_base_patch32_224.augreg_in21k_ft_in1k](https://huggingface.co/timm/vit_base_patch32_224.augreg_in21k_ft_in1k) | 88.2M | 4.4G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch32_224 | 2022-12-22 |
+| [vit_small_patch16_224.augreg_in21k_ft_in1k](https://huggingface.co/timm/vit_small_patch16_224.augreg_in21k_ft_in1k) | 22.0M | 4.2G | 224 | True | 384 | 1000 | imagenet-1k | VisionTransformer | vit_small_patch16_224 | 2022-12-22 |
 | [vit_small_patch16_224.augreg_in1k](https://huggingface.co/timm/vit_small_patch16_224.augreg_in1k) | 22.0M | 4.2G | 224 | True | 384 | 1000 | imagenet-1k | VisionTransformer | vit_small_patch16_224 | 2022-12-22 |
 | [vit_small_r26_s32_384.augreg_in21k_ft_in1k](https://huggingface.co/timm/vit_small_r26_s32_384.augreg_in21k_ft_in1k) | 22.5M | 3.2G | 384 | True | 384 | 1000 | imagenet-1k | VisionTransformer | vit_small_r26_s32_384 | 2022-12-23 |
 | [test_vit2.r160_in1k](https://huggingface.co/timm/test_vit2.r160_in1k) | 448.2K | 38.5M | 160 | True | 64 | 1000 | imagenet-1k | VisionTransformer | test_vit2 | 2024-09-22 |
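The new row above lists a 22.0M-parameter model with a 224x224 input, 384-dimensional features, and 1000 imagenet-1k classes. A minimal sketch of running the exported `model.onnx` with onnxruntime follows; the repository id is a placeholder, and the plain resize plus ImageNet normalization is an assumption (the authoritative recipe is in the model's `preprocess.json`, added later in this commit).

```python
# Sketch only: repository id and preprocessing are assumptions.
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download
from PIL import Image

repo_id = "<this-repo>"  # fill in the repository this commit belongs to
model_path = hf_hub_download(repo_id, "vit_small_patch16_224.augreg_in21k_ft_in1k/model.onnx")

session = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name  # avoid hard-coding the tensor name

# 224x224 input, per the "Input Size" column in the table above.
img = Image.open("example.jpg").convert("RGB").resize((224, 224))
x = np.asarray(img, dtype=np.float32) / 255.0
x = (x - np.array([0.485, 0.456, 0.406])) / np.array([0.229, 0.224, 0.225])  # assumed ImageNet mean/std
x = x.transpose(2, 0, 1)[None].astype(np.float32)  # NCHW, batch of 1

logits = session.run(None, {input_name: x})[0]
print("argmax over the 1000 imagenet-1k classes:", int(np.argmax(logits)))
```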
models.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:44eed6002cb4b018baa0df2281814e2491ed23a75590d3f2e5dcd9323d1c4245
-size 27682
+oid sha256:f2503a3f79231dd17aa09bffa96703c550ad77fc5577f6650eb177da63c481fe
+size 27696
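The `models.parquet` change above only swaps the Git LFS pointer (the index grows from 27,682 to 27,696 bytes). A minimal sketch for inspecting the updated index, without assuming its column layout:

```python
# Loads the models.parquet index and prints its shape and columns.
# Repository id is a placeholder; no column names are assumed.
import pandas as pd
from huggingface_hub import hf_hub_download

repo_id = "<this-repo>"  # fill in the repository this commit belongs to
path = hf_hub_download(repo_id, "models.parquet")

df = pd.read_parquet(path)
print(df.shape)              # expected to reflect the 325 exports if each row is one model
print(df.columns.tolist())
print(df.tail())             # the newest entries, including this export
```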
vit_small_patch16_224.augreg_in21k_ft_in1k/meta.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:08d09eb4c0c6ebb9cb8beda476f1fdaa2a269110de926548d3b391045995de27
+size 169859
vit_small_patch16_224.augreg_in21k_ft_in1k/model.onnx ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9e527cbe2df0ae481e784634f311bec5fdd1159fdb494db50878ecf5a3fb8fde
+size 88374855
vit_small_patch16_224.augreg_in21k_ft_in1k/preprocess.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:75c033d894bbd7f7cf6880042701e974ed810733c52b2db8b094efeebf78fed2
+size 642
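Together, the three added files form the per-model layout `vit_small_patch16_224.augreg_in21k_ft_in1k/{model.onnx, meta.json, preprocess.json}` (roughly 88 MB, 170 KB, and 642 bytes respectively, per the LFS pointers). Their JSON schemas are not spelled out in this commit, so the sketch below only downloads the two sidecar files and prints their top-level structure:

```python
# Downloads the two JSON sidecar files and prints their top-level keys.
# Repository id is a placeholder; no schema is assumed.
import json
from huggingface_hub import hf_hub_download

repo_id = "<this-repo>"  # fill in the repository this commit belongs to
prefix = "vit_small_patch16_224.augreg_in21k_ft_in1k"

for name in ("meta.json", "preprocess.json"):
    path = hf_hub_download(repo_id, f"{prefix}/{name}")
    with open(path, "r", encoding="utf-8") as f:
        data = json.load(f)
    top = list(data.keys()) if isinstance(data, dict) else f"<{type(data).__name__}>"
    print(f"{name}: {top}")
```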