narugo committed · verified
Commit 9fada07 · 1 Parent(s): 9a4f1c5

Export model 'vit_base_patch16_clip_224.openai_ft_in12k_in1k', on 2025-01-20 04:49:08 UTC
README.md CHANGED
@@ -114,7 +114,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 # Models
 
-180 models exported from TIMM in total.
+181 models exported from TIMM in total.
 
 ## Beit
 
@@ -561,7 +561,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 ## VisionTransformer
 
-20 models with model class `VisionTransformer`.
+21 models with model class `VisionTransformer`.
 
 | Name | Params | Flops | Input Size | Can Classify | Features | Classes | Dataset | Model | Architecture | Created At |
 |:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------|:--------|-------------:|:---------------|-----------:|----------:|:-------------|:------------------|:---------------------------------|:-------------|
@@ -573,6 +573,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 | [vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k](https://huggingface.co/timm/vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k) | 64.0M | 36.8G | 384 | True | 512 | 1000 | imagenet-1k | VisionTransformer | vit_mediumd_patch16_reg4_gap_384 | 2024-08-21 |
 | [vit_medium_patch16_gap_384.sw_in12k_ft_in1k](https://huggingface.co/timm/vit_medium_patch16_gap_384.sw_in12k_ft_in1k) | 38.7M | 22.0G | 384 | True | 512 | 1000 | imagenet-1k | VisionTransformer | vit_medium_patch16_gap_384 | 2022-12-02 |
 | [vit_base_patch16_clip_224.laion2b_ft_in12k](https://huggingface.co/timm/vit_base_patch16_clip_224.laion2b_ft_in12k) | 94.7M | 16.9G | 224 | True | 768 | 11821 | imagenet-12k | VisionTransformer | vit_base_patch16_clip_224 | 2022-11-09 |
+| [vit_base_patch16_clip_224.openai_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch16_clip_224.openai_ft_in12k_in1k) | 86.4M | 16.9G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_224 | 2022-11-27 |
 | [vit_base_patch16_224.orig_in21k](https://huggingface.co/timm/vit_base_patch16_224.orig_in21k) | 85.6M | 16.9G | 224 | False | 768 | 768 | | VisionTransformer | vit_base_patch16_224 | 2023-11-16 |
 | [vit_base_patch16_rpn_224.sw_in1k](https://huggingface.co/timm/vit_base_patch16_rpn_224.sw_in1k) | 86.4M | 16.8G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_rpn_224 | 2022-12-22 |
 | [vit_betwixt_patch16_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_betwixt_patch16_reg1_gap_256.sbb_in1k) | 60.2M | 15.3G | 256 | True | 640 | 1000 | imagenet-1k | VisionTransformer | vit_betwixt_patch16_reg1_gap_256 | 2024-05-10 |
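The new README row lists the added export at 224×224 input, 768-dim features, and 1000 ImageNet-1k classes. A minimal sketch of building an input tensor for the exported `model.onnx` — assuming the shipped `preprocess.json` uses the standard OpenAI CLIP normalization that timm applies to `.openai` weights (an assumption; verify against the actual file):

```python
import numpy as np

# Assumed OpenAI CLIP normalization constants (check preprocess.json in the repo).
CLIP_MEAN = np.array([0.48145466, 0.4578275, 0.40821073], dtype=np.float32)
CLIP_STD = np.array([0.26862954, 0.26130258, 0.27577711], dtype=np.float32)

def preprocess(rgb_uint8: np.ndarray) -> np.ndarray:
    """Turn an HxWx3 uint8 RGB image (already resized to 224x224) into the
    1x3x224x224 float32 NCHW tensor a 224-input ViT export expects."""
    x = rgb_uint8.astype(np.float32) / 255.0
    x = (x - CLIP_MEAN) / CLIP_STD           # per-channel normalization
    return x.transpose(2, 0, 1)[None, ...]   # HWC -> NCHW, add batch dim

# Hypothetical inference call (needs onnxruntime and the LFS-pulled weights):
# import onnxruntime as ort
# sess = ort.InferenceSession("vit_base_patch16_clip_224.openai_ft_in12k_in1k/model.onnx")
# logits = sess.run(None, {sess.get_inputs()[0].name: preprocess(img)})[0]  # (1, 1000)

img = np.zeros((224, 224, 3), dtype=np.uint8)
tensor = preprocess(img)
print(tensor.shape, tensor.dtype)  # (1, 3, 224, 224) float32
```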
models.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1e4e1121936498ce650971edab3d17a5d6a9a9ff6d02be38917c0348bc19e0b9
-size 20217
+oid sha256:9271c0658b9865cfead2a1d92f430d387a4a4c3549117b8d26192e3aa3612e63
+size 20247
vit_base_patch16_clip_224.openai_ft_in12k_in1k/meta.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6a66c71f44e000da08842a0e40d7af8c15f4865925b6b12555c8b3f0465d221a
+size 169872
vit_base_patch16_clip_224.openai_ft_in12k_in1k/model.onnx ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:626792215f8d88a23e393a810dc1d601586dc333fb853679cd03113434024c62
+size 346447181
vit_base_patch16_clip_224.openai_ft_in12k_in1k/preprocess.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7710581122c0ade9d243704766c84da8b0bd93f2f64453d7eb8678699bfd9a55
+size 736
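Each binary payload in this commit is stored as a three-line Git LFS pointer (`version` / `oid` / `size`); the blobs themselves live in LFS storage. A small illustrative parser for that pointer format (not part of the repo):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file (git-lfs spec v1) into its key/value
    fields; 'oid' is 'sha256:<hex>' and 'size' is the payload size in bytes."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    fields["size"] = int(fields["size"])
    return fields

# The model.onnx pointer from this commit:
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:626792215f8d88a23e393a810dc1d601586dc333fb853679cd03113434024c62
size 346447181
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # 346447181 bytes, i.e. roughly 330 MiB of ONNX weights
```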