narugo committed (verified)
Commit 0f86019 · Parent: c4b799d

Export model 'vit_base_patch16_siglip_gap_256.webli_i18n', on 2025-01-20 04:55:50 UTC

README.md CHANGED
@@ -16,7 +16,6 @@ base_model:
  - timm/convnext_nano.r384_ad_in12k
  - timm/convnext_nano.r384_in12k_ft_in1k
  - timm/convnext_zepto_rms.ra4_e3600_r224_in1k
- - timm/cs3edgenet_x.c2_in1k
  - timm/cs3sedarknet_x.c2ns_in1k
  - timm/cspresnet50.ra_in1k
  - timm/cspresnext50.ra_in1k
@@ -88,6 +87,7 @@ base_model:
  - timm/vit_base_patch14_reg4_dinov2.lvd142m
  - timm/vit_base_patch16_224.orig_in21k
  - timm/vit_base_patch16_rope_reg1_gap_256.sbb_in1k
+ - timm/vit_base_patch16_siglip_gap_256.webli_i18n
  - timm/vit_betwixt_patch16_reg1_gap_256.sbb_in1k
  - timm/vit_little_patch16_reg1_gap_256.sbb_in12k_ft_in1k
  - timm/vit_little_patch16_reg4_gap_256.sbb_in1k
@@ -114,7 +114,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).

  # Models

- 196 models exported from TIMM in total.
+ 197 models exported from TIMM in total.

  ## Beit

@@ -574,7 +574,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).

  ## VisionTransformer

- 23 models with model class `VisionTransformer`.
+ 24 models with model class `VisionTransformer`.

  | Name | Params | Flops | Input Size | Can Classify | Features | Classes | Dataset | Model | Architecture | Created At |
  |:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------|:--------|-------------:|:---------------|-----------:|----------:|:-------------|:------------------|:---------------------------------|:-------------|
@@ -585,6 +585,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
  | [vit_base_patch16_clip_384.laion2b_ft_in1k](https://huggingface.co/timm/vit_base_patch16_clip_384.laion2b_ft_in1k) | 86.4M | 49.4G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_384 | 2022-11-09 |
  | [vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k](https://huggingface.co/timm/vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k) | 64.0M | 36.8G | 384 | True | 512 | 1000 | imagenet-1k | VisionTransformer | vit_mediumd_patch16_reg4_gap_384 | 2024-08-21 |
  | [vit_medium_patch16_gap_384.sw_in12k_ft_in1k](https://huggingface.co/timm/vit_medium_patch16_gap_384.sw_in12k_ft_in1k) | 38.7M | 22.0G | 384 | True | 512 | 1000 | imagenet-1k | VisionTransformer | vit_medium_patch16_gap_384 | 2022-12-02 |
+ | [vit_base_patch16_siglip_gap_256.webli_i18n](https://huggingface.co/timm/vit_base_patch16_siglip_gap_256.webli_i18n) | 85.6M | 21.9G | 256 | False | 768 | 768 | | VisionTransformer | vit_base_patch16_siglip_gap_256 | 2024-12-24 |
  | [flexivit_base.300ep_in1k](https://huggingface.co/timm/flexivit_base.300ep_in1k) | 86.4M | 19.3G | 240 | True | 768 | 1000 | imagenet-1k | VisionTransformer | flexivit_base | 2022-12-22 |
  | [vit_base_patch16_clip_224.laion2b_ft_in12k](https://huggingface.co/timm/vit_base_patch16_clip_224.laion2b_ft_in12k) | 94.7M | 16.9G | 224 | True | 768 | 11821 | imagenet-12k | VisionTransformer | vit_base_patch16_clip_224 | 2022-11-09 |
  | [vit_base_patch16_clip_224.openai_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch16_clip_224.openai_ft_in12k_in1k) | 86.4M | 16.9G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_224 | 2022-11-27 |
models.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:94c2aa48bcf09294169459593bfbbfde83e15f3921071a7eeee86f39f5d1ca6a
- size 20917
+ oid sha256:da6fbf7eab1f3215653c141473e3c975f6d27e51b0bd27d4963914049913744c
+ size 21039
vit_base_patch16_siglip_gap_256.webli_i18n/meta.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eaa02a5f9d78cf48a9ca4c68093b6974e954b6abc3fcd730d308cc343c194153
+ size 506
vit_base_patch16_siglip_gap_256.webli_i18n/model.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0cbd3dd7b0395bb782388e819fd1da013e8f7c3df0151e8bd1fc32f85817c798
+ size 343543160
vit_base_patch16_siglip_gap_256.webli_i18n/preprocess.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bcb7ada004055ac16f409db7319239a9d88df43565f1e2bfc9ecba7619be6355
+ size 642
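
For reference, a minimal sketch of how the exported artifact might be consumed with onnxruntime. Everything beyond the file layout shown in this commit is an assumption: the input name is queried from the graph rather than known, the preprocessing (plain resize and [0, 1] scaling) is a stand-in for whatever the bundled preprocess.json actually specifies, and a single embedding output is assumed (the README table lists `Can Classify: False` with 768 features).

```python
import numpy as np
import onnxruntime as ort
from PIL import Image

REPO_DIR = "vit_base_patch16_siglip_gap_256.webli_i18n"  # file layout from this commit

# Load the exported graph; input/output names come from the model itself,
# so we query them instead of hard-coding anything.
session = ort.InferenceSession(f"{REPO_DIR}/model.onnx")
input_name = session.get_inputs()[0].name

# Hypothetical preprocessing: resize to the 256x256 input size from the README
# table and scale to [0, 1]. The real mean/std normalization should be read
# from preprocess.json, whose contents this commit does not show.
image = Image.open("example.jpg").convert("RGB").resize((256, 256))
x = np.asarray(image, dtype=np.float32) / 255.0
x = x.transpose(2, 0, 1)[np.newaxis, ...]  # HWC -> NCHW, add batch dimension

outputs = session.run(None, {input_name: x})
print(outputs[0].shape)  # expected (1, 768): an embedding, since the model does not classify
```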