narugo committed · Commit 9a4f1c5 · verified · 1 Parent(s): c94d71f

Export model 'vit_base_patch16_clip_384.openai_ft_in12k_in1k', on 2025-01-20 04:48:11 UTC

README.md CHANGED
@@ -114,7 +114,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
  # Models
 
- 179 models exported from TIMM in total.
+ 180 models exported from TIMM in total.
 
  ## Beit
 
@@ -561,12 +561,13 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
  ## VisionTransformer
 
- 19 models with model class `VisionTransformer`.
+ 20 models with model class `VisionTransformer`.
 
  | Name | Params | Flops | Input Size | Can Classify | Features | Classes | Dataset | Model | Architecture | Created At |
  |:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------|:--------|-------------:|:---------------|-----------:|----------:|:-------------|:------------------|:---------------------------------|:-------------|
  | [vit_base_patch14_reg4_dinov2.lvd142m](https://huggingface.co/timm/vit_base_patch14_reg4_dinov2.lvd142m) | 85.5M | 117.4G | 518 | False | 768 | 768 | | VisionTransformer | vit_base_patch14_reg4_dinov2 | 2023-10-30 |
  | [vit_base_r50_s16_384.orig_in21k_ft_in1k](https://huggingface.co/timm/vit_base_r50_s16_384.orig_in21k_ft_in1k) | 86.6M | 49.5G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_r50_s16_384 | 2022-12-23 |
+ | [vit_base_patch16_clip_384.openai_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch16_clip_384.openai_ft_in12k_in1k) | 86.4M | 49.4G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_384 | 2022-11-30 |
  | [vit_base_patch16_clip_384.laion2b_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch16_clip_384.laion2b_ft_in12k_in1k) | 86.4M | 49.4G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_384 | 2022-11-11 |
  | [vit_base_patch16_clip_384.laion2b_ft_in1k](https://huggingface.co/timm/vit_base_patch16_clip_384.laion2b_ft_in1k) | 86.4M | 49.4G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_384 | 2022-11-09 |
  | [vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k](https://huggingface.co/timm/vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k) | 64.0M | 36.8G | 384 | True | 512 | 1000 | imagenet-1k | VisionTransformer | vit_mediumd_patch16_reg4_gap_384 | 2024-08-21 |
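For context, a minimal sketch of how an export like the one recorded in this commit could be reproduced with `timm` and `torch.onnx.export`. The actual export pipeline is not part of this diff; the opset version, input/output naming, and dynamic axes below are assumptions. The model name and the 384 input size come from the README table above.

```python
# Minimal sketch (not the pipeline used for this repo): export the newly added
# timm checkpoint to ONNX. Opset and I/O naming are assumptions.
import timm
import torch

model = timm.create_model(
    "vit_base_patch16_clip_384.openai_ft_in12k_in1k",  # model added in this commit
    pretrained=True,
)
model.eval()

# 384x384 input, as listed in the README table for this model.
dummy = torch.randn(1, 3, 384, 384)

torch.onnx.export(
    model,
    dummy,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
    opset_version=14,  # assumption; the opset used for this repo is not stated
)
```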
models.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:09de480a198ef6ae484012cf57fd4f3e5b2fd06fd3079564c2d840f362af8922
- size 20165
+ oid sha256:1e4e1121936498ce650971edab3d17a5d6a9a9ff6d02be38917c0348bc19e0b9
+ size 20217
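`models.parquet` is the machine-readable index of exported models that this commit updates. A quick way to inspect it is sketched below; the column names are an assumption (expected to mirror the README table), since the schema is not shown in this diff.

```python
# Minimal sketch: inspect the updated models.parquet index with pandas.
# The schema is assumed to mirror the README table (name, params, flops,
# input size, ...); verify the real columns before relying on them.
import pandas as pd

df = pd.read_parquet("models.parquet")
print(len(df))      # expected to be 180 if each row is one exported model
print(df.columns)   # check the actual column names
print(df.head())
```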
vit_base_patch16_clip_384.openai_ft_in12k_in1k/meta.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f377f4eccba11200ae7a23d978b25885a05f3bbe2949a6e6d7a95a4914081d88
+ size 169872
vit_base_patch16_clip_384.openai_ft_in12k_in1k/model.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9bcc25a55b311520d4163a09d925927a4e328112520ab0657848f1202ce443d6
+ size 347614541
vit_base_patch16_clip_384.openai_ft_in12k_in1k/preprocess.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1aff40252f90ad354aaaae1756fd96b99806ef7109c52bfdd4412bbaa3179707
+ size 789
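A hedged sketch of how the newly added files could be consumed with `onnxruntime` and `huggingface_hub`. The repository id is a placeholder, and the preprocessing (384x384 resize, CLIP-style normalization) is an assumption rather than something read from the bundled `preprocess.json`, whose contents are not shown in this commit.

```python
# Minimal sketch: download and run the ONNX export added in this commit.
# "your-namespace/your-onnx-repo" is a placeholder, and the preprocessing
# below is assumed; consult the bundled preprocess.json for the real settings.
import json

import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download
from PIL import Image

subdir = "vit_base_patch16_clip_384.openai_ft_in12k_in1k"
repo_id = "your-namespace/your-onnx-repo"  # placeholder, not the real repo id

model_path = hf_hub_download(repo_id, f"{subdir}/model.onnx")
preprocess_path = hf_hub_download(repo_id, f"{subdir}/preprocess.json")

with open(preprocess_path) as f:
    preprocess_cfg = json.load(f)  # actual keys are defined by this repo, not shown here

# Assumed preprocessing: 384x384 input (per the README table), CLIP mean/std.
image = Image.open("example.jpg").convert("RGB").resize((384, 384))
x = np.asarray(image, dtype=np.float32) / 255.0
mean = np.array([0.48145466, 0.4578275, 0.40821073], dtype=np.float32)
std = np.array([0.26862954, 0.26130258, 0.27577711], dtype=np.float32)
x = ((x - mean) / std).transpose(2, 0, 1)[None]  # NCHW, batch of 1

session = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: x})
print(outputs[0].shape)  # expected (1, 1000) for the imagenet-1k classifier head
```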