narugo committed (verified) · Commit df0fc76 · 1 Parent(s): e3046ab

Export model 'vit_base_patch16_clip_224.laion2b_ft_in12k_in1k', on 2025-01-20 05:22:16 UTC

README.md CHANGED
@@ -114,7 +114,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 # Models
 
- 253 models exported from TIMM in total.
+ 254 models exported from TIMM in total.
 
 ## Beit
 
@@ -649,7 +649,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 ## VisionTransformer
 
- 31 models with model class `VisionTransformer`.
+ 32 models with model class `VisionTransformer`.
 
 | Name | Params | Flops | Input Size | Can Classify | Features | Classes | Dataset | Model | Architecture | Created At |
 |:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------|:--------|-------------:|:---------------|-----------:|----------:|:-------------|:------------------|:---------------------------------|:-------------|
@@ -666,6 +666,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 | [flexivit_base.300ep_in1k](https://huggingface.co/timm/flexivit_base.300ep_in1k) | 86.4M | 19.3G | 240 | True | 768 | 1000 | imagenet-1k | VisionTransformer | flexivit_base | 2022-12-22 |
 | [vit_base_patch16_clip_224.laion2b_ft_in12k](https://huggingface.co/timm/vit_base_patch16_clip_224.laion2b_ft_in12k) | 94.7M | 16.9G | 224 | True | 768 | 11821 | imagenet-12k | VisionTransformer | vit_base_patch16_clip_224 | 2022-11-09 |
 | [vit_base_patch16_clip_224.openai_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch16_clip_224.openai_ft_in12k_in1k) | 86.4M | 16.9G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_224 | 2022-11-27 |
+ | [vit_base_patch16_clip_224.laion2b_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch16_clip_224.laion2b_ft_in12k_in1k) | 86.4M | 16.9G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_224 | 2022-11-27 |
 | [deit_base_patch16_224.fb_in1k](https://huggingface.co/timm/deit_base_patch16_224.fb_in1k) | 86.4M | 16.9G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | deit_base_patch16_224 | 2023-03-28 |
 | [vit_base_patch16_224.orig_in21k](https://huggingface.co/timm/vit_base_patch16_224.orig_in21k) | 85.6M | 16.9G | 224 | False | 768 | 768 | | VisionTransformer | vit_base_patch16_224 | 2023-11-16 |
 | [vit_base_patch16_rpn_224.sw_in1k](https://huggingface.co/timm/vit_base_patch16_rpn_224.sw_in1k) | 86.4M | 16.8G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_rpn_224 | 2022-12-22 |
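The commit adds `vit_base_patch16_clip_224.laion2b_ft_in12k_in1k` as a new ONNX export from TIMM. The export pipeline itself is not part of this diff; the following is only a minimal sketch of how such an export might be produced with `timm` and `torch.onnx.export`, assuming the 224x224 input size listed in the README table (opset version and tensor names are assumptions).

```python
# Minimal sketch only: the actual export script used for this repository is not
# shown in the diff. Model name and input size come from the README table above;
# opset version, tensor names, and dynamic axes are assumptions.
import timm
import torch

model = timm.create_model(
    "vit_base_patch16_clip_224.laion2b_ft_in12k_in1k", pretrained=True
).eval()

dummy = torch.randn(1, 3, 224, 224)  # 224x224 input per the README table
torch.onnx.export(
    model,
    dummy,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
    opset_version=14,  # assumed; the repo's real export settings may differ
)
```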
models.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:a68789656e428aeed66cdbea4a798c2f0508f456583182f402fc0895f2827e30
- size 24065
+ oid sha256:18b956477c60f0b9949ca36ecc5792fd08d79ea79f3b7bf791074e8e821b982b
+ size 24085
vit_base_patch16_clip_224.laion2b_ft_in12k_in1k/meta.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c424f26d7f15942ad3258b07d5c4ab8986c0479ef29a343724fb8afbfebbe195
+ size 169874
vit_base_patch16_clip_224.laion2b_ft_in12k_in1k/model.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d5b6fcf1a3a75ccc39aff114c15395a091792842eeb9af6c7ab08cf08d9b1a7e
+ size 346447181
vit_base_patch16_clip_224.laion2b_ft_in12k_in1k/preprocess.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7710581122c0ade9d243704766c84da8b0bd93f2f64453d7eb8678699bfd9a55
+ size 736
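The new files are an ONNX graph (`model.onnx`) plus `meta.json` and `preprocess.json` sidecars. A minimal sketch of loading the exported graph with `onnxruntime` follows; the repository id is a placeholder, and real preprocessing should follow `preprocess.json`, which is not expanded here.

```python
# Minimal sketch only: "namespace/repo" is a placeholder for this repository's
# actual id, a random tensor stands in for a properly preprocessed image, and
# the input name is read from the session rather than assumed.
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download

onnx_path = hf_hub_download(
    repo_id="namespace/repo",  # placeholder, not the real repo id
    filename="vit_base_patch16_clip_224.laion2b_ft_in12k_in1k/model.onnx",
)
session = ort.InferenceSession(onnx_path)

x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # dummy 224x224 input
outputs = session.run(None, {session.get_inputs()[0].name: x})
# The README table lists 768 features and 1000 imagenet-1k classes for this
# model; inspect the shapes to see which output is which in this export.
print([o.shape for o in outputs])
```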