Export model 'deit_base_patch16_224.fb_in1k', on 2025-01-20 04:49:55 UTC
README.md
CHANGED
@@ -114,7 +114,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 # Models
 
-
+182 models exported from TIMM in total.
 
 ## Beit
 
@@ -561,7 +561,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 ## VisionTransformer
 
-
+22 models with model class `VisionTransformer`.
 
 | Name | Params | Flops | Input Size | Can Classify | Features | Classes | Dataset | Model | Architecture | Created At |
 |:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------|:--------|-------------:|:---------------|-----------:|----------:|:-------------|:------------------|:---------------------------------|:-------------|
@@ -574,6 +574,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 | [vit_medium_patch16_gap_384.sw_in12k_ft_in1k](https://huggingface.co/timm/vit_medium_patch16_gap_384.sw_in12k_ft_in1k) | 38.7M | 22.0G | 384 | True | 512 | 1000 | imagenet-1k | VisionTransformer | vit_medium_patch16_gap_384 | 2022-12-02 |
 | [vit_base_patch16_clip_224.laion2b_ft_in12k](https://huggingface.co/timm/vit_base_patch16_clip_224.laion2b_ft_in12k) | 94.7M | 16.9G | 224 | True | 768 | 11821 | imagenet-12k | VisionTransformer | vit_base_patch16_clip_224 | 2022-11-09 |
 | [vit_base_patch16_clip_224.openai_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch16_clip_224.openai_ft_in12k_in1k) | 86.4M | 16.9G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_224 | 2022-11-27 |
+| [deit_base_patch16_224.fb_in1k](https://huggingface.co/timm/deit_base_patch16_224.fb_in1k) | 86.4M | 16.9G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | deit_base_patch16_224 | 2023-03-28 |
 | [vit_base_patch16_224.orig_in21k](https://huggingface.co/timm/vit_base_patch16_224.orig_in21k) | 85.6M | 16.9G | 224 | False | 768 | 768 | | VisionTransformer | vit_base_patch16_224 | 2023-11-16 |
 | [vit_base_patch16_rpn_224.sw_in1k](https://huggingface.co/timm/vit_base_patch16_rpn_224.sw_in1k) | 86.4M | 16.8G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_rpn_224 | 2022-12-22 |
 | [vit_betwixt_patch16_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_betwixt_patch16_reg1_gap_256.sbb_in1k) | 60.2M | 15.3G | 256 | True | 640 | 1000 | imagenet-1k | VisionTransformer | vit_betwixt_patch16_reg1_gap_256 | 2024-05-10 |
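Note: the export script itself is not part of this commit, so the snippet below is only a minimal sketch of how a TIMM checkpoint such as `deit_base_patch16_224.fb_in1k` could be exported to ONNX, assuming `timm` and `torch` are installed. The input/output names, opset version, and output file name are assumptions (the file name simply mirrors the `model.onnx` added here); the 224x224 input comes from the Input Size column above.

```python
# Hypothetical export sketch -- not this repository's actual export pipeline.
import timm
import torch

model_name = "deit_base_patch16_224.fb_in1k"
model = timm.create_model(model_name, pretrained=True).eval()

# 1x3x224x224 dummy input, per the Input Size column in the README table.
dummy = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy,
    "model.onnx",                      # same file name as the one added in this commit
    input_names=["input"],             # assumed names
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
    opset_version=17,                  # assumed opset
)
```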
deit_base_patch16_224.fb_in1k/meta.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e8eb10b798f735e9f74d74088e4adab859a2811cef91044eb97fa0b28b070202
+size 169834
deit_base_patch16_224.fb_in1k/model.onnx
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:95276d564a8276a1c13b9ff10c700f2b7ba207cf99f13e9da893e8f8869459fc
+size 346442876
deit_base_patch16_224.fb_in1k/preprocess.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0c893c9365d4dd7675e5a744c55cbf3af06c8aeeabcbe2db46ba48f5fae256c5
+size 734
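The three files added above are git-lfs pointers; the actual ONNX graph and JSON configs live in LFS storage on the Hub. A minimal inference sketch follows, assuming `huggingface_hub`, `onnxruntime`, and `numpy` are installed. The repository id is a placeholder (it is not named in this diff), and the random input stands in for real preprocessing, which should follow `preprocess.json`.

```python
# Hypothetical usage sketch; REPO_ID is a placeholder for this repository's id.
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download

REPO_ID = "<owner>/<this-repo>"  # placeholder, replace with the actual repo id

onnx_path = hf_hub_download(REPO_ID, "deit_base_patch16_224.fb_in1k/model.onnx")

session = ort.InferenceSession(onnx_path, providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# Dummy 1x3x224x224 float input; real inputs need the resize/normalize steps
# described in preprocess.json.
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
logits = session.run(None, {input_name: x})[0]
print(logits.shape)  # expected (1, 1000) for this imagenet-1k classifier
```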
models.parquet
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:014489d7b43fd3af749453290124947626616ae9232639cbab6f5509e4c37b18
+size 20296
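`models.parquet` appears to be the machine-readable index behind the README tables; this commit updates it alongside the new deit entry and the "182 models" count. A short inspection sketch, assuming `pandas` with a parquet engine is installed; the column names are guesses based on the README table headers, not confirmed by the diff.

```python
# Hypothetical inspection sketch; column names are assumed from the README table.
import pandas as pd

df = pd.read_parquet("models.parquet")
print(len(df))  # should match the "182 models exported from TIMM in total." line

# Filter rows whose model class is VisionTransformer (assumed column name "Model").
vit = df[df["Model"] == "VisionTransformer"]
print(vit[["Name", "Params", "Input Size"]].head())
```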