ONNX-exported versions of the OpenAI CLIP models, published as deepghs/clip_onnx.

## Models

4 models are exported in total.

| Name | Image (Params / FLOPS) | Image Size | Image Width (Enc / Emb) | Text (Params / FLOPS) | Text Width (Enc / Emb) | Created At |
|------|------------------------|------------|-------------------------|-----------------------|----------------------|------------|
| openai/clip-vit-large-patch14-336 | 302.9M / 174.7G | 336 | 1024 / 768 | 85.1M / 1.2G | 768 / 768 | 2022-04-22 |
| openai/clip-vit-large-patch14 | 302.9M / 77.8G | 224 | 1024 / 768 | 85.1M / 1.2G | 768 / 768 | 2022-03-03 |
| openai/clip-vit-base-patch16 | 85.6M / 16.9G | 224 | 768 / 512 | 37.8M / 529.2M | 512 / 512 | 2022-03-03 |
| openai/clip-vit-base-patch32 | 87.4M / 4.4G | 224 | 768 / 512 | 37.8M / 529.2M | 512 / 512 | 2022-03-03 |
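These exports target zero-shot classification: the image encoder and text encoder each produce an embedding (512-dim for the base models, per the table above), and labels are scored by scaled cosine similarity followed by a softmax. A minimal sketch of that scoring step, using dummy NumPy arrays in place of the ONNX encoder outputs (the `logit_scale` value and the function name here are illustrative assumptions, not part of this repository):

```python
import numpy as np

def zero_shot_scores(image_emb, text_embs, logit_scale=100.0):
    # L2-normalize both sides, as CLIP does before comparison
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    # Scaled cosine similarity: one logit per candidate label
    logits = logit_scale * (txt @ img)
    # Numerically stable softmax over the labels
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Dummy 512-dim embeddings standing in for the ONNX encoder outputs
# of a base model (e.g. openai/clip-vit-base-patch32)
rng = np.random.default_rng(0)
image_emb = rng.normal(size=512)
text_embs = rng.normal(size=(3, 512))  # three candidate labels
probs = zero_shot_scores(image_emb, text_embs)
```

In practice the two embeddings would come from running the exported image and text ONNX graphs with `onnxruntime`; only the scoring arithmetic is shown here.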
Two quantized variants of deepghs/clip_onnx are available in its model tree.