CLIPSeg model

CLIPSeg model with reduced dimension 16 (rd16). It was introduced in the paper Image Segmentation Using Text and Image Prompts by Lüddecke et al. and first released in this repository.

Intended use cases

This model is intended for zero-shot and one-shot image segmentation.

Usage

Refer to the documentation.
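As a quick start, a minimal sketch using the standard `transformers` CLIPSeg classes (`CLIPSegProcessor` and `CLIPSegForImageSegmentation`); the example image URL is a COCO validation image commonly used in the Hugging Face docs and is only illustrative:

```python
import requests
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

# Load the processor and model from the Hub.
processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd16")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd16")

# Example image (COCO val2017 sample used in the HF documentation).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# One text prompt per segmentation query; repeat the image to match.
prompts = ["a cat", "a remote", "a blanket"]
inputs = processor(
    text=prompts, images=[image] * len(prompts),
    padding=True, return_tensors="pt"
)

with torch.no_grad():
    outputs = model(**inputs)

# One low-resolution segmentation logit map per prompt.
logits = outputs.logits
print(logits.shape)
```

Applying a sigmoid to each logit map and resizing it to the input image's resolution yields a per-prompt soft segmentation mask.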
