---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
  - sentence-transformers
  - feature-extraction
  - sentence-similarity
  - transformers
---

# sentence-transformers/clip-ViT-B-32

This is the OpenAI CLIP model ported to sentence-transformers: it maps images and text to a shared vector space.

## Usage (Sentence-Transformers)

Using this model is straightforward once you have sentence-transformers installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

# Load the CLIP model
model = SentenceTransformer('sentence-transformers/clip-ViT-B-32')

# Encode text into the shared image/text vector space
sentences = ["This is an example sentence", "Each sentence is converted"]
embeddings = model.encode(sentences)
print(embeddings)
```
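
Because the model embeds images and text into the same vector space, you can also encode images and compare them to candidate captions. A minimal sketch, assuming a local image file `two_dogs.jpg` (the filename and captions are placeholders):

```python
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/clip-ViT-B-32')

# Encode an image (hypothetical local file) and a few candidate captions
img_emb = model.encode(Image.open('two_dogs.jpg'))
text_emb = model.encode(['Two dogs in the snow',
                         'A cat on a table',
                         'A picture of London at night'])

# Cosine similarity between the image embedding and each caption embedding
cos_scores = util.cos_sim(img_emb, text_emb)
print(cos_scores)
```

The caption with the highest cosine score is the best textual match for the image; the same pattern works for text-to-image search by encoding a collection of images and querying with text.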

## Evaluation Results

For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net

## Full Model Architecture

```
SentenceTransformer(
  (0): CLIPModel(
    (model): CLIP(
      (visual): VisualTransformer()
      (transformer): Transformer()
      (token_embedding): Embedding(49408, 512)
      (ln_final): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
    )
  )
)
```
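
For reference, this listing is the standard PyTorch module summary; a minimal sketch of how to reproduce it yourself:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('sentence-transformers/clip-ViT-B-32')
# Printing the model shows the wrapped CLIPModel module and its submodules
print(model)
```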

Citing & Authors

This model was ported to sentence-transformers from the CLIP model released by OpenAI.

If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "http://arxiv.org/abs/1908.10084",
}
```