Update README.md
README.md CHANGED
@@ -12,7 +12,7 @@ license: apache-2.0
 
 # jina-clip-v1
 
-Jina CLIP: *your CLIP model is also your text retriever!*
+*Jina CLIP: your CLIP model is also your text retriever!*
 
 
 ## Intended Usage & Model Info
@@ -21,7 +21,9 @@ Jina CLIP: *your CLIP model is also your text retriever!*
 
 Traditional text embedding models, such as [jina-embeddings-v2-base-en](https://huggingface.co/jinaai/jina-embeddings-v2-base-en), excel in text-to-text retrieval but are incapable of cross-modal tasks. Models like [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) effectively align image and text embeddings but are not optimized for text-to-text retrieval due to their training methodologies and context limitations.
 
-`jina-clip-v1` bridges this gap by offering robust performance in both domains.
+`jina-clip-v1` bridges this gap by offering robust performance in both domains.
+Its text component matches the retrieval efficiency of `jina-embeddings-v2-base-en`, while its overall architecture sets a new benchmark for cross-modal retrieval.
+This dual capability makes it an excellent tool for multimodal retrieval-augmented generation (MuRAG) applications, enabling seamless text-to-text and text-to-image searches within a single model.
 
 
 ## Data & Parameters
@@ -104,10 +106,10 @@ If you find `jina-clip-v1` useful in your research, please cite the following paper:
 
 ```bibtex
 @misc{2405.20204,
-Author = {Andreas Koukounas and Georgios Mastrapas and Michael Günther and Bo Wang and Scott Martens and Isabelle Mohr and Saba Sturua and Mohammad Kalim Akram and Joan Fontanals Martínez and Saahil Ognawala and Susana Guzman and Maximilian Werk and Nan Wang and Han Xiao},
-Title = {Jina CLIP: Your CLIP Model Is Also Your Text Retriever},
-Year = {2024},
-Eprint = {arXiv:2405.20204},
+    Author = {Andreas Koukounas and Georgios Mastrapas and Michael Günther and Bo Wang and Scott Martens and Isabelle Mohr and Saba Sturua and Mohammad Kalim Akram and Joan Fontanals Martínez and Saahil Ognawala and Susana Guzman and Maximilian Werk and Nan Wang and Han Xiao},
+    Title = {Jina CLIP: Your CLIP Model Is Also Your Text Retriever},
+    Year = {2024},
+    Eprint = {arXiv:2405.20204},
 }
 ```
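For context, the dual capability described in the added paragraphs can be exercised roughly as in the sketch below. This is a minimal illustration, assuming the repository's custom code exposes `encode_text` and `encode_image` helpers through `trust_remote_code` (the convention used by jinaai model cards); the image path is a placeholder, and the model card's own usage section is authoritative.

```python
# Minimal sketch of jina-clip-v1's dual text/image retrieval.
# Assumption: encode_text / encode_image are custom helpers loaded
# via trust_remote_code; verify against the model card before use.
import numpy as np
from transformers import AutoModel

model = AutoModel.from_pretrained("jinaai/jina-clip-v1", trust_remote_code=True)

# Text-to-text: queries and documents both go through the text tower.
text_embeddings = model.encode_text(["A blue cat", "A sketch of a blue cat"])

# Text-to-image: images (placeholder path here) go through the vision tower.
image_embeddings = model.encode_image(["path/to/blue_cat.png"])

def cos_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# One model serves both retrieval modes.
print(cos_sim(text_embeddings[0], text_embeddings[1]))   # text-to-text
print(cos_sim(text_embeddings[0], image_embeddings[0]))  # text-to-image
```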