ZiyiXia committed on
Commit
d0c0b06
·
verified ·
1 Parent(s): e7c285d

Update README.md

Files changed (1)
  1. README.md +32 -26
README.md CHANGED
@@ -8,6 +8,7 @@ tags:
8
  - multimodal-retrieval
9
  - embedding-model
10
  ---
 
11
  <h1 align="center">MegaPairs: Massive Data Synthesis For Universal Multimodal Retrieval</h1>
12
 
13
  <p align="center">
@@ -17,56 +18,56 @@ tags:
17
  <a href="https://github.com/VectorSpaceLab/MegaPairs">
18
  <img alt="Build" src="https://img.shields.io/badge/Github-Code-blue">
19
  </a>
20
- <a href="https://huggingface.co/datasets/JUNJIE99/MegaPairs">
21
  <img alt="Build" src="https://img.shields.io/badge/🤗 Datasets-MegaPairs-yellow">
22
  </p>
23
 
24
  <p align="center">
25
  </a>
26
- <a href="https://huggingface.co/JUNJIE99/MMRet-base">
27
- <img alt="Build" src="https://img.shields.io/badge/🤗 Model-MMRet_base-yellow">
28
  </a>
29
- <a href="https://huggingface.co/JUNJIE99/MMRet-large">
30
- <img alt="Build" src="https://img.shields.io/badge/🤗 Model-MMRet_large-yellow">
31
  </a>
32
- <a href="https://huggingface.co/JUNJIE99/MMRet-MLLM-S1">
33
- <img alt="Build" src="https://img.shields.io/badge/🤗 Model-MMRet_MLLM_S1-yellow">
34
  </a>
35
- <a href="https://huggingface.co/JUNJIE99/MMRet-MLLM-S2">
36
- <img alt="Build" src="https://img.shields.io/badge/🤗 Model-MMRet_MLLM_S2-yellow">
37
  </a>
38
  </p>
39
 
40
  ## News
41
- ```2024-3-4``` 🚀🚀 We have released the MMRet-MLLM models on Hugging Face: [MMRet-MLLM-S1](https://huggingface.co/JUNJIE99/MMRet-MLLM-S1) and [MMRet-MLLM-S2](https://huggingface.co/JUNJIE99/MMRet-MLLM-S2). **MMRet-MLLM-S1** is trained exclusively on our MegaPairs dataset, achieving outstanding performance in composed image retrieval, with an 8.1% improvement on the CIRCO benchmark (mAP@5) over the previous state-of-the-art. **MMRet-MLLM-S2** builds on MMRet-MLLM-S1 with an additional epoch of fine-tuning on the MMEB benchmark training set, delivering enhanced performance across a broader range of multimodal embedding tasks.
42
 
43
- ```2024-12-27``` 🚀🚀 MMRet-CLIP models are released in Huggingface: [MMRet-base](https://huggingface.co/JUNJIE99/MMRet-base) and [MMRet-large](https://huggingface.co/JUNJIE99/MMRet-large).
44
 
45
  ```2024-12-19``` 🎉🎉 Release our paper: [MegaPairs: Massive Data Synthesis For Universal Multimodal Retrieval](https://arxiv.org/pdf/2412.14475).
46
 
47
  ## Release Plan
48
  - [x] Paper
49
- - [x] MMRet-base and MMRet-large models
50
- - [x] MMRet-MLLM model
51
  - [ ] MegaPairs Dataset
52
  - [ ] Evaluation code
53
  - [ ] Fine-tuning code
54
 
55
 
56
  ## Introduction
57
- In this project, we introduce **MegaPairs**, a novel data synthesis method that leverages open-domain images to create *heterogeneous KNN triplets* for universal multimodal retrieval. Our MegaPairs dataset contains over 26 million triplets, and we have trained a series of multimodal retrieval models, **MMRets**, including MMRet-CLIP (base and large) and MMRet-MLLM.
58
 
59
- MMRets achieve state-of-the-art performance on four popular zero-shot composed image retrieval benchmarks and the massive multimodal embedding benchmark (MMEB). Extensive experiments demonstrate the ***efficiency, scalability, and generalization*** features of MegaPairs. Please refer to our [paper](https://arxiv.org/abs/2412.14475) for more details.
60
 
61
  ## Model Usage
62
 
63
- ### 1. MMRet-CLIP Models
64
- You can easily use MMRet-CLIP models based on ```transformers```
65
  ```python
66
  import torch
67
  from transformers import AutoModel
68
 
69
- MODEL_NAME = "JUNJIE99/MMRet-base" # or "JUNJIE99/MMRet-large"
70
 
71
  model = AutoModel.from_pretrained(MODEL_NAME, trust_remote_code=True) # You must set trust_remote_code=True
72
  model.set_processor(MODEL_NAME)
@@ -86,13 +87,18 @@ with torch.no_grad():
86
  print(scores)
87
  ```
88
 
89
- ### 2. MMRet-MLLM Models
 
 
 
 
 
90
  ```python
91
  import torch
92
  from transformers import AutoModel
93
  from PIL import Image
94
 
95
- MODEL_NAME= "JUNJIE99/MMRet-MLLM-S1"
96
 
97
  model = AutoModel.from_pretrained(MODEL_NAME, trust_remote_code=True)
98
  model.eval()
@@ -127,30 +133,30 @@ print(scores)
127
  ## Model Performance
128
  ### Zero-Shot Composed Image Retrieval
129
 
130
- MMRet sets a new performance benchmark in zero-shot composed image retrieval tasks. On the CIRCO benchmark, our MMRet-base model, with only 149 million parameters, surpasses all previous models, including those with 50 times more parameters. Additionally, MMRet-MLLM achieves an 8.1% improvement over the previous state-of-the-art model.
131
 
132
  <img src="./assets/res-zs-cir.png" width="800">
133
 
134
  ### Zero-Shot Performance on MMEB
135
 
136
- MMRet-MLLM achieves state-of-the-art zero-shot performance on the Massive Multimodal Embedding Benchmark (MMEB), despite being trained only on the ImageText-to-Image paradigm. This demonstrates the excellent generalization capability of MegaPairs for multimodal embedding.
137
 
138
  <img src="./assets/res-zs-mmeb.png" width="800">
139
 
140
  ### Fine-Tuning Performance on MMEB
141
 
142
- After fine-tuning on downstream tasks, MMRet-MLLM maintains its leading performance. Notably, it surpasses the previous state-of-the-art by 7.1% on the MMEB out-of-distribution (OOD) set. These results demonstrate the robust generalization capability of MMRet-MLLM and highlight the potential of MegaPairs as foundational training data for universal multimodal embedding.
143
 
144
  <img src="./assets/res-ft-mmeb.png" width="800">
145
 
146
  ### Performance Scaling
147
- MegaPairs showcases **scalability**: MMRet-base improves as training data increases. It also demonstrates **efficiency**: with just 0.5M training samples, MMRet-base significantly outperforms MagicLens, which uses the same CLIP-base backbone and was trained on 36.7M samples.
148
 
149
  <img src="./assets/res-scaling.png" width="800">
150
 
151
 
152
  ## License
153
- The annotations for MegaPairs and the MMRet models are released under the [MIT License](LICENSE). The images in MegaPairs originate from the [Recap-Datacomp](https://huggingface.co/datasets/UCSC-VLAA/Recap-DataComp-1B), which is released under the CC BY 4.0 license.
154
 
155
 
156
 
@@ -164,4 +170,4 @@ If you find this repository useful, please consider giving a star ⭐ and citati
164
  journal={arXiv preprint arXiv:2412.14475},
165
  year={2024}
166
  }
167
- ```
 
8
  - multimodal-retrieval
9
  - embedding-model
10
  ---
11
+
12
  <h1 align="center">MegaPairs: Massive Data Synthesis For Universal Multimodal Retrieval</h1>
13
 
14
  <p align="center">
 
18
  <a href="https://github.com/VectorSpaceLab/MegaPairs">
19
  <img alt="Build" src="https://img.shields.io/badge/Github-Code-blue">
20
  </a>
21
+ <a href="https://huggingface.co/datasets/BAAI/MegaPairs">
22
  <img alt="Build" src="https://img.shields.io/badge/🤗 Datasets-MegaPairs-yellow">
23
  </p>
24
 
25
  <p align="center">
26
  </a>
27
+ <a href="https://huggingface.co/BAAI/BGE-VL-base">
28
+ <img alt="Build" src="https://img.shields.io/badge/🤗 Model-BGE_VL_base-yellow">
29
  </a>
30
+ <a href="https://huggingface.co/BAAI/BGE-VL-large">
31
+ <img alt="Build" src="https://img.shields.io/badge/🤗 Model-BGE_VL_large-yellow">
32
  </a>
33
+ <a href="https://huggingface.co/BAAI/BGE-VL-MLLM-S1">
34
+ <img alt="Build" src="https://img.shields.io/badge/🤗 Model-BGE_VL_MLLM_S1-yellow">
35
  </a>
36
+ <a href="https://huggingface.co/BAAI/BGE-VL-MLLM-S2">
37
+ <img alt="Build" src="https://img.shields.io/badge/🤗 Model-BGE_VL_MLLM_S2-yellow">
38
  </a>
39
  </p>
40
 
41
  ## News
42
+ ```2025-3-4``` 🚀🚀 We have released the BGE-VL-MLLM models on Hugging Face: [BGE-VL-MLLM-S1](https://huggingface.co/BAAI/BGE-VL-MLLM-S1) and [BGE-VL-MLLM-S2](https://huggingface.co/BAAI/BGE-VL-MLLM-S2). **BGE-VL-MLLM-S1** is trained exclusively on our MegaPairs dataset, achieving outstanding performance in composed image retrieval, with an 8.1% improvement on the CIRCO benchmark (mAP@5) over the previous state-of-the-art. **BGE-VL-MLLM-S2** builds on BGE-VL-MLLM-S1 with an additional epoch of fine-tuning on the MMEB benchmark training set, delivering enhanced performance across a broader range of multimodal embedding tasks.
43
 
44
+ ```2024-12-27``` 🚀🚀 The BGE-VL-CLIP models are released on Hugging Face: [BGE-VL-base](https://huggingface.co/BAAI/BGE-VL-base) and [BGE-VL-large](https://huggingface.co/BAAI/BGE-VL-large).
45
 
46
  ```2024-12-19``` 🎉🎉 Release our paper: [MegaPairs: Massive Data Synthesis For Universal Multimodal Retrieval](https://arxiv.org/pdf/2412.14475).
47
 
48
  ## Release Plan
49
  - [x] Paper
50
+ - [x] BGE-VL-base and BGE-VL-large models
51
+ - [x] BGE-VL-MLLM model
52
  - [ ] MegaPairs Dataset
53
  - [ ] Evaluation code
54
  - [ ] Fine-tuning code
55
 
56
 
57
  ## Introduction
58
+ In this work, we introduce **MegaPairs**, a novel data synthesis method that leverages open-domain images to create *heterogeneous KNN triplets* for universal multimodal retrieval. Our MegaPairs dataset contains over 26 million triplets, and we have trained a series of multimodal retrieval models, **BGE-VL**, including BGE-VL-CLIP (base and large) and BGE-VL-MLLM.
59
 
60
+ The BGE-VL models achieve state-of-the-art performance on four popular zero-shot composed image retrieval benchmarks and the Massive Multimodal Embedding Benchmark (MMEB). Extensive experiments demonstrate the ***efficiency, scalability, and generalization*** of MegaPairs. Please refer to our [paper](https://arxiv.org/abs/2412.14475) for more details.
61
 
62
  ## Model Usage
63
 
64
+ ### 1. BGE-VL-CLIP Models
65
+ You can easily use the BGE-VL-CLIP models with ```transformers```:
66
  ```python
67
  import torch
68
  from transformers import AutoModel
69
 
70
+ MODEL_NAME = "BAAI/BGE-VL-base" # or "BAAI/BGE-VL-large"
71
 
72
  model = AutoModel.from_pretrained(MODEL_NAME, trust_remote_code=True) # You must set trust_remote_code=True
73
  model.set_processor(MODEL_NAME)
 
87
  print(scores)
88
  ```
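For readability, here is a minimal end-to-end sketch of the CLIP-style usage whose middle steps (between `model.set_processor(MODEL_NAME)` and the final `print(scores)`) fall outside the diff hunk above. It assumes the remote code loaded via `trust_remote_code=True` exposes an `encode` method that accepts image paths and/or text and returns normalized embeddings; the image paths and instruction text are placeholders, so check the model card for the exact interface.

```python
import torch
from transformers import AutoModel

MODEL_NAME = "BAAI/BGE-VL-base"  # or "BAAI/BGE-VL-large"

# trust_remote_code=True is required: the embedding logic ships with the model repo
model = AutoModel.from_pretrained(MODEL_NAME, trust_remote_code=True)
model.set_processor(MODEL_NAME)
model.eval()

with torch.no_grad():
    # Composed query: a reference image plus a text instruction describing the change
    # (assumed encode(images=..., text=...) interface; paths are placeholders)
    query = model.encode(
        images="./assets/cir_query.png",
        text="Make the background dark, as if the photo was taken at night",
    )

    # Candidate images to rank against the composed query
    candidates = model.encode(
        images=["./assets/cir_candi_1.png", "./assets/cir_candi_2.png"],
    )

    # Embeddings are compared by inner product; a higher score means a better match
    scores = query @ candidates.T

print(scores)
```

With normalized embeddings, as assumed above, the inner product equals cosine similarity, so the highest-scoring candidate is the retrieved image.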
89
 
90
+ See the [demo](./retrieval_demo.ipynb) for a complete example of using BGE-VL for multimodal retrieval.
91
+
92
+
93
+ ### 2. BGE-VL-MLLM Models
94
+
95
+
96
  ```python
97
  import torch
98
  from transformers import AutoModel
99
  from PIL import Image
100
 
101
+ MODEL_NAME = "BAAI/BGE-VL-MLLM-S1"
102
 
103
  model = AutoModel.from_pretrained(MODEL_NAME, trust_remote_code=True)
104
  model.eval()
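  # (The rest of this block lies outside the diff hunk shown here. A hedged outline,
  #  assuming the MLLM variant mirrors the CLIP snippet above: prepare a composed
  #  query from an image plus an instruction with the model's processor, encode the
  #  candidate images, score candidates by the inner product of the embeddings,
  #  e.g. `scores = query_embs @ candidate_embs.T`, and print `scores`.)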
 
133
  ## Model Performance
134
  ### Zero-Shot Composed Image Retrieval
135
 
136
+ BGE-VL sets a new performance benchmark in zero-shot composed image retrieval tasks. On the CIRCO benchmark, our BGE-VL-base model, with only 149 million parameters, surpasses all previous models, including those with 50 times more parameters. Additionally, BGE-VL-MLLM achieves an 8.1% improvement over the previous state-of-the-art model.
137
 
138
  <img src="./assets/res-zs-cir.png" width="800">
139
 
140
  ### Zero-Shot Performance on MMEB
141
 
142
+ BGE-VL-MLLM achieves state-of-the-art zero-shot performance on the Massive Multimodal Embedding Benchmark (MMEB), despite being trained only on the ImageText-to-Image paradigm. This demonstrates the excellent generalization capability of MegaPairs for multimodal embedding.
143
 
144
  <img src="./assets/res-zs-mmeb.png" width="800">
145
 
146
  ### Fine-Tuning Performance on MMEB
147
 
148
+ After fine-tuning on downstream tasks, BGE-VL-MLLM maintains its leading performance. Notably, it surpasses the previous state-of-the-art by 7.1% on the MMEB out-of-distribution (OOD) set. These results demonstrate the robust generalization capability of BGE-VL-MLLM and highlight the potential of MegaPairs as foundational training data for universal multimodal embedding.
149
 
150
  <img src="./assets/res-ft-mmeb.png" width="800">
151
 
152
  ### Performance Scaling
153
+ MegaPairs showcases **scalability**: BGE-VL-base improves as training data increases. It also demonstrates **efficiency**: with just 0.5M training samples, BGE-VL-base significantly outperforms MagicLens, which uses the same CLIP-base backbone and was trained on 36.7M samples.
154
 
155
  <img src="./assets/res-scaling.png" width="800">
156
 
157
 
158
  ## License
159
+ The annotations for MegaPairs and the BGE-VL models are released under the [MIT License](LICENSE). The images in MegaPairs originate from the [Recap-DataComp-1B](https://huggingface.co/datasets/UCSC-VLAA/Recap-DataComp-1B) dataset, which is released under the CC BY 4.0 license.
160
 
161
 
162
 
 
170
  journal={arXiv preprint arXiv:2412.14475},
171
  year={2024}
172
  }
173
+ ```