Shitao committed

Commit 7fc4958
1 Parent(s): aac0c78

Update README.md

Files changed (1):
  1. README.md +35 -17
README.md CHANGED

@@ -4,7 +4,7 @@ tags:
 - sentence-transformers
 - feature-extraction
 - sentence-similarity
-
+license: mit
 ---
 
 For more details please refer to our github repo: https://github.com/FlagOpen/FlagEmbedding
@@ -25,6 +25,17 @@ This allows you to obtain token weights (similar to the BM25) without any additi
 Utilizing the re-ranking model (e.g., [bge-reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker), [cohere-reranker](https://txt.cohere.com/rerank/)) after retrieval can further filter the selected text.
 
 
+## Model Specs
+
+| Model Name | Dimension | Sequence Length |
+|:----:|:---:|:---:|
+| [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | 1024 | 8192 |
+| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 |
+| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 |
+| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 |
+
+
+
 ## FAQ
 
 **1. Introduction for different retrieval methods**
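The context line above suggests dropping a re-ranker (e.g. bge-reranker) behind the retriever to filter the selected text. As a side note to the diff rather than part of it, here is a minimal sketch of that step, assuming the FlagReranker helper shipped in the same FlagEmbedding package; the checkpoint name is illustrative:

```python
# Minimal re-ranking sketch (not from the commit). Assumes the FlagReranker
# helper from the FlagEmbedding package; 'BAAI/bge-reranker-large' is one of
# the released bge-reranker checkpoints, used here as an example.
from FlagEmbedding import FlagReranker

reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True)  # fp16: faster, slightly less precise

query = "What is BGE M3?"
candidates = [
    "BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
    "BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document",
]

# compute_score takes [query, passage] pairs; higher scores mean more relevant.
scores = reranker.compute_score([[query, passage] for passage in candidates])
ranked = sorted(zip(scores, candidates), reverse=True)
print(ranked[0])  # keep only the top passages after retrieval
```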
@@ -72,13 +83,17 @@ pip install -U FlagEmbedding
 ```python
 from FlagEmbedding import BGEM3FlagModel
 
-model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
+model = BGEM3FlagModel('BAAI/bge-m3',
+                       use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
 
 sentences_1 = ["What is BGE M3?", "Definition of BM25"]
 sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
                "BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
 
-embeddings_1 = model.encode(sentences_1)['dense_vecs']
+embeddings_1 = model.encode(sentences_1,
+                            batch_size=12,
+                            max_length=8192, # If you don't need such a long length, you can set a smaller value to speed up the encoding process.
+                            )['dense_vecs']
 embeddings_2 = model.encode(sentences_2)['dense_vecs']
 similarity = embeddings_1 @ embeddings_2.T
 print(similarity)
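Side note on the snippet above (not part of the commit): `embeddings_1 @ embeddings_2.T` is a plain matrix product, which behaves like cosine similarity when the dense vectors are L2-normalized, as the README's use of a raw dot product suggests. A minimal numpy sketch with stand-in 1024-dimensional vectors (the dimension listed in the Model Specs table):

```python
# Minimal sketch (not from the commit): with L2-normalized rows, A @ B.T is a
# cosine-similarity matrix, which is what `embeddings_1 @ embeddings_2.T` computes.
import numpy as np

rng = np.random.default_rng(0)
queries = rng.normal(size=(2, 1024))    # stand-ins for encoded sentences_1
passages = rng.normal(size=(2, 1024))   # stand-ins for encoded sentences_2
queries /= np.linalg.norm(queries, axis=1, keepdims=True)
passages /= np.linalg.norm(passages, axis=1, keepdims=True)

sim = queries @ passages.T              # sim[i, j] = cosine(queries[i], passages[j])
best = sim.argmax(axis=1)               # index of the best-matching passage per query
print(sim)
print(best)
```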
@@ -148,13 +163,17 @@ sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical
                "BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
 
 sentence_pairs = [[i,j] for i in sentences_1 for j in sentences_2]
-print(model.compute_score(sentence_pairs))
+
+print(model.compute_score(sentence_pairs,
+                          max_passage_length=128, # a smaller max length leads to a lower latency
+                          weights_for_different_modes=[0.4, 0.2, 0.4])) # weights_for_different_modes(w) is used to do weighted sum: w[0]*dense_score + w[1]*sparse_score + w[2]*colbert_score
+
 # {
-# 'colbert': [0.7796499729156494, 0.4621465802192688, 0.4523794651031494, 0.7898575067520142],
-# 'sparse': [0.05865478515625, 0.0026397705078125, 0.0, 0.0540771484375],
-# 'dense': [0.6259765625, 0.347412109375, 0.349853515625, 0.67822265625],
-# 'sparse+dense': [0.5266395211219788, 0.2692706882953644, 0.2691181004047394, 0.563307523727417],
-# 'colbert+sparse+dense': [0.6366440653800964, 0.3531297743320465, 0.3487969636917114, 0.6618075370788574]
+# 'colbert': [0.7796499729156494, 0.4621465802192688, 0.4523794651031494, 0.7898575067520142],
+# 'sparse': [0.195556640625, 0.00879669189453125, 0.0, 0.1802978515625],
+# 'dense': [0.6259765625, 0.347412109375, 0.349853515625, 0.67822265625],
+# 'sparse+dense': [0.482503205537796, 0.23454029858112335, 0.2332356721162796, 0.5122477412223816],
+# 'colbert+sparse+dense': [0.6013619303703308, 0.3255828022956848, 0.32089319825172424, 0.6232916116714478]
 # }
 ```
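Side note (not part of the commit): the weighting comment in the hunk above can be checked against the printed scores. Assuming each combined score is the weighted sum normalized by the weights that participate, the first pair's numbers reproduce to within floating-point rounding:

```python
# Sanity check (not from the commit): re-derive the first pair's combined scores
# from the 'dense', 'sparse' and 'colbert' values above, assuming each combined
# score is the weighted sum divided by the sum of the participating weights.
w_dense, w_sparse, w_colbert = 0.4, 0.2, 0.4
dense, sparse, colbert = 0.6259765625, 0.195556640625, 0.7796499729156494

sparse_dense = (w_dense * dense + w_sparse * sparse) / (w_dense + w_sparse)
all_modes = w_dense * dense + w_sparse * sparse + w_colbert * colbert  # weights sum to 1.0

print(sparse_dense)  # ~0.4825, matching 'sparse+dense'[0]
print(all_modes)     # ~0.6014, matching 'colbert+sparse+dense'[0]
```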
@@ -172,8 +191,10 @@ print(model.compute_score(sentence_pairs))
 ![avatar](./imgs/mkqa.jpg)
 
 - Long Document Retrieval
-
-![avatar](./imgs/long.jpg)
+  - MLDR:
+  ![avatar](./imgs/long.jpg)
+  - NarrativeQA:
+  ![avatar](./imgs/nqa.jpg)
 
 
 ## Training
@@ -191,8 +212,8 @@ Refer to our [report](https://github.com/FlagOpen/FlagEmbedding/blob/master/Flag
 ## Models
 
 We release two versions:
-- [BAAI/bge-m3-unsupervised](https://huggingface.co/BAAI/bge-m3-unsupervised): the model after contrastive learning in a large-scale dataset
-- [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3): the final model fine-tuned from BAAI/bge-m3-unsupervised
+- BAAI/bge-m3-unsupervised: the model after contrastive learning in a large-scale dataset
+- BAAI/bge-m3: the final model fine-tuned from BAAI/bge-m3-unsupervised
 
 ## Acknowledgement
 
@@ -204,7 +225,4 @@ If you find this repository useful, please consider giving a star :star: and cit
 
 ```
 
-```
-
-
-
+```