---
task_categories:
- question-answering
language:
- en
tags:
- TREC-RAG
- RAG
- MSMARCO
- MSMARCOV2.1
- Snowflake
- arctic
- arctic-embed
- arctic-embed-v1.5
- MRL
pretty_name: TREC-RAG-Embedding-Baseline
size_categories:
- 100M<n<1B
configs:
- config_name: corpus
  data_files:
  - split: train
    path: corpus/*
---

# Snowflake Arctic Embed M V1.5 Embeddings for MSMARCO V2.1 for TREC-RAG

This dataset contains the embeddings for the MSMARCO-V2.1 dataset, which is used as the corpus for [TREC RAG](https://trec-rag.github.io/).
All embeddings are created using [Snowflake's Arctic Embed M v1.5](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v1.5) and are intended to serve as a simple baseline for dense retrieval methods.
It's worth noting that Snowflake's Arctic Embed M v1.5 is optimized for efficient embeddings and thus supports embedding truncation and quantization. More details on the model release can be found in this [blog post](https://www.snowflake.com/engineering-blog/arctic-embed-m-v1-5-enterprise-retrieval/), along with methods for [quantization and compression](https://github.com/Snowflake-Labs/arctic-embed/blob/main/compressed_embeddings_examples/score_arctic_embed_m_v1dot5_with_quantization.ipynb).
Note that the embeddings are not normalized, so you will need to normalize them before use.
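A minimal sketch of both steps (normalization, and optional truncation to 256 dimensions), assuming `doc['embedding']` holds the raw 768-dimensional vector as stored in this dataset:

```python
import numpy as np

def prepare_embedding(raw_embedding, dim=None):
    """L2-normalize a stored embedding, optionally truncating it first (e.g. to 256 dims)."""
    vec = np.asarray(raw_embedding, dtype=np.float32)
    if dim is not None:
        vec = vec[:dim]  # keep only the leading dimensions before normalizing
    return vec / np.linalg.norm(vec)

# Placeholder vector standing in for doc['embedding'] from this dataset
raw = np.random.rand(768).astype(np.float32)
full_vec = prepare_embedding(raw)            # 768-dim, unit length
small_vec = prepare_embedding(raw, dim=256)  # 256-dim truncated variant, unit length
```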


## Retrieval Performance
Retrieval performance for TREC DL21-23, MSMARCO V2 Dev, and the Raggy Queries can be found below, with BM25 as a baseline. For both systems, retrieval is performed at the segment level, and the document score is the maximum of its passage scores (Doc Score = Max(passage score)).
Retrieval is done via dot product in BF16. Since the M v1.5 model supports vector truncation, we also report results with embeddings truncated to 256 dimensions.
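A minimal sketch of that max-passage-score aggregation, using a hypothetical `passage_scores` mapping from segment id to dot-product score and a hypothetical `doc_id_of` helper (the exact MSMARCO V2.1 segment id scheme may differ):

```python
from collections import defaultdict

def doc_scores_from_passages(passage_scores, doc_id_of):
    """Aggregate segment-level scores into document scores: Doc Score = max(passage score)."""
    scores = defaultdict(lambda: float("-inf"))
    for segment_id, score in passage_scores.items():
        doc_id = doc_id_of(segment_id)
        scores[doc_id] = max(scores[doc_id], score)
    return dict(scores)

# Toy example with made-up ids; doc_id_of strips a "#segment" suffix
passage_scores = {"docA#0": 0.71, "docA#3": 0.80, "docB#1": 0.65}
print(doc_scores_from_passages(passage_scores, doc_id_of=lambda s: s.split("#")[0]))
# -> {'docA': 0.8, 'docB': 0.65}
```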

### NDCG@10
| Dataset | BM25 | Arctic-M-V1.5 (768 Dimensions) | Arctic-M-V1.5 (256 Dimensions) |
|---|---|---|---|
| Deep Learning 2021 | 0.5778 | 0.6936 | 0.69392 |
| Deep Learning 2022 | 0.3576 | 0.55199 | 0.55608 |
| Deep Learning 2023 | 0.3356 | 0.46963 | 0.45196 |
| msmarcov2-dev | N/A | 0.346 | 0.34074 |
| msmarcov2-dev2 | N/A | 0.34518 | 0.34339 |
| Raggy Queries | 0.4227 | 0.57439 | 0.56686 |

### Recall@100
| Dataset | BM25 | Arctic-M-V1.5 (768 Dimensions) | Arctic-M-V1.5 (256 Dimensions) |
|---|---|---|---|
| Deep Learning 2021 | 0.3811 | 0.43 | 0.42245 |
| Deep Learning 2022 | 0.233 | 0.32125 | 0.3165 |
| Deep Learning 2023 | 0.3049 | 0.37622 | 0.36089 |
| msmarcov2-dev | 0.6683 | 0.85435 | 0.84985 |
| msmarcov2-dev2 | 0.6771 | 0.8576 | 0.8526 |
| Raggy Queries | 0.2807 | 0.36915 | 0.36149 |


### Recall@1000
| Dataset | BM25 | Arctic-M-V1.5 (768 Dimensions) | Arctic-M-V1.5 (256 Dimensions) |
|---|---|---|---|
| Deep Learning 2021 | 0.7115 | 0.74895 | 0.73511 |
| Deep Learning 2022 | 0.479 | 0.55413 | 0.54499 |
| Deep Learning 2023 | 0.5852 | 0.62262 | 0.61199 |
| msmarcov2-dev | 0.8528 | 0.94156 | 0.94014 |
| msmarcov2-dev2 | 0.8577 | 0.94277 | 0.94047 |
| Raggy Queries | 0.5745 | 0.64527 | 0.63826 |


## Loading the dataset

### Loading the document embeddings

You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset("Snowflake/msmarco-v2.1-snowflake-arctic-embed-m-v1.5", split="train")
```

Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset("Snowflake/msmarco-v2.1-snowflake-arctic-embed-m-v1.5", split="train", streaming=True)
for doc in docs:
    doc_id = doc['docid']
    url = doc['url']
    text = doc['text']
    emb = doc['embedding']
```


Note: the full dataset corpus is ~620 GB, so it will take a while to download and may not fit on some devices.

## Search
A full search example (over the first 1,000 segments, returning the top 100 hits):
```python
from datasets import load_dataset
import torch
from transformers import AutoModel, AutoTokenizer
import numpy as np


num_docs = 1000  # number of segments to load for this small example
top_k = 100      # number of hits to return

docs_stream = load_dataset("Snowflake/msmarco-v2.1-snowflake-arctic-embed-m-v1.5", split="train", streaming=True)

docs = []
doc_embeddings = []

# Stream the first num_docs segments and collect their precomputed embeddings
for doc in docs_stream:
    docs.append(doc)
    doc_embeddings.append(doc['embedding'])
    if len(docs) >= num_docs:
        break

doc_embeddings = torch.tensor(np.asarray(doc_embeddings), dtype=torch.float32)

tokenizer = AutoTokenizer.from_pretrained('Snowflake/snowflake-arctic-embed-m-v1.5')
model = AutoModel.from_pretrained('Snowflake/snowflake-arctic-embed-m-v1.5', add_pooling_layer=False)
model.eval()

query_prefix = 'Represent this sentence for searching relevant passages: '
queries  = ['how do you clean smoke off walls']
queries_with_prefix = ["{}{}".format(query_prefix, i) for i in queries]
query_tokens = tokenizer(queries_with_prefix, padding=True, truncation=True, return_tensors='pt', max_length=512)

# Compute token embeddings
with torch.no_grad():
    query_embeddings = model(**query_tokens)[0][:, 0]


# Normalize embeddings (the stored document embeddings are not pre-normalized)
query_embeddings = torch.nn.functional.normalize(query_embeddings, p=2, dim=1)
doc_embeddings = torch.nn.functional.normalize(doc_embeddings, p=2, dim=1)

# Compute dot scores between the query embedding and all document embeddings
dot_scores = np.matmul(query_embeddings.numpy(), doc_embeddings.numpy().T)[0]
top_k_hits = np.argpartition(dot_scores, -top_k)[-top_k:].tolist()

# Sort top_k_hits by dot score
top_k_hits.sort(key=lambda x: dot_scores[x], reverse=True)

# Print results
print("Query:", queries[0])
for hit in top_k_hits:
    print(docs[hit]['docid'])
    print(docs[hit]['text'])
    print(docs[hit]['url'], "\n")
```