spacemanidol committed on
Commit
211ddb6
1 Parent(s): 1817e1a

Update README.md

Files changed (1)
  1. README.md +112 -3
README.md CHANGED

---
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- TREC-RAG
- RAG
- MSMARCO
- MSMARCOV2.1
- Snowflake
- arctic
- arctic-embed
- arctic-embed-v1.5
- MRL
pretty_name: TREC-RAG-Embedding-Baseline
size_categories:
- 100M<n<1B
configs:
- config_name: corpus
  data_files:
  - split: train
    path: corpus/*
---

# Snowflake Arctic Embed M V1.5 Embeddings for MSMARCO V2.1 for TREC-RAG

This dataset contains the embeddings for the MSMARCO-V2.1 dataset, which is used as the corpus for [TREC RAG](https://trec-rag.github.io/).
All embeddings were created with [Snowflake's Arctic Embed M v1.5](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v1.5) and are intended to serve as a simple baseline for dense retrieval methods.
Arctic Embed M v1.5 is optimized for efficient embeddings and supports embedding truncation and quantization. More details on the model release can be found in this [blog](https://www.snowflake.com/engineering-blog/arctic-embed-m-v1-5-enterprise-retrieval/), along with methods for [quantization and compression](https://github.com/Snowflake-Labs/arctic-embed/blob/main/compressed_embeddings_examples/score_arctic_embed_m_v1dot5_with_quantization.ipynb).
Note that the embeddings are not normalized, so you will need to normalize them before use.
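Because the stored vectors are unnormalized, dot-product scoring only behaves like cosine similarity after L2 normalization. Below is a minimal sketch of that step (and of re-normalizing after truncation); the 256-dimension cut is an assumption based on the truncation support described in the blog post above, not something stored in this dataset.

```python
import numpy as np
from datasets import load_dataset

def normalize(vectors: np.ndarray) -> np.ndarray:
    """L2-normalize each row of a batch of embeddings."""
    return vectors / np.linalg.norm(vectors, axis=1, keepdims=True)

# Grab a handful of rows just to illustrate normalization.
stream = load_dataset("Snowflake/msmarco-v2.1-snowflake-arctic-embed-m-v1.5", split="train", streaming=True)
sample = [doc['embedding'] for _, doc in zip(range(8), stream)]
emb = np.asarray(sample, dtype=np.float32)

emb_normed = normalize(emb)  # full-dimensional vectors, unit length

# Assumption: Matryoshka-style truncation to the first 256 dimensions;
# truncated vectors must be re-normalized before computing dot products.
emb_256 = normalize(emb[:, :256])
```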

## Loading the dataset

### Loading the document embeddings

You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset("Snowflake/msmarco-v2.1-snowflake-arctic-embed-m-v1.5", split="train")
```

Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset("Snowflake/msmarco-v2.1-snowflake-arctic-embed-m-v1.5", split="train", streaming=True)
for doc in docs:
    doc_id = doc['docid']
    url = doc['url']
    text = doc['text']
    emb = doc['embedding']
```

Note: the full dataset corpus is ~620 GB, so it will take a while to download and may not fit on some devices.
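If you plan to search more than a small sample, it is usually easier to stream the corpus once and push the (normalized) embeddings into a vector index rather than keep everything in Python lists. Below is a minimal sketch using FAISS; the 50,000-document cap, the batch size, and the choice of a flat inner-product index are illustrative assumptions, not part of this dataset.

```python
import faiss
import numpy as np
from datasets import load_dataset

docs_stream = load_dataset("Snowflake/msmarco-v2.1-snowflake-arctic-embed-m-v1.5", split="train", streaming=True)

index = None
metadata = []  # keep docid/url alongside the vectors
batch, max_docs, batch_size = [], 50_000, 1024  # illustrative limits

for doc in docs_stream:
    batch.append(doc['embedding'])
    metadata.append((doc['docid'], doc['url']))
    if len(batch) == batch_size or len(metadata) >= max_docs:
        vecs = np.asarray(batch, dtype=np.float32)
        faiss.normalize_L2(vecs)  # embeddings are stored unnormalized
        if index is None:
            index = faiss.IndexFlatIP(vecs.shape[1])  # inner product == cosine after normalization
        index.add(vecs)
        batch = []
    if len(metadata) >= max_docs:
        break

print(index.ntotal, "vectors indexed")
```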

## Search
A full search example (retrieving the top 100 hits from the first 1,000 documents):
```python
from datasets import load_dataset
import torch
from transformers import AutoModel, AutoTokenizer
import numpy as np


top_k = 100      # number of hits to return
num_docs = 1000  # number of documents to search over in this example
docs_stream = load_dataset("Snowflake/msmarco-v2.1-snowflake-arctic-embed-m-v1.5", split="train", streaming=True)

docs = []
doc_embeddings = []

for doc in docs_stream:
    docs.append(doc)
    doc_embeddings.append(doc['embedding'])
    if len(docs) >= num_docs:
        break

doc_embeddings = np.asarray(doc_embeddings, dtype=np.float32)

tokenizer = AutoTokenizer.from_pretrained('Snowflake/snowflake-arctic-embed-m-v1.5')
model = AutoModel.from_pretrained('Snowflake/snowflake-arctic-embed-m-v1.5', add_pooling_layer=False)
model.eval()

query_prefix = 'Represent this sentence for searching relevant passages: '
queries = ['how do you clean smoke off walls']
queries_with_prefix = ["{}{}".format(query_prefix, i) for i in queries]
query_tokens = tokenizer(queries_with_prefix, padding=True, truncation=True, return_tensors='pt', max_length=512)

# Compute query embeddings (CLS token of the last layer)
with torch.no_grad():
    query_embeddings = model(**query_tokens)[0][:, 0]

# Normalize embeddings (the stored document embeddings are not normalized)
query_embeddings = torch.nn.functional.normalize(query_embeddings, p=2, dim=1).numpy()
doc_embeddings = doc_embeddings / np.linalg.norm(doc_embeddings, axis=1, keepdims=True)

# Compute dot scores between the query embedding and all document embeddings
dot_scores = np.matmul(query_embeddings, doc_embeddings.transpose())[0]
top_k_hits = np.argpartition(dot_scores, -top_k)[-top_k:].tolist()

# Sort top_k_hits by dot score
top_k_hits.sort(key=lambda x: dot_scores[x], reverse=True)

# Print results
print("Query:", queries[0])
for doc_id in top_k_hits:
    print(docs[doc_id]['docid'])
    print(docs[doc_id]['text'])
    print(docs[doc_id]['url'], "\n")
```
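The truncation support mentioned above can also be applied to this in-memory example: cut both query and document vectors to a prefix of their dimensions and re-normalize before scoring. The sketch below continues from the variables in the block above; the 256-dimension cut-off is an assumption taken from the model's truncation documentation, not a property of this dataset.

```python
# Continues from the example above (query_embeddings and doc_embeddings are
# already NumPy arrays). Assumption: truncate to the first 256 dimensions,
# then re-normalize, as described in the linked blog/notebook.
dim = 256
q_trunc = query_embeddings[:, :dim]
d_trunc = doc_embeddings[:, :dim]
q_trunc = q_trunc / np.linalg.norm(q_trunc, axis=1, keepdims=True)
d_trunc = d_trunc / np.linalg.norm(d_trunc, axis=1, keepdims=True)

trunc_scores = np.matmul(q_trunc, d_trunc.transpose())[0]
print("Top hit (256-dim):", docs[int(np.argmax(trunc_scores))]['docid'])
```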