Tasks: Question Answering
Modalities: Text
Formats: parquet
Languages: English
Size: 10M - 100M
spacemanidol committed · 6d1a3ae
Parent(s): c9db5fd

Upload 9 files
Browse files
- scripts/.DS_Store (+0 -0)
- scripts/README.md (+79 -0)
- scripts/convert_qrels_to_json.py (+21 -0)
- scripts/generate_doc_embeddings.py (+79 -0)
- scripts/generate_query_embeddings.py (+66 -0)
- scripts/get_data.sh (+21 -0)
- scripts/merge_retrieved_shard.py (+79 -0)
- scripts/retrieve.sh (+14 -0)
- scripts/retrieve_from_shard.py (+101 -0)
scripts/.DS_Store
ADDED
Binary file (6.15 kB)
scripts/README.md
ADDED
@@ -0,0 +1,79 @@
# TREC RAG baselines using arctic-l and arctic-m-v1.5

First, download the data, including the documents, queries, and qrels:

```sh
bash get_data.sh
```

Next, convert the qrels into JSON using the script below:

```sh
python convert_qrels_to_json.py
```
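
For reference, each line of a TREC qrels file has the form `<query_id> <iteration> <doc_id> <relevance_label>`, and the script nests these into a JSON object keyed by query id. A minimal sketch of the mapping, with made-up ids:

```python
# Hypothetical qrels line; the ids here are illustrative only.
line = "1001 0 msmarco_v2.1_doc_00_42 2"
qid, _, doc_id, label = line.split()
qrels = {qid: {doc_id: int(label)}}
print(qrels)  # {'1001': {'msmarco_v2.1_doc_00_42': 2}}
```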

## Generate the doc and query embeddings

After that, generate the query embeddings using the command below:

```sh
python generate_query_embeddings.py
```
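
Note that `generate_query_embeddings.py` (included below) prepends the arctic-embed query prompt, `Represent this sentence for searching relevant passages: `, to each query before encoding; documents are embedded without any prefix.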

Then generate embeddings for each document shard. This takes roughly 20 minutes per shard on a single H100, so feel free to parallelize. Make sure you have at least 600 GB of free disk space.

```sh
python generate_doc_embeddings.py
```

## Retrieval Runs

Once you have query and document embeddings, run retrieval. Given the size of the vectors, retrieval is done in shards: for each query set, retrieve the top_n documents from each shard. Feel free to parallelize.

```sh
python retrieve_from_shard.py <path to embeddings> <query_embedding_prefix> <shard> <num_retrieved> <num_dim> <use_faiss>
```
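
For example, a hypothetical invocation that retrieves the top 100 documents from shard 3 of the 768-dimensional arctic-m-v1.5 embeddings using the FAISS GPU path (the path and prefix here are illustrative): `python retrieve_from_shard.py snowflake-arctic-embed-m-v1.5/ snowflake-arctic-embed-m-v1.5- 3 100 768 1`.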
Alternatively, you can just run retrieve.sh in the background.

Finally, merge the per-shard results into a single run and evaluate it:

```sh
python merge_retrieved_shard.py <shard_retrieved_results> <output_filename> <top_n_docs> <qrel json> <metric to get per_query breakdown>
```
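
For example, to merge 768-dimensional arctic-m-v1.5 DL21 shard results into a top-100 run with a per-query NDCG@10 breakdown (filenames illustrative, following the naming conventions of the scripts below): `python merge_retrieved_shard.py snowflake-arctic-embed-m-v1.5-768-topics.dl21.parquet dl21-run.pkl 100 dl21-doc-msmarco-v2.1.json NDCG@10`.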
## Retrieval Scores

Embedding dimension is shown in parentheses next to each model.

### NDCG@10

| Dataset | BM25 | GTE-Large-v1.5 (1024) | Arctic-L (1024) | Arctic-M-V1.5 (768) | Arctic-M-V1.5 (256) | Arctic-M-V1.5 (128) | Cohere Embed3 - Trunc 128 |
|--------------------|--------|----------------|----------|---------------|---------------|---------------|---------------------------|
| Deep Learning 2021 | 0.5778 | 0.71928 | 0.70682 | 0.6936 | 0.69392 | 0.60578 | 0.6962 |
| Deep Learning 2022 | 0.3576 | 0.53576 | 0.5444 | 0.55199 | 0.55608 | 0.47348 | 0.5396 |
| Deep Learning 2023 | 0.3356 | 0.46423 | 0.47372 | 0.46963 | 0.45196 | 0.32789 | 0.4473 |
| msmarcov2-dev | N/A | 0.3538 | 0.35844 | 0.346 | 0.34074 | 0.28499 | N/A |
| msmarcov2-dev2 | N/A | 0.34698 | 0.35821 | 0.34518 | 0.34339 | 0.29606 | N/A |
| Raggy Queries | 0.4227 | 0.56782 | 0.57759 | 0.57439 | 0.56686 | 0.47555 | N/A |

### Recall@100

| Dataset | BM25 | GTE-Large-v1.5 (1024) | Arctic-L (1024) | Arctic-M-V1.5 (768) | Arctic-M-V1.5 (256) | Arctic-M-V1.5 (128) | Cohere Embed3 - Trunc 128 |
|--------------------|--------|----------------|----------|---------------|---------------|---------------|---------------------------|
| Deep Learning 2021 | 0.3811 | 0.4156 | 0.41361 | 0.43 | 0.42245 | 0.3488 | 0.3914 |
| Deep Learning 2022 | 0.233 | 0.31173 | 0.31351 | 0.32125 | 0.3165 | 0.26714 | 0.3019 |
| Deep Learning 2023 | 0.3049 | 0.35236 | 0.34793 | 0.37622 | 0.36089 | 0.28314 | 0.3438 |
| msmarcov2-dev | 0.6683 | 0.85135 | 0.85131 | 0.85435 | 0.84985 | 0.76201 | N/A |
| msmarcov2-dev2 | 0.6771 | 0.84333 | 0.84767 | 0.8576 | 0.8526 | 0.78987 | N/A |
| Raggy Queries | 0.2807 | 0.35125 | 0.36228 | 0.36915 | 0.36149 | 0.30272 | N/A |

### Recall@1000

| Dataset | BM25 | GTE-Large-v1.5 (1024) | Arctic-L (1024) | Arctic-M-V1.5 (768) | Arctic-M-V1.5 (256) | Arctic-M-V1.5 (128) | Cohere Embed3 - Trunc 128 |
|--------------------|--------|----------------|----------|---------------|---------------|---------------|---------------------------|
| Deep Learning 2021 | 0.7115 | 0.73185 | 0.7193 | 0.74895 | 0.73511 | 0.63253 | 0.7188 |
| Deep Learning 2022 | 0.479 | 0.55174 | 0.54566 | 0.55413 | 0.54499 | 0.47823 | 0.5558 |
| Deep Learning 2023 | 0.5852 | 0.6167 | 0.59577 | 0.62262 | 0.61199 | 0.49188 | 0.6025 |
| msmarcov2-dev | 0.8528 | 0.93549 | 0.93966 | 0.94156 | 0.94014 | 0.87705 | N/A |
| msmarcov2-dev2 | 0.8577 | 0.93997 | 0.93947 | 0.94277 | 0.94047 | 0.91683 | N/A |
| Raggy Queries | 0.5745 | 0.63515 | 0.63092 | 0.64527 | 0.63826 | 0.55002 | N/A |
scripts/convert_qrels_to_json.py
ADDED
@@ -0,0 +1,21 @@
import json

# TREC-format qrels files downloaded by get_data.sh.
qrel_filenames = [
    "qrels.dl21-doc-msmarco-v2.1.txt",
    "qrels.msmarco-v2.1-doc.dev.txt",
    "qrels.dl22-doc-msmarco-v2.1.txt",
    "qrels.msmarco-v2.1-doc.dev2.txt",
    "qrels.dl23-doc-msmarco-v2.1.txt",
    "qrels.rag24.raggy-dev.txt",
]
for filename in qrel_filenames:
    # Strip the leading "qrels." and the ".txt" extension for the output name.
    short_filename = filename.split("qrels.")[1][:-4]
    qrels: dict = {}
    with open(filename, "r") as f:
        for line in f:
            # Each qrels line is: <query_id> <iteration> <doc_id> <relevance_label>
            qid, _, doc_id, label = line.strip().split()
            if qid not in qrels:
                qrels[qid] = {}
            qrels[qid][doc_id] = int(label)
    with open(f"{short_filename}.json", "w") as w:
        w.write(json.dumps(qrels))
scripts/generate_doc_embeddings.py
ADDED
@@ -0,0 +1,79 @@
import json
import os

import pyarrow as pa
import pyarrow.parquet as pq
import torch
from tqdm import tqdm
from transformers import AutoModel, AutoTokenizer

file_name_prefix = "msmarco_v2.1_doc_segmented_"
path = "/home/mltraining/msmarco_v2.1_doc_segmented/"
model_names = [
    "Snowflake/snowflake-arctic-embed-l",
    "Snowflake/snowflake-arctic-embed-m-v1.5",
]
for model_name in model_names:
    print(f"Running doc embeddings using {model_name}")
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(
        model_name,
        add_pooling_layer=False,
    )
    model.eval()
    device = "cuda"
    model = model.to(device)
    dir_path = f"{path}{model_name.split('/')[1]}/"
    if not os.path.exists(dir_path):
        os.makedirs(dir_path)
    # The corpus is split into 59 JSONL shards, numbered 00 through 58.
    for i in range(0, 59):
        try:
            filename = f"{path}{file_name_prefix}{i:02}.json"
            filename_out = f"{dir_path}{i:02}.parquet"
            print(f"Starting doc embeddings on {filename}")
            data = []
            ids = []
            with open(filename, "r") as f:
                for line in tqdm(f, desc="Processing JSONL file"):
                    j = json.loads(line)
                    doc_id = j["docid"]
                    text = j["segment"]
                    title = j["title"]
                    heading = j["headings"]  # loaded but not used in the embedded text
                    doc_text = "{} {}".format(title, text)
                    data.append(doc_text)
                    ids.append(doc_id)

            print("Documents fully loaded")
            batch_size = 512
            chunks = [data[i : i + batch_size] for i in range(0, len(data), batch_size)]
            embds = []
            for chunk in tqdm(chunks, desc="inference"):
                tokens = tokenizer(
                    chunk,
                    padding=True,
                    truncation=True,
                    return_tensors="pt",
                    max_length=512,
                ).to(device)
                with torch.autocast(
                    "cuda", dtype=torch.bfloat16
                ), torch.inference_mode():
                    # The [CLS] token vector (first position) is the document embedding.
                    embds.append(
                        model(**tokens)[0][:, 0]
                        .cpu()
                        .to(torch.float32)
                        .detach()
                        .numpy()
                    )
            del data, chunks
            embds = [item for batch in embds for item in batch]
            out_data = []
            for emb, doc_id in zip(embds, ids):
                out_data.append({"doc_id": doc_id, "embedding": emb})
            del embds, ids
            table = pa.Table.from_pylist(out_data)
            del out_data
            pq.write_table(table, filename_out)
        except Exception as e:
            # Don't fail the whole run if one shard is missing or malformed,
            # but report the failure rather than skipping it silently.
            print(f"Skipping shard {i:02}: {e}")
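
A quick way to sanity-check a finished shard is to read it back and confirm the row count and embedding width; a minimal sketch, with the path assumed from the defaults above:

```python
import pyarrow.parquet as pq

# Hypothetical output shard written by the script above.
t = pq.read_table("/home/mltraining/msmarco_v2.1_doc_segmented/snowflake-arctic-embed-l/00.parquet")
print(t.num_rows)                      # number of passages in the shard
print(len(t["embedding"][0].as_py()))  # embedding width, 1024 for arctic-l
```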
scripts/generate_query_embeddings.py
ADDED
@@ -0,0 +1,66 @@
import pyarrow as pa
import pyarrow.parquet as pq
import torch
from transformers import AutoModel, AutoTokenizer

# arctic-embed models expect this prompt prepended to queries (but not documents).
query_prefix = "Represent this sentence for searching relevant passages: "
topic_file_names = [
    "topics.dl21.txt",
    "topics.dl22.txt",
    "topics.dl23.txt",
    "topics.msmarco-v2-doc.dev.txt",
    "topics.msmarco-v2-doc.dev2.txt",
    "topics.rag24.raggy-dev.txt",
    "topics.rag24.researchy-dev.txt",
]
model_names = [
    "Snowflake/snowflake-arctic-embed-l",
    "Snowflake/snowflake-arctic-embed-m-v1.5",
]

for model_name in model_names:
    print(f"Running query embeddings using {model_name}")
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(
        model_name,
        add_pooling_layer=False,
    )
    model.eval()
    device = "cuda"
    model = model.to(device)
    for file_name in topic_file_names:
        short_file_name = ".".join(file_name.split(".")[:-1])
        data = []
        print(f"starting on {file_name}")
        with open(file_name, "r") as f:
            for line in f:
                # Topic files are tab-separated: <query_id>\t<query_text>
                line = line.strip().split("\t")
                qid = line[0]
                query_text = line[1]
                queries_with_prefix = [
                    "{}{}".format(query_prefix, i) for i in [query_text]
                ]
                query_tokens = tokenizer(
                    queries_with_prefix,
                    padding=True,
                    truncation=True,
                    return_tensors="pt",
                    max_length=512,
                )
                # Compute token embeddings; the [CLS] vector is the query embedding.
                with torch.autocast(
                    "cuda", dtype=torch.bfloat16
                ), torch.inference_mode():
                    query_embeddings = (
                        model(**query_tokens.to(device))[0][:, 0]
                        .cpu()
                        .to(torch.float32)
                        .detach()
                        .numpy()[0]
                    )
                item = {"id": qid, "text": query_text, "embedding": query_embeddings}
                data.append(item)
        table = pa.Table.from_pylist(data)
        pq.write_table(
            table, f"{model_name.split('/')[1]}-{short_file_name}.parquet"
        )
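
Note the output naming convention, `<model short name>-<topic file>.parquet` (for example, `snowflake-arctic-embed-l-topics.dl21.parquet`). `retrieve_from_shard.py` below rebuilds these names from its `<query_embedding_prefix>` argument, so that prefix should be the model short name plus a trailing hyphen.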
scripts/get_data.sh
ADDED
@@ -0,0 +1,21 @@
# Download and unzip docs
# Note: the tar/gunzip paths below assume the download lands in /home/mltraining.
wget https://msmarco.z22.web.core.windows.net/msmarcoranking/msmarco_v2.1_doc_segmented.tar
tar -xf /home/mltraining/msmarco_v2.1_doc_segmented.tar
gunzip /home/mltraining/msmarco_v2.1_doc_segmented/*

# Download Queries
wget https://raw.githubusercontent.com/castorini/anserini-tools/master/topics-and-qrels/topics.msmarco-v2-doc.dev.txt
wget https://raw.githubusercontent.com/castorini/anserini-tools/master/topics-and-qrels/topics.msmarco-v2-doc.dev2.txt
wget https://raw.githubusercontent.com/castorini/anserini-tools/master/topics-and-qrels/topics.dl21.txt
wget https://raw.githubusercontent.com/castorini/anserini-tools/master/topics-and-qrels/topics.dl22.txt
wget https://raw.githubusercontent.com/castorini/anserini-tools/master/topics-and-qrels/topics.dl23.txt
wget https://raw.githubusercontent.com/castorini/anserini-tools/master/topics-and-qrels/topics.rag24.raggy-dev.txt
wget https://raw.githubusercontent.com/castorini/anserini-tools/master/topics-and-qrels/topics.rag24.researchy-dev.txt

# Download Qrels
wget https://raw.githubusercontent.com/castorini/anserini-tools/master/topics-and-qrels/qrels.rag24.raggy-dev.txt
wget https://raw.githubusercontent.com/castorini/anserini-tools/master/topics-and-qrels/qrels.dl21-doc-msmarco-v2.1.txt
wget https://raw.githubusercontent.com/castorini/anserini-tools/master/topics-and-qrels/qrels.dl22-doc-msmarco-v2.1.txt
wget https://raw.githubusercontent.com/castorini/anserini-tools/master/topics-and-qrels/qrels.dl23-doc-msmarco-v2.1.txt
wget https://raw.githubusercontent.com/castorini/anserini-tools/master/topics-and-qrels/qrels.msmarco-v2.1-doc.dev.txt
wget https://raw.githubusercontent.com/castorini/anserini-tools/master/topics-and-qrels/qrels.msmarco-v2.1-doc.dev2.txt
scripts/merge_retrieved_shard.py
ADDED
@@ -0,0 +1,79 @@
import glob
import json
import pickle
import sys
from typing import Dict

import numpy as np
from beir.retrieval.evaluation import EvaluateRetrieval


def load_qrels(filename: str) -> Dict:
    with open(filename, "r") as f:
        qrels = json.load(f)
    return qrels


def merge_retrieved_shards(
    suffix: str, output_file: str, top_n: int, qrels: dict, metric: str
) -> None:
    shard_files = glob.glob(f"*{suffix}")
    print(f"There are {len(shard_files)} shards found")
    merged_results = {}
    print("Loading all shards")
    for shard_file in shard_files:
        print(f"Loading shard {shard_file}")
        with open(shard_file, "rb") as f:
            shard_results = pickle.load(f)
            for query_id, doc_scores in shard_results.items():
                if query_id not in merged_results:
                    merged_results[query_id] = []
                merged_results[query_id].extend(doc_scores.items())
    print("Shards all loaded, merging results and sorting by score")
    run = {}
    per_query = []
    for query_id, doc_scores in merged_results.items():
        if query_id in qrels:
            doc_score_dict = {}
            for passage_id, score in doc_scores:
                # Everything after "#" is the passage identifier within a doc,
                # so strip it to aggregate passage scores up to the document.
                doc_id = passage_id.split("#")[0]
                if doc_id not in doc_score_dict:
                    # Similarity scores range from -1 to 1, so -1 is a safe floor.
                    doc_score_dict[doc_id] = -1
                # Keep the best-scoring passage per document (max-passage aggregation).
                if score > doc_score_dict[doc_id]:
                    doc_score_dict[doc_id] = score
            top_docs = sorted(doc_score_dict.items(), key=lambda x: x[1], reverse=True)[
                :top_n
            ]
            run[query_id] = {
                doc_id: round(score * 100, 2) for doc_id, score in top_docs
            }
            # Evaluate this query alone to get the per-query metric breakdown.
            scores = EvaluateRetrieval.evaluate(
                qrels, {query_id: run[query_id]}, k_values=[1, 3, 5, 10, 100, 1000]
            )
            scores = {k: v for d in scores for k, v in d.items()}
            per_query.append(scores[metric])
    print("Done merging and sorting results, evaluating and saving run")
    print(f"There are {len(run)} queries being evaluated against qrels")
    print(f"There were {len(shard_files)} shards found")
    print(
        f"Per-query score average: {np.array(per_query).mean()} for {metric}. Individual scores: {per_query}"
    )
    print("Overall score numbers:")
    print(EvaluateRetrieval.evaluate(qrels, run, k_values=[1, 3, 5, 10, 100, 1000]))
    with open(output_file, "wb") as w:
        pickle.dump(run, w)


if __name__ == "__main__":
    suffix = sys.argv[1]
    output_file = sys.argv[2]
    top_n = int(sys.argv[3])
    qrel_filename = sys.argv[4]
    metric = sys.argv[5]
    merge_retrieved_shards(
        suffix, output_file, top_n, load_qrels(qrel_filename), metric
    )
scripts/retrieve.sh
ADDED
@@ -0,0 +1,14 @@
#!/bin/bash

# Define common parameters
base_path="gte-large/"
prefix="gte-large-en-v1.5"
num_shards=59
num_retrieved=100
dim=1024
use_faiss=1
# Loop over all shards (generate_doc_embeddings.py numbers them 00 through 58)
for i in $(seq 0 $((num_shards - 1)))
do
    python retrieve_from_shard.py $base_path $prefix $i $num_retrieved $dim $use_faiss
done
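
As written, retrieve.sh is configured for the 1024-dimensional gte-large-en-v1.5 embeddings; adjust `base_path`, `prefix`, and `dim` (for example 768, 256, or 128 for the arctic-m-v1.5 truncation experiments in the tables above) to sweep other models.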
scripts/retrieve_from_shard.py
ADDED
@@ -0,0 +1,101 @@
import pickle
import sys

import faiss
import numpy as np
import pyarrow.parquet as pq
import torch
import torch.nn.functional as F


def main(
    path: str,
    query_prefix: str,
    shard_num: int,
    retrieval_depth: int,
    num_dim: int,
    use_faiss_gpu: bool = False,
) -> None:
    query_filenames = [
        "topics.dl21.parquet",
        "topics.msmarco-v2-doc.dev2.parquet",
        "topics.dl22.parquet",
        "topics.rag24.raggy-dev.parquet",
        "topics.dl23.parquet",
        "topics.rag24.researchy-dev.parquet",
        "topics.msmarco-v2-doc.dev.parquet",
    ]
    shard_filename = f"{path}{shard_num:02}.parquet"
    print(f"Starting retrieval on chunk {shard_num} for {shard_filename}")
    doc_embeddings = []
    idx2docid = {}
    print("Reading document embeddings file")
    table = pq.read_table(shard_filename)
    print(f"Chunk {shard_filename} loaded with {len(table)} documents")
    for idx in range(len(table)):
        doc_id = str(table[0][idx])
        # Truncate embeddings to num_dim to evaluate lower-dimensional variants.
        doc_embeddings.append(table[1][idx].as_py()[:num_dim])
        idx2docid[idx] = doc_id

    doc_embeddings = torch.tensor(doc_embeddings, dtype=torch.float32)
    print(f"Embeddings loaded. Size {doc_embeddings.shape}")
    doc_embeddings = F.normalize(doc_embeddings, p=2, dim=1)
    print("Document embeddings normalized")

    if use_faiss_gpu:
        # Create a FAISS index on GPU. With unit-norm vectors, L2 nearest
        # neighbors match cosine-similarity ranking (note the values returned
        # by the search are L2 distances, not similarities).
        index = faiss.IndexFlatL2(num_dim)
        index = faiss.index_cpu_to_gpu(faiss.StandardGpuResources(), 0, index)
        index.add(doc_embeddings.numpy())
    else:
        # Use numpy for similarity calculations
        doc_embeddings_numpy = doc_embeddings.numpy()

    for query_filename in query_filenames:
        retrieved_results = {}
        idx2query_id = {}
        query_filename_full = f"{path}{query_prefix}{query_filename}"
        print(f"Retrieving from {shard_filename} for query set {query_filename_full}")
        query_embeddings = []
        print("Loading query embedding file")
        table = pq.read_table(query_filename_full)
        for idx in range(len(table)):
            query_id = str(table[0][idx])
            query_embeddings.append(table[2][idx].as_py()[:num_dim])
            idx2query_id[idx] = query_id
        query_embeddings = torch.tensor(query_embeddings, dtype=torch.float32)
        query_embeddings = F.normalize(query_embeddings, p=2, dim=1)
        print(f"Query embeddings loaded with size {query_embeddings.shape}")

        if use_faiss_gpu:
            # Search the FAISS index on GPU
            similarities, indices = index.search(
                query_embeddings.numpy(), retrieval_depth
            )
            for idx in range(query_embeddings.shape[0]):
                qid = idx2query_id[idx]
                retrieved_results[qid] = {}
                for jdx in range(retrieval_depth):
                    idx_doc = int(indices[idx, jdx])
                    doc_id = idx2docid[idx_doc]
                    retrieved_results[qid][doc_id] = float(similarities[idx, jdx])
        else:
            # Brute-force dot product (cosine similarity, since vectors are normalized)
            for idx in range(query_embeddings.shape[0]):
                similarities = np.dot(
                    query_embeddings[idx].numpy(), doc_embeddings_numpy.T
                )
                top_n = np.argsort(-similarities)[:retrieval_depth]
                qid = idx2query_id[idx]
                retrieved_results[qid] = {}
                for jdx in range(retrieval_depth):
                    idx_doc = int(top_n[jdx])
                    doc_id = idx2docid[idx_doc]
                    retrieved_results[qid][doc_id] = float(similarities[idx_doc])

        with open(f"{shard_num}-{query_prefix}{num_dim}-{query_filename}", "wb") as w:
            pickle.dump(retrieved_results, w)


if __name__ == "__main__":
    path = sys.argv[1]
    query_prefix = sys.argv[2]
    shard_num = int(sys.argv[3])
    retrieval_depth = int(sys.argv[4])
    num_dim = int(sys.argv[5])
    use_faiss_gpu = bool(int(sys.argv[6]))  # 0 for numpy, 1 for FAISS GPU
    main(path, query_prefix, shard_num, retrieval_depth, num_dim, use_faiss_gpu)
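
One detail worth noting: the FAISS path indexes with `IndexFlatL2` while the numpy path ranks by dot product, but because both document and query vectors are L2-normalized first, the two produce the same ranking: for unit vectors, squared L2 distance is a monotone function of cosine similarity. A small check of that identity:

```python
import numpy as np

# For unit vectors q and d: ||q - d||^2 = 2 - 2 * dot(q, d),
# so smallest-L2 neighbors are exactly the highest-cosine neighbors.
rng = np.random.default_rng(0)
q = rng.standard_normal(8); q /= np.linalg.norm(q)
d = rng.standard_normal(8); d /= np.linalg.norm(d)
assert np.isclose(np.sum((q - d) ** 2), 2 - 2 * np.dot(q, d))
```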