Files changed (1): README.md (+221 -210)
---
configs:
- config_name: en
  default: true
  data_files:
  - split: train
    path: "data/en/*.parquet"
- config_name: de
  data_files:
  - split: train
    path: "data/de/*.parquet"
- config_name: fr
  data_files:
  - split: train
    path: "data/fr/*.parquet"
- config_name: ru
  data_files:
  - split: train
    path: "data/ru/*.parquet"
- config_name: es
  data_files:
  - split: train
    path: "data/es/*.parquet"
- config_name: it
  data_files:
  - split: train
    path: "data/it/*.parquet"
- config_name: ja
  data_files:
  - split: train
    path: "data/ja/*.parquet"
- config_name: pt
  data_files:
  - split: train
    path: "data/pt/*.parquet"
- config_name: zh
  data_files:
  - split: train
    path: "data/zh/*.parquet"
- config_name: fa
  data_files:
  - split: train
    path: "data/fa/*.parquet"
- config_name: tr
  data_files:
  - split: train
    path: "data/tr/*.parquet"
license: apache-2.0
language:
- en
- de
- es
- fa
- fr
- it
- ja
- pt
- ru
- tr
- zh
---
# Wikipedia Embeddings with BGE-M3

This dataset contains embeddings from the
[June 2024 Wikipedia dump](https://dumps.wikimedia.org/wikidatawiki/20240601/)
for the 11 most popular languages.

The embeddings are generated with the multilingual
[BGE-M3](https://huggingface.co/BAAI/bge-m3) model.

The dataset consists of Wikipedia articles split into paragraphs
and embedded with the aforementioned model.

To enhance search quality, each paragraph is prefixed with its
article title before embedding.

Additionally, paragraphs containing fewer than 100 characters,
which tend to have low information density, are excluded from the dataset.
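
For illustration, here is a minimal sketch of that preprocessing. It assumes the title is
simply prepended to the paragraph text with a newline; the exact prefix format used for this
dataset is not documented here, and the example article text is made up.

```python
from sentence_transformers import SentenceTransformer

# Hypothetical example article, not taken from the dataset itself.
title = "Alabama"
paragraphs = [
    "Alabama is a state in the Southeastern region of the United States. "
    "It borders Tennessee to the north, Georgia to the east, Florida and "
    "the Gulf of Mexico to the south, and Mississippi to the west.",
    "The Yellowhammer State is Alabama's nickname.",  # under 100 characters, dropped
]

model = SentenceTransformer("BAAI/bge-m3")

# Keep only paragraphs with at least 100 characters and prefix each one
# with the article title before embedding.
# Assumption: title and paragraph are joined with a newline; the real format may differ.
texts = [f"{title}\n{p}" for p in paragraphs if len(p) >= 100]
embeddings = model.encode(texts, normalize_embeddings=True)

print(embeddings.shape)  # (1, 1024): only the long paragraph is kept
```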

The dataset contains approximately 144 million vector embeddings in total.

| Language   | Config Name | Embeddings  |
|------------|-------------|-------------|
| English    | en          | 47_018_430  |
| German     | de          | 20_213_669  |
| French     | fr          | 18_324_060  |
| Russian    | ru          | 13_618_886  |
| Spanish    | es          | 13_194_999  |
| Italian    | it          | 10_092_524  |
| Japanese   | ja          | 7_769_997   |
| Portuguese | pt          | 5_948_941   |
| Chinese    | zh          | 3_306_397   |
| Farsi      | fa          | 2_598_251   |
| Turkish    | tr          | 2_051_157   |
| **Total**  |             | 144_137_311 |

## Loading the Dataset

You can load the entire dataset for a language as follows.
Please note that for some languages, the download size may be quite large.

```python
from datasets import load_dataset

dataset = load_dataset("Upstash/wikipedia-2024-06-bge-m3", "en", split="train")
```

Alternatively, you can stream portions of the dataset as needed.

```python
from datasets import load_dataset

dataset = load_dataset(
    "Upstash/wikipedia-2024-06-bge-m3", "en", split="train", streaming=True
)

for data in dataset:
    data_id = data["id"]
    url = data["url"]
    title = data["title"]
    text = data["text"]
    embedding = data["embedding"]
    # Do some work
    break
```
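
When only a sample is needed, the stream can also be sliced lazily. A small sketch using the
`datasets` streaming API; the `shuffle` and `take` calls below operate on the stream without
downloading the full config.

```python
from datasets import load_dataset

dataset = load_dataset(
    "Upstash/wikipedia-2024-06-bge-m3", "en", split="train", streaming=True
)

# Shuffle with a bounded buffer and take the first 1_000 records of the stream.
sample = dataset.shuffle(seed=42, buffer_size=10_000).take(1_000)

for data in sample:
    print(data["title"])
    break
```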

## Using the Dataset

One potential use case for the dataset is enabling similarity search
by integrating it with a vector database.

In fact, we have developed a vector database that allows you to search
through the Wikipedia articles. Additionally, it includes a
[RAG (Retrieval-Augmented Generation)](https://github.com/upstash/rag-chat) chatbot,
which lets you chat with an assistant grounded in the dataset.

For more details, see this [blog post](https://upstash.com/blog/indexing-wikipedia),
and be sure to check out the
[search engine and chatbot](https://wikipedia-semantic-search.vercel.app) yourself.

For reference, here is a rough outline of how to implement semantic search
functionality using this dataset and Upstash Vector.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from upstash_vector import Index

# The Upstash Vector index should be created with the dimension set to 1024
# and the similarity function set to dot product.
index = Index(
    url="<UPSTASH_VECTOR_REST_URL>",
    token="<UPSTASH_VECTOR_REST_TOKEN>",
)

vectors = []
batch_size = 200

dataset = load_dataset(
    "Upstash/wikipedia-2024-06-bge-m3", "en", split="train", streaming=True
)

for data in dataset:
    data_id = data["id"]
    url = data["url"]
    title = data["title"]
    text = data["text"]
    embedding = data["embedding"]

    metadata = {
        "url": url,
        "title": title,
    }

    vector = (
        data_id,    # Unique vector id
        embedding,  # Vector embedding
        metadata,   # Optional, JSON-like metadata
        text,       # Optional, unstructured text data
    )
    vectors.append(vector)

    # Stop after a single batch for demonstration purposes
    if len(vectors) == batch_size:
        break

# Upload the collected batch of embeddings into Upstash Vector
index.upsert(
    vectors=vectors,
    namespace="en",
)

# Create the query vector
transformer = SentenceTransformer(
    "BAAI/bge-m3",
    device="cuda",
    revision="babcf60cae0a1f438d7ade582983d4ba462303c2",
)

query = "Which state has the nickname Yellowhammer State?"
query_vector = transformer.encode(
    sentences=query,
    show_progress_bar=False,
    normalize_embeddings=True,
)

results = index.query(
    vector=query_vector,
    top_k=2,
    include_metadata=True,
    include_data=True,
    namespace="en",
)

# Query results are sorted in descending order of similarity
for result in results:
    print(result.id)        # Unique vector id
    print(result.score)     # Similarity score to the query vector
    print(result.metadata)  # Metadata associated with vector
    print(result.data)      # Unstructured data associated with vector
    print("---")
```
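
If you do not want to run a vector database at all, the embeddings can also be compared
locally. Below is a minimal sketch using NumPy over a small streamed sample, assuming the
stored embeddings are unit-normalized (the example above queries with a normalized vector
and dot-product similarity), so the dot product behaves like cosine similarity.

```python
import numpy as np
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

# Stream a small sample of records to keep the example lightweight.
dataset = load_dataset(
    "Upstash/wikipedia-2024-06-bge-m3", "en", split="train", streaming=True
)
records = [data for _, data in zip(range(1_000), dataset)]
matrix = np.array([r["embedding"] for r in records], dtype=np.float32)

transformer = SentenceTransformer("BAAI/bge-m3")
query = "Which state has the nickname Yellowhammer State?"
query_vector = transformer.encode(query, normalize_embeddings=True)

# Dot product against every sampled embedding; highest scores first.
scores = matrix @ query_vector
for i in np.argsort(-scores)[:2]:
    print(records[i]["title"], float(scores[i]))
```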