Commit 1984f0e (parent 3707754) by emrgnt-cmplxty: Update README.md
---
language:
- en
pretty_name: OpenSERP-V1
task_categories:
- text-generation
size_categories:
- 1B<n<10B
---

### Getting Started

The OpenSERP-V1 dataset includes full embeddings for over 50 million high-quality documents. This extensive collection encompasses the majority of the content from sources such as ArXiv, Wikipedia, and Project Gutenberg, and includes quality-filtered Common Crawl data.

To access the OpenSERP-V1 dataset, you can download it via HuggingFace with the following Python code:

```python
from datasets import load_dataset

ds = load_dataset("SciPhi/OpenSERP-V1")

# Optional: load just the "arxiv" subset
ds = load_dataset("SciPhi/OpenSERP-V1", "arxiv")
```

---

A full set of scripts to recreate the dataset from scratch can be found [here](https://github.com/SciPhi/OpenSERP).

### Dataset Summary

OpenSERP is divided into a number of categories, similar to RedPajama-V1.

| Dataset        | Token Count |
|----------------|-------------|
| Books          | x Billion   |
| ArXiv          | x Billion   |
| Wikipedia      | x Billion   |
| StackExchange  | x Billion   |
| OpenMath       | x Billion   |
| Filtered Crawl | x Billion   |
| Total          | x Billion   |

### Languages

English.

## Dataset Structure

The raw dataset structure is as follows:

```json
{
  "url": ...,
  "title": ...,
  "metadata": {"url": "...", "timestamp": "...", "source": "...", "language": "...", ...},
  "text_chunks": ...,
  "embeddings": ...,
  "dataset": "github" | "books" | "arxiv" | "wikipedia" | "stackexchange" | "open-math" | "filtered-rp2"
}
```
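
To illustrate how these fields fit together, here is a minimal sketch that ranks a record's `text_chunks` against a query embedding by cosine similarity. The record below is fabricated for the example (real embeddings have far more dimensions), and `rank_chunks` is an illustrative helper, not part of the dataset's tooling:

```python
import numpy as np

# A fabricated record following the schema above (toy 2-dimensional embeddings).
record = {
    "url": "https://en.wikipedia.org/wiki/Example",
    "title": "Example",
    "metadata": {"url": "https://en.wikipedia.org/wiki/Example",
                 "timestamp": "2023-01-01", "source": "wikipedia", "language": "en"},
    "text_chunks": ["first chunk", "second chunk", "third chunk"],
    "embeddings": [[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]],
    "dataset": "wikipedia",
}

def rank_chunks(record, query_vec):
    """Return the record's text chunks sorted by cosine similarity to query_vec."""
    emb = np.asarray(record["embeddings"], dtype=float)
    q = np.asarray(query_vec, dtype=float)
    sims = emb @ q / (np.linalg.norm(emb, axis=1) * np.linalg.norm(q))
    order = np.argsort(-sims)  # highest similarity first
    return [(record["text_chunks"][i], float(sims[i])) for i in order]

best_chunk, best_score = rank_chunks(record, [0.7, 0.3])[0]
```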

The indexed dataset is structured as a Qdrant database dump; each entry has metadata {"url", "vector"}.

## Dataset Creation

This dataset was created to make humanity's most important knowledge locally searchable. It was created by filtering, cleaning, and augmenting publicly available datasets.

The embedding vectors have been indexed and made searchable via a Qdrant database.
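
Setting up Qdrant itself is out of scope here, but the kind of url-to-vector lookup such an index supports can be sketched with a brute-force in-memory stand-in. `InMemoryVectorIndex` and the example vectors below are purely illustrative, not part of the released tooling:

```python
import numpy as np

class InMemoryVectorIndex:
    """Brute-force stand-in for the Qdrant index: stores {url, vector}
    entries and returns the urls nearest to a query vector."""

    def __init__(self):
        self.urls, self.vectors = [], []

    def upsert(self, url, vector):
        self.urls.append(url)
        self.vectors.append(vector)

    def search(self, query, limit=3):
        mat = np.asarray(self.vectors, dtype=float)
        q = np.asarray(query, dtype=float)
        sims = mat @ q / (np.linalg.norm(mat, axis=1) * np.linalg.norm(q))
        top = np.argsort(-sims)[:limit]  # indices of the best matches
        return [(self.urls[i], float(sims[i])) for i in top]

index = InMemoryVectorIndex()
index.upsert("https://arxiv.org/abs/1706.03762", [1.0, 0.0])
index.upsert("https://en.wikipedia.org/wiki/Transformer", [0.9, 0.1])
index.upsert("https://www.gutenberg.org/ebooks/1342", [0.0, 1.0])

hits = index.search([1.0, 0.05], limit=2)
```

A production deployment would delegate `upsert` and `search` to the Qdrant client against the published database dump; the shape of the query and result is the same.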

### Source Data

```
@ONLINE{wikidump,
    author = "Wikimedia Foundation",
    title  = "Wikimedia Downloads",
    url    = "https://dumps.wikimedia.org"
}
```

```
@misc{paster2023openwebmath,
    title={OpenWebMath: An Open Dataset of High-Quality Mathematical Web Text},
    author={Keiran Paster and Marco Dos Santos and Zhangir Azerbayev and Jimmy Ba},
    year={2023},
    eprint={2310.06786},
    archivePrefix={arXiv},
    primaryClass={cs.AI}
}
```

```
@software{together2023redpajama,
    author = {Together Computer},
    title = {RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
    month = April,
    year = 2023,
    url = {https://github.com/togethercomputer/RedPajama-Data}
}
```

### License
Please refer to the licenses of the data subsets you use.

* [Open-Web (Common Crawl Foundation Terms of Use)](https://commoncrawl.org/terms-of-use/full/)
* Books: [the_pile_books3 license](https://huggingface.co/datasets/the_pile_books3#licensing-information) and [pg19 license](https://huggingface.co/datasets/pg19#licensing-information)
* [ArXiv Terms of Use](https://info.arxiv.org/help/api/tou.html)
* [Wikipedia License](https://huggingface.co/datasets/wikipedia#licensing-information)
* [StackExchange license on the Internet Archive](https://archive.org/details/stackexchange)

<!--
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
-->