---
language:
- en
size_categories:
- 1B<n<10B
task_categories:
- text-generation
pretty_name: AgentSearch-V1
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: metadata
    dtype: string
  - name: dataset
    dtype: string
  - name: text_chunks
    sequence: string
  - name: embeddings
    sequence:
      sequence: float64
  splits:
  - name: train
    num_bytes: 40563228
    num_examples: 1000
  download_size: 34541852
  dataset_size: 40563228
---

### Getting Started

The AgentSearch-V1 dataset comprises over one billion embeddings sourced from more than 50 million high-quality documents. The collection covers the majority of arXiv, Wikipedia, and Project Gutenberg, along with quality-filtered Common Crawl data.

To access and use the AgentSearch-V1 dataset, you can stream it via Hugging Face with the following Python code:

```python
from datasets import load_dataset

# Stream the entire dataset:
ds = load_dataset("SciPhi/AgentSearch-V1", data_files="**/*", streaming=True)

# Optionally, stream only the "arxiv" subset:
ds = load_dataset("SciPhi/AgentSearch-V1", data_files="arxiv/*", streaming=True)
```
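
Streaming returns an `IterableDataset`, so records arrive lazily as you iterate. A minimal sketch of inspecting the first few records; field names follow the schema described under "Dataset Structure" below, and `train` is the default split name when loading from raw data files.

```python
from datasets import load_dataset

# Peek at a few records from the stream without downloading the full dataset.
ds = load_dataset("SciPhi/AgentSearch-V1", data_files="**/*", streaming=True)

for i, record in enumerate(ds["train"]):
    print(record["url"], "-", record["title"])
    print(f'  {len(record["text_chunks"])} chunks, {len(record["embeddings"])} embedding vectors')
    if i == 2:
        break
```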

---

A full set of scripts to recreate the dataset from scratch can be found [here](https://github.com/SciPhi/agent-search).

### Dataset Summary

We take a similar approach to RedPajama-v1 and divide AgentSearch into a number of categories:

| Dataset        | Token Count |
|----------------|-------------|
| Books          | x Billion   |
| ArXiv          | x Billion   |
| Wikipedia      | x Billion   |
| StackExchange  | x Billion   |
| OpenMath       | x Billion   |
| Filtered Crawl | x Billion   |
| Total          | x Billion   |
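
Each record also carries its category in a `dataset` field, so you can filter the stream directly instead of selecting data files by path. A minimal sketch, assuming the category labels match the values listed under "Dataset Structure" below:

```python
from datasets import load_dataset

# Stream everything, then keep only records whose "dataset" field is "wikipedia".
ds = load_dataset("SciPhi/AgentSearch-V1", data_files="**/*", streaming=True)
wiki = ds["train"].filter(lambda record: record["dataset"] == "wikipedia")

for record in wiki.take(3):  # inspect a few matching records
    print(record["title"], record["url"])
```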

### Languages

English.

## Dataset Structure

The raw dataset structure is as follows:

```json
{
  "url": ...,
  "title": ...,
  "metadata": {"url": "...", "timestamp": "...", "source": "...", "language": "...", ...},
  "text_chunks": ...,
  "embeddings": ...,
  "dataset": "github" | "books" | "arxiv" | "wikipedia" | "stackexchange" | "open-math" | "filtered-rp2"
}
```
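
Note that in the Hugging Face features above, `metadata` is typed as a string (presumably a JSON-encoded form of the object shown here), and `text_chunks` and `embeddings` are parallel sequences. A minimal sketch of unpacking one streamed record under those assumptions:

```python
import json

from datasets import load_dataset

ds = load_dataset("SciPhi/AgentSearch-V1", data_files="**/*", streaming=True)
record = next(iter(ds["train"]))

# The metadata column arrives as a string; assume it decodes to the dict shown above.
meta = json.loads(record["metadata"])
print(meta.get("source"), meta.get("language"), meta.get("timestamp"))

# Pair each text chunk with its embedding vector (assumed one vector per chunk).
for chunk, vector in zip(record["text_chunks"], record["embeddings"]):
    print(len(vector), "dims:", chunk[:80])
```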

The indexed dataset can be downloaded directly and is structured as a Qdrant database dump; each entry has the metadata {"url", "vector"}. In addition, there is a corresponding SQLite database that maps each URL onto its embeddings, text chunks, and other metadata.
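
A rough sketch of how the two stores could be combined at query time is shown below; the collection name, SQLite table, and column names are illustrative assumptions, not the actual schema of the dump.

```python
import sqlite3

from qdrant_client import QdrantClient

# Hypothetical names; adjust to the collection/table layout of the actual dump.
COLLECTION = "agent_search"
SQLITE_PATH = "agent_search.db"

client = QdrantClient(host="localhost", port=6333)
conn = sqlite3.connect(SQLITE_PATH)


def search(query_vector: list[float], limit: int = 5):
    """Vector search in Qdrant, then hydrate each hit from the SQLite mapping."""
    hits = client.search(collection_name=COLLECTION, query_vector=query_vector, limit=limit)
    results = []
    for hit in hits:
        url = hit.payload["url"]  # each indexed entry stores its source URL
        row = conn.execute(
            "SELECT text_chunks, metadata FROM documents WHERE url = ?", (url,)
        ).fetchone()
        results.append({"url": url, "score": hit.score, "record": row})
    return results
```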

## Dataset Creation

This dataset was created as a step towards making humanity's most important knowledge locally searchable and optimized for use with LLMs. It was built by filtering, cleaning, and augmenting publicly available datasets.

To cite our work, please use the following:

```
@software{SciPhi2023AgentSearch,
  author = {SciPhi},
  title  = {AgentSearch [ΨΦ]: A Comprehensive Agent-First Framework and Dataset for Webscale Search},
  year   = {2023},
  url    = {https://github.com/SciPhi-AI/agent-search}
}
```

### Source Data

```
@ONLINE{wikidump,
  author = "Wikimedia Foundation",
  title  = "Wikimedia Downloads",
  url    = "https://dumps.wikimedia.org"
}
```

```
@misc{paster2023openwebmath,
  title         = {OpenWebMath: An Open Dataset of High-Quality Mathematical Web Text},
  author        = {Keiran Paster and Marco Dos Santos and Zhangir Azerbayev and Jimmy Ba},
  year          = {2023},
  eprint        = {2310.06786},
  archivePrefix = {arXiv},
  primaryClass  = {cs.AI}
}
```

```
@software{together2023redpajama,
  author = {Together Computer},
  title  = {RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
  month  = April,
  year   = 2023,
  url    = {https://github.com/togethercomputer/RedPajama-Data}
}
```

### License

Please refer to the licenses of the data subsets you use:

* [Open-Web (Common Crawl Foundation Terms of Use)](https://commoncrawl.org/terms-of-use/full/)
* Books: [the_pile_books3 license](https://huggingface.co/datasets/the_pile_books3#licensing-information) and [pg19 license](https://huggingface.co/datasets/pg19#licensing-information)
* [ArXiv Terms of Use](https://info.arxiv.org/help/api/tou.html)
* [Wikipedia License](https://huggingface.co/datasets/wikipedia#licensing-information)
* [StackExchange license on the Internet Archive](https://archive.org/details/stackexchange)