Update README.md
README.md CHANGED
@@ -182,4 +182,53 @@ configs:
    path: "data/YoutubeSubtitles/train/*.arrow"
  - split: test
    path: "data/YoutubeSutitles/test/*.arrow"
---

# Dataset description

[The Pile](https://arxiv.org/abs/2101.00027) is an 800GB dataset of English text
designed by EleutherAI to train large-scale language models. The original version of
the dataset can be found [here](https://huggingface.co/datasets/EleutherAI/pile).

The dataset is divided into 22 smaller high-quality datasets. For more information about
each of them, please refer to [the datasheet for the Pile](https://arxiv.org/abs/2201.07311).

However, the current version of the dataset available on the Hub is not split accordingly.
We solved this problem in order to improve the user experience when working with the Pile via the Hub.

Here is an instance of the Pile:

```
{
    'meta': {'pile_set_name': 'Pile-CC'},
    'text': 'It is done, and submitted. You can play “Survival of the Tastiest” on Android, and on the web. Playing on...'
}
```
We used the `meta` column to divide the dataset into subsets. Each instance `example` belongs to the subset
`domain`, where `domain = example['meta']['pile_set_name']`. By doing this, we were able to create a [new version of the Pile](https://huggingface.co/datasets/ArmelR/sharded-pile)
that is properly divided, each instance having a new column `domain`.
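
For illustration, here is a minimal sketch of how such a `domain` column can be derived with the `datasets` library. It is not the exact script used to build the sharded version; loading `EleutherAI/pile` in streaming mode and the `ArXiv` filter are assumptions made for the example.

```python
from datasets import load_dataset

# Stream the original (unsplit) Pile so the full 800GB is not downloaded up front.
pile = load_dataset("EleutherAI/pile", split="train", streaming=True)

# Derive a `domain` column from the `meta` field of each example.
pile = pile.map(lambda example: {**example, "domain": example["meta"]["pile_set_name"]})

# Keep only one subset, e.g. ArXiv.
arxiv_only = pile.filter(lambda example: example["domain"] == "ArXiv")
```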

We further split each subset into train/test (97%/3%) to build the current dataset, which has the following structure:

```
data
    ArXiv
        train
        test
    BookCorpus2
        train
        test
    Books3
        train
        test
```
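
As a rough illustration of that split, `Dataset.train_test_split` from the `datasets` library produces the same kind of 97%/3% partition. The toy data and the seed below are assumptions for the example, not the exact procedure used to build the dataset.

```python
from datasets import Dataset

# Toy stand-in for one domain subset (e.g. ArXiv) of the sharded Pile.
subset = Dataset.from_dict({
    "text": [f"document {i}" for i in range(100)],
    "domain": ["ArXiv"] * 100,
})

# 97% train / 3% test, matching the structure described above.
splits = subset.train_test_split(test_size=0.03, seed=42)
print(splits["train"].num_rows, splits["test"].num_rows)  # 97 3
```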

# Usage

```python
from datasets import load_dataset

dataset = load_dataset("ArmelR/the-pile-splitted", "subset_of_interest", num_proc=8)
```

Using `subset_of_interest = "default"` would load the whole dataset.
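
For example, to load a single subset (here `ArXiv`, assuming the configuration names match the directory names above) and look at one record:

```python
from datasets import load_dataset

# Load only the ArXiv subset; it comes with "train" and "test" splits.
arxiv = load_dataset("ArmelR/the-pile-splitted", "ArXiv", num_proc=8)

print(arxiv)              # DatasetDict with "train" and "test" Datasets
print(arxiv["train"][0])  # one example with its `text`, `meta` and `domain` fields
```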