---
language:
- en
pretty_name: SlimPajama_300B
---

SlimPajama_300B is a 300B-token sample of the de-duplicated SlimPajama dataset, tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.

Due to file size constraints, C4 and CommonCrawl have been uploaded in multiple chunks. You can use the following commands to merge them back into single files:
```bash
cat C4_part_* > C4.bin
cat CommonCrawl_part_* > CommonCrawl.bin
```
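After merging, a memory-mapped read is a cheap way to sanity-check a file without loading it fully into RAM. The sketch below assumes the `.bin` files are flat arrays of raw token ids; the dtype is a guess (the card does not state the on-disk format, but the gpt-neox-20b vocabulary of ~50k ids fits in `uint16`), so verify it against your own files:

```python
import numpy as np

def peek_tokens(path, dtype=np.uint16, n=10):
    """Memory-map a flat token-id file and return its length and first n ids.

    dtype is an assumption: gpt-neox-20b's vocab (~50k) fits in uint16,
    but check this against the actual files before relying on it.
    """
    tokens = np.memmap(path, dtype=dtype, mode="r")
    return len(tokens), tokens[:n].tolist()
```

For example, `peek_tokens("C4.bin")` would report the total token count of the merged file, which you can compare against the sum of the chunk sizes.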

#### Data Distribution

| Data source   | Composition                     |
| ------------- | ------------------------------- |
| CommonCrawl   | 0.5208                          |
| C4            | 0.2668                          |
| GitHub        | 0.0522                          |
| Books         | 0.0420                          |
| ArXiv         | 0.0442                          |
| Wikipedia     | 0.0399                          |
| StackExchange | 0.0337                          |
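If you want to reproduce this mixture when interleaving your own shards, the composition fractions above can be used directly as sampling weights. A minimal sketch (the function name is illustrative, not part of the dataset):

```python
import random

# Source mixing fractions from the Data Distribution table above.
WEIGHTS = {
    "CommonCrawl": 0.5208,
    "C4": 0.2668,
    "GitHub": 0.0522,
    "Books": 0.0420,
    "ArXiv": 0.0442,
    "Wikipedia": 0.0399,
    "StackExchange": 0.0337,
}

def sample_source(rng=random):
    """Pick a data source with probability proportional to its table weight."""
    sources, weights = zip(*WEIGHTS.items())
    return rng.choices(sources, weights=weights, k=1)[0]
```

Note that the fractions sum to 0.9996 rather than exactly 1 (presumably rounding in the table); `random.choices` normalizes the weights, so this does not need correcting.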