nielsr (HF staff) committed · Commit e105fa3 · verified · 1 parent: 3b19b67

Improve dataset card with task category and GitHub link


This PR improves the dataset card by:

- Adding the `text-generation` task category.
- Adding relevant tags for better searchability.
- Including a link to the GitHub repository for code and further details.

Files changed (1): README.md (+9, -4)
@@ -1,6 +1,13 @@
 ---
 license: mit
+task_categories:
+- text-generation
+tags:
+- data-selection
+- pretraining
+- efficient-training
 ---
+
 <p align="center">
 📑 <a href="https://arxiv.org/abs/2503.00808" target="_blank">Paper</a> &nbsp&nbsp | &nbsp&nbsp 🔨 <a href="https://huggingface.co/hkust-nlp/preselect-fasttext-classifier" target="_blank">fastText Classifier</a> &nbsp&nbsp | &nbsp&nbsp 🤗 <a href="https://huggingface.co/datasets/hkust-nlp/PreSelect-100B" target="_blank">Released Dataset</a> &nbsp&nbsp | &nbsp&nbsp 📦 <a href="https://github.com/hkust-nlp/PreSelect" target="_blank">Repo</a>
 <br>
@@ -10,7 +17,7 @@ PreSelect-100B is a curated ~100B token pretraining dataset that achieves great
 It is filtered by [PreSelect-Classifier](https://huggingface.co/hkust-nlp/PreSelect-classifier) at 10% threshold, where the pool is a randomly sampled subset of [DCLM-refinedweb](https://data.commoncrawl.org/contrib/datacomp/DCLM-refinedweb/index.html), which is a cleaned version of Common Crawl raw data but without any model-based filtering.
 
 ### Benchmark results
-Trianing using PreSelect curated dataset achieve superior results than other dataset selection methods on various downstream tasks and below are comparisons.
+Training using the PreSelect curated dataset achieves superior results than other dataset selection methods on various downstream tasks, as shown in the comparison below.
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/641c9662043963b1c0a1df52/_2eDuE5K06giMepA_lNSp.png)
 
@@ -24,6 +31,4 @@ If you find this work helpful, please kindly cite as:
   year={2025},
   eprint={2503.00808},
 }
-```
-
-
+```
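For readers who want to try the released data, below is a minimal loading sketch using the 🤗 `datasets` library listed on the card. Streaming is used because the corpus is roughly 100B tokens; the `train` split name and the record schema are assumptions, not something this card confirms.

```python
from datasets import load_dataset

# Stream the corpus instead of downloading ~100B tokens up front.
# The repo id comes from the card; the "train" split name is an assumption.
ds = load_dataset("hkust-nlp/PreSelect-100B", split="train", streaming=True)

# Peek at a few records to inspect the schema (field names are not
# documented on this card, so avoid hard-coding a specific key).
for example in ds.take(3):
    print(example)
```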