ksshumab committed · verified · Commit 7702eb5 · 1 Parent(s): 65e201e

Update README.md

---
license: mit
---
<p align="center">
📑 <a href="" target="_blank">Paper</a> &nbsp;&nbsp; | &nbsp;&nbsp; 🔨 <a href="https://huggingface.co/hkust-nlp/PreSelect-classifier" target="_blank">fastText Classifier</a> &nbsp;&nbsp; | &nbsp;&nbsp; 🤗 <a href="https://huggingface.co/datasets/hkust-nlp/PreSelect-100B" target="_blank">Released Dataset</a> &nbsp;&nbsp; | &nbsp;&nbsp; 📦 <a href="https://github.com/hkust-nlp/preselect" target="_blank">Repo</a>
<br>
</p>

PreSelect-100B is a curated pretraining dataset of roughly 100B tokens that achieves strong performance on a range of downstream benchmarks.
It was filtered with the [PreSelect-Classifier](https://huggingface.co/hkust-nlp/PreSelect-classifier) at a 10% threshold; the candidate pool is a randomly sampled subset of [DCLM-refinedweb](https://data.commoncrawl.org/contrib/datacomp/DCLM-refinedweb/index.html), a cleaned version of the Common Crawl raw data without any model-based filtering.
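The 10% threshold above amounts to ranking every document in the pool by classifier score and keeping the top tenth. A minimal sketch of that selection step (the `score_document` stand-in below is hypothetical — the actual pipeline scores documents with the fastText PreSelect-classifier, not this toy function):

```python
def score_document(text: str) -> float:
    # Placeholder quality scorer (NOT the real classifier):
    # it simply favors longer documents, capped at 1.0.
    return min(len(text) / 1000.0, 1.0)

def select_top_fraction(docs, fraction=0.10, scorer=score_document):
    """Keep the highest-scoring `fraction` of `docs` (at least one)."""
    ranked = sorted(docs, key=scorer, reverse=True)
    keep = max(1, int(len(ranked) * fraction))
    return ranked[:keep]

# A toy pool of 20 documents of increasing length.
pool = [f"document {i} " * (i + 1) for i in range(20)]
selected = select_top_fraction(pool, fraction=0.10)
print(len(selected))  # 2 of the 20 documents survive the 10% threshold
```

Swapping `scorer` for a real classifier call would reproduce the described filtering at any pool size; only the scoring function changes, not the thresholding logic.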

### Benchmark results
Training on the PreSelect-curated dataset achieves better results than other data-selection methods on various downstream tasks; the comparisons are shown below.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/641c9662043963b1c0a1df52/_2eDuE5K06giMepA_lNSp.png)

### Citation