---
license: mit
---

<p align="center">
📑 <a href="" target="_blank">Paper</a>&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;🔨 <a href="https://huggingface.co/hkust-nlp/PreSelect-classifier" target="_blank">fastText Classifier</a>&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;🤗 <a href="https://huggingface.co/datasets/hkust-nlp/PreSelect-100B" target="_blank">Released Dataset</a>&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;📦 <a href="https://github.com/hkust-nlp/preselect" target="_blank">Repo</a>
<br>
</p>

PreSelect-100B is a curated pretraining dataset of roughly 100B tokens that achieves strong performance on a variety of downstream benchmarks.
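
The dataset should be loadable with the standard 🤗 `datasets` library. Below is a minimal sketch; the `train` split name is an assumption, and streaming avoids materializing ~100B tokens on disk:

```python
from datasets import load_dataset

# Stream the dataset rather than downloading all ~100B tokens at once
# (split name "train" is an assumption).
ds = load_dataset("hkust-nlp/PreSelect-100B", split="train", streaming=True)

for example in ds:
    print(example)  # field names depend on the dataset schema
    break
```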
It is filtered by the [PreSelect-Classifier](https://huggingface.co/hkust-nlp/PreSelect-classifier) at a 10% threshold. The candidate pool is a randomly sampled subset of [DCLM-refinedweb](https://data.commoncrawl.org/contrib/datacomp/DCLM-refinedweb/index.html), a cleaned version of the Common Crawl raw data without any model-based filtering.
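
Since the classifier is a fastText model, scoring a candidate document reduces to a single `predict` call, and the 10% threshold corresponds to keeping the top-scoring 10% of the pool. The sketch below illustrates this kind of filtering; the checkpoint filename (`PreSelect-classifier.bin`) and the label convention (`__label__1` marking documents to keep) are assumptions for illustration, not details confirmed by this card.

```python
import fasttext
from huggingface_hub import hf_hub_download

# Download the fastText weights from the Hub (filename is an assumption).
model_path = hf_hub_download(
    repo_id="hkust-nlp/PreSelect-classifier",
    filename="PreSelect-classifier.bin",
)
model = fasttext.load_model(model_path)

def keep_score(text: str) -> float:
    """Probability that a document should be kept (label name assumed)."""
    # fastText expects single-line input, so strip newlines first.
    labels, probs = model.predict(text.replace("\n", " "))
    return probs[0] if labels[0] == "__label__1" else 1.0 - probs[0]

# Keep the top 10% of the candidate pool by classifier score.
pool = ["first candidate document ...", "second candidate document ..."]
ranked = sorted(pool, key=keep_score, reverse=True)
kept = ranked[: max(1, len(pool) // 10)]
```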

### Benchmark results

Training on the PreSelect-curated dataset yields superior results compared with other data selection methods across various downstream tasks; the comparisons are shown below.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/62c6ffe1755b4c9e1a8eaf7e/5u_9hnmHrg_mS8jmKTMCz.png)

### Citation