Update README.md
README.md CHANGED
```diff
@@ -4,12 +4,12 @@ configs:
   data_files:
   - split: train
     path:
-    - "wiki/archive/
+    - "wiki/archive/v3/documents/*.jsonl.gz"
 - config_name: wikiteam
   data_files:
   - split: train
     path:
-    - "wiki/archive/
+    - "wiki/archive/v3/documents/*.jsonl.gz"
 - config_name: wikimedia
   data_files:
   - split: train
@@ -28,5 +28,6 @@ Preprocessed versions of openly licensed wiki dumps collected by wikiteam and ho
 * `v0`: Wikitext parsed to plain text with `wtf_wikipedia` and conversion of math templates to LaTeX.
 * `v1`: Removal of some HTML snippets left behind during parsing.
 * `v2`: Removal of documents that are basically just transcripts of non-openly licensed works.
+* `v3`: Removal of documents that are basically just lyrics of non-openly licensed works.

 Note: The `wikiteam3` scraping tool, used for most of the dumps, doesn't format edits to pages as `revisions` in the XML output; instead it creates new `pages`. Thus some documents in this dataset are earlier versions of various pages. For large edits this duplication can be beneficial, but for small edits it results in near-duplicate documents. Some sort of fuzzy deduplication filter should be applied before using this dataset.
```
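For reference, each config in the updated YAML can be loaded by name with the Hugging Face `datasets` library. This is a minimal sketch: the repository id `user/wiki-dumps` is a placeholder, since the actual dataset name does not appear in this diff.

```python
# Minimal sketch of loading one of the configs defined above.
# "user/wiki-dumps" is a placeholder repository id; substitute the
# actual dataset name.
from datasets import load_dataset

# Each config (e.g. `wikiteam`, `wikimedia`) now points its `train`
# split at the v3 files: "wiki/archive/v3/documents/*.jsonl.gz".
ds = load_dataset("user/wiki-dumps", "wikiteam", split="train")
print(ds[0])
```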
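Since the note above recommends fuzzy deduplication before use, here is one possible sketch based on character-shingle Jaccard similarity. All names (`shingles`, `jaccard`, `fuzzy_dedup`) and the 0.9 threshold are illustrative choices, not part of the dataset's tooling; the pairwise comparison is O(n²), so at this corpus's scale a MinHash/LSH index would be the practical approach.

```python
# Illustrative fuzzy-dedup sketch: drop documents whose shingle sets are
# nearly identical to an already-kept document. Pairwise Jaccard is O(n^2);
# for a large corpus, use a MinHash/LSH index instead.
def shingles(text: str, k: int = 5) -> set[str]:
    """Character k-shingles of a whitespace-normalized, lowercased document."""
    text = " ".join(text.split()).lower()
    return {text[i:i + k] for i in range(max(len(text) - k + 1, 1))}

def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if (a or b) else 1.0

def fuzzy_dedup(docs: list[str], threshold: float = 0.9) -> list[str]:
    """Keep each document only if it is not a near-duplicate of one already kept."""
    kept: list[tuple[str, set[str]]] = []
    for doc in docs:
        sh = shingles(doc)
        if all(jaccard(sh, other) < threshold for _, other in kept):
            kept.append((doc, sh))
    return [doc for doc, _ in kept]
```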