Set `sep="\s+"` for the duplicates file
#1 opened by lhoestq (HF staff)
This PR fixes the dataset viewer for the `Duplicates` config.
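For context, the underlying statistics file is whitespace-delimited, with columns padded for alignment, and the `sep` option in the dataset's CSV config is forwarded to `pandas.read_csv` by the `datasets` CSV builder. A minimal sketch of an equivalent direct load (the local file name is assumed for illustration):

from datasets import load_dataset

# Parse the whitespace-aligned duplicates table with the generic CSV builder;
# `sep` is passed through to pandas.read_csv ("duplicates.txt" is an assumed file name).
ds = load_dataset("csv", data_files="duplicates.txt", sep=r"\s+")

With this change, loading the `Duplicates` config from the PR revision yields the expected seven columns: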
>>> from datasets import load_dataset
>>> load_dataset("commoncrawl/statistics", "Duplicates", revision="refs/pr/1")["train"].to_pandas()
Downloading readme: 100%|██████████| 1.11k/1.11k [00:00<00:00, 460kB/s]
Downloading data: 100%|██████████| 9.49k/9.49k [00:00<00:00, 43.8kB/s]
Generating train split: 100 examples [00:00, 11343.01 examples/s]
id crawl page url digest estim. 1-(urls/pages) 1-(digests/pages)
0 0 CC-MAIN-2008-2009 1798158091 1790932667 1804803498 0.4% -0.4%
1 1 CC-MAIN-2009-2010 2863495211 2301135881 2631454016 19.6% 8.1%
2 2 CC-MAIN-2012 3828539877 3597338329 3472132880 6.0% 9.3%
3 3 CC-MAIN-2013-20 1796098643 1666857706 1675186145 7.2% 6.7%
4 4 CC-MAIN-2013-48 2245773667 2085501361 2123908635 7.1% 5.4%
.. .. ... ... ... ... ... ...
95 95 CC-MAIN-2023-40 3445015037 3419001876 3398196830 0.8% 1.4%
96 96 CC-MAIN-2023-50 3354042124 3327873282 3296666094 0.8% 1.7%
97 97 CC-MAIN-2024-10 3106525566 3081216032 3005351878 0.8% 3.3%
98 98 CC-MAIN-2024-18 2786800057 2768587136 2736902585 0.7% 1.8%
99 99 CC-MAIN-2024-22 2709877975 2692942753 2639673318 0.6% 2.6%
[100 rows x 7 columns]
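The switch from `sep=" "` to `sep="\s+"` matters because columns in the file are padded with runs of spaces: a single-space separator turns each extra space into an empty field, while the regex `\s+` treats any whitespace run as one delimiter. A small self-contained check with pandas (the sample rows below are invented for illustration):

import pandas as pd
from io import StringIO

# Two whitespace-aligned lines mimicking the layout of the duplicates file
# (the values are made up for this example).
sample = (
    "crawl              page        url\n"
    "CC-MAIN-2008-2009  1798158091  1790932667\n"
)

wrong = pd.read_csv(StringIO(sample), sep=" ")     # extra spaces become spurious "Unnamed" columns
right = pd.read_csv(StringIO(sample), sep=r"\s+")  # each whitespace run is a single delimiter
print(len(wrong.columns), len(right.columns))      # many columns vs. the expected 3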
lhoestq changed pull request title from Set `sep=" "` for the duplicates file to Set `sep="\s+"` for the duplicates file
Thanks for the fix!
pjox changed pull request status to merged