---
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
pretty_name: All of Common Crawl News, 100+ languages, preprocessed and cleaned
task_categories:
- text-classification
- question-answering
- text-generation
- text2text-generation
size_categories:
- 100M<n<1B
tags:
- news
configs:
- config_name: "2016"
  data_files: "2016_part_00.jsonl.gz"
- config_name: "2017"
  data_files:
  - "2017_part_00.jsonl.gz"
  - "2017_part_01.jsonl.gz"
  - "2017_part_02.jsonl.gz"
  - "2017_part_03.jsonl.gz"
  - "2017_part_04.jsonl.gz"
  - "2017_part_05.jsonl.gz"
- config_name: "2018"
  data_files:
  - "2018_part_00.jsonl.gz"
  - "2018_part_01.jsonl.gz"
  - "2018_part_02.jsonl.gz"
  - "2018_part_03.jsonl.gz"
  - "2018_part_04.jsonl.gz"
  - "2018_part_05.jsonl.gz"
  - "2018_part_06.jsonl.gz"
  - "2018_part_07.jsonl.gz"
  - "2018_part_08.jsonl.gz"
- config_name: "2019"
  data_files:
  - "2019_part_00.jsonl.gz"
  - "2019_part_01.jsonl.gz"
  - "2019_part_02.jsonl.gz"
  - "2019_part_03.jsonl.gz"
  - "2019_part_04.jsonl.gz"
  - "2019_part_05.jsonl.gz"
  - "2019_part_06.jsonl.gz"
  - "2019_part_07.jsonl.gz"
  - "2019_part_08.jsonl.gz"
  - "2019_part_09.jsonl.gz"
  - "2019_part_10.jsonl.gz"
- config_name: "2020"
  data_files:
  - "2020_part_00.jsonl.gz"
  - "2020_part_01.jsonl.gz"
  - "2020_part_02.jsonl.gz"
  - "2020_part_03.jsonl.gz"
  - "2020_part_04.jsonl.gz"
  - "2020_part_05.jsonl.gz"
  - "2020_part_06.jsonl.gz"
  - "2020_part_07.jsonl.gz"
  - "2020_part_08.jsonl.gz"
  - "2020_part_09.jsonl.gz"
  - "2020_part_10.jsonl.gz"
  - "2020_part_11.jsonl.gz"
  - "2020_part_12.jsonl.gz"
  - "2020_part_13.jsonl.gz"
  - "2020_part_14.jsonl.gz"
  - "2020_part_15.jsonl.gz"
- config_name: "2021"
  data_files:
  - "2021_part_00.jsonl.gz"
  - "2021_part_01.jsonl.gz"
  - "2021_part_02.jsonl.gz"
  - "2021_part_03.jsonl.gz"
  - "2021_part_04.jsonl.gz"
  - "2021_part_05.jsonl.gz"
  - "2021_part_06.jsonl.gz"
  - "2021_part_07.jsonl.gz"
  - "2021_part_08.jsonl.gz"
  - "2021_part_09.jsonl.gz"
  - "2021_part_10.jsonl.gz"
  - "2021_part_11.jsonl.gz"
  - "2021_part_12.jsonl.gz"
  - "2021_part_13.jsonl.gz"
  - "2021_part_14.jsonl.gz"
  - "2021_part_15.jsonl.gz"
- config_name: "2022"
  data_files:
  - "2022_part_00.jsonl.gz"
  - "2022_part_01.jsonl.gz"
  - "2022_part_02.jsonl.gz"
  - "2022_part_03.jsonl.gz"
  - "2022_part_04.jsonl.gz"
  - "2022_part_05.jsonl.gz"
  - "2022_part_06.jsonl.gz"
  - "2022_part_07.jsonl.gz"
  - "2022_part_08.jsonl.gz"
  - "2022_part_09.jsonl.gz"
  - "2022_part_10.jsonl.gz"
  - "2022_part_11.jsonl.gz"
  - "2022_part_12.jsonl.gz"
  - "2022_part_13.jsonl.gz"
  - "2022_part_14.jsonl.gz"
  - "2022_part_15.jsonl.gz"
  - "2022_part_16.jsonl.gz"
- config_name: "2023"
  data_files:
  - "2023_part_00.jsonl.gz"
  - "2023_part_01.jsonl.gz"
  - "2023_part_02.jsonl.gz"
  - "2023_part_03.jsonl.gz"
  - "2023_part_04.jsonl.gz"
  - "2023_part_05.jsonl.gz"
  - "2023_part_06.jsonl.gz"
  - "2023_part_07.jsonl.gz"
  - "2023_part_08.jsonl.gz"
  - "2023_part_09.jsonl.gz"
  - "2023_part_10.jsonl.gz"
  - "2023_part_11.jsonl.gz"
  - "2023_part_12.jsonl.gz"
  - "2023_part_13.jsonl.gz"
  - "2023_part_14.jsonl.gz"
  - "2023_part_15.jsonl.gz"
- config_name: "2024"
  data_files:
  - "2024_part_00.jsonl.gz"
  - "2024_part_01.jsonl.gz"
  - "2024_part_02.jsonl.gz"
  - "2024_part_03.jsonl.gz"
  - "2024_part_04.jsonl.gz"
  - "2024_part_05.jsonl.gz"
  - "2024_part_06.jsonl.gz"
---
|
|
|
This dataset is the result of processing all WARC files in the [CCNews Corpus](https://commoncrawl.org/blog/news-dataset-available), from its beginning in 2016 through June 2024.

The data has been cleaned and deduplicated, and the language of each article has been detected and recorded. The process is similar to what Hugging Face's [DataTrove](https://github.com/huggingface/datatrove) does.

Overall, the dataset contains about 600 million news articles in more than 100 languages from all around the globe.
|
|
|
|
|
Sample Python code to explore this dataset:

```python
from datasets import load_dataset
from tqdm import tqdm

# Load the news articles crawled in 2016 (but not necessarily published in 2016), in streaming mode.
# `name` can be one of "2016", "2017", "2018", "2019", "2020", "2021", "2022", "2023", "2024".
dataset = load_dataset("stanford-oval/ccnews", name="2016", streaming=True)

# Print information about the dataset
print(dataset)

# Iterate over a few examples
print("\nFirst few examples:")
for i, example in enumerate(dataset["train"].take(5)):
    print(f"Example {i + 1}:")
    print(example)
    print()

# Count the number of articles crawled in 2016
row_count = 0
for _ in tqdm(dataset["train"], desc="Counting rows", unit=" rows", unit_scale=True, unit_divisor=1000):
    row_count += 1

print(f"\nTotal number of articles: {row_count}")
```
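
Since each article carries a detected language, you can restrict a stream to a single language with a simple generator filter. The field name `language` below is an assumption — print one record from the dataset (as in the loop above) to confirm the actual key in the schema. The in-memory `sample` list is a stand-in for the streaming dataset so the sketch is self-contained:

```python
from itertools import islice


def filter_by_language(records, lang, field="language"):
    """Yield only records whose detected language matches `lang`.

    NOTE: the field name "language" is an assumption; inspect one
    record from the real dataset to confirm the actual key.
    """
    return (r for r in records if r.get(field) == lang)


# Small in-memory sample standing in for the streaming dataset.
sample = [
    {"title": "Hallo", "language": "de"},
    {"title": "Hello", "language": "en"},
    {"title": "Bonjour", "language": "fr"},
    {"title": "Hi", "language": "en"},
]

# Take at most 10 English records; islice keeps this lazy, which matters
# when the input is a large streaming dataset rather than a list.
english = list(islice(filter_by_language(sample, "en"), 10))
print([r["title"] for r in english])  # → ['Hello', 'Hi']

# With the real dataset (streaming), the same generator applies:
# dataset = load_dataset("stanford-oval/ccnews", name="2016", streaming=True)
# english_articles = filter_by_language(dataset["train"], "en")
```

Because the filter is a generator, it composes with the streaming dataset without downloading or materializing a whole year of data.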