Tonic committed on
Commit 7046435 · verified · 1 Parent(s): 0168afe

Migrated from GitHub

data/README.md ADDED
@@ -0,0 +1,35 @@
+ # bbcfw
+
+ Exploring the BBC News subset of the FineWeb dataset (via the dated subsets of HuggingFaceFW/fineweb on the HF Hub),
+ originally a Common Crawl dataset. Iterating on a previous use of the C4 dataset [here](https://github.com/lmmx/bbcc4).
+
+ ## Speed benchmarking
+
+ The sample 10BT subset (14 files, each 2.15 GB = 30.1 GB; only 3 columns loaded: url, text, language):
+
+ - Each file takes about 45 seconds.
+ - There are ~25,000 files in the other ~100 non-sample subsets, which suggests an estimated ~13 days of processing time.
+
+ ```
+ 0it [00:00, ?it/s]Processing hf://datasets/HuggingFaceFW/fineweb/sample/10BT/000_00000.parquet
+ 1it [00:41, 41.21s/it]Processing hf://datasets/HuggingFaceFW/fineweb/sample/10BT/001_00000.parquet
+ 2it [01:26, 43.58s/it]Processing hf://datasets/HuggingFaceFW/fineweb/sample/10BT/002_00000.parquet
+ 3it [02:10, 43.65s/it]Processing hf://datasets/HuggingFaceFW/fineweb/sample/10BT/003_00000.parquet
+ 4it [02:52, 43.32s/it]Processing hf://datasets/HuggingFaceFW/fineweb/sample/10BT/004_00000.parquet
+ 5it [03:40, 44.68s/it]Processing hf://datasets/HuggingFaceFW/fineweb/sample/10BT/005_00000.parquet
+ 6it [04:26, 45.31s/it]Processing hf://datasets/HuggingFaceFW/fineweb/sample/10BT/006_00000.parquet
+ 7it [05:11, 45.06s/it]Processing hf://datasets/HuggingFaceFW/fineweb/sample/10BT/007_00000.parquet
+ 8it [05:53, 44.24s/it]Processing hf://datasets/HuggingFaceFW/fineweb/sample/10BT/008_00000.parquet
+ 9it [06:41, 45.47s/it]Processing hf://datasets/HuggingFaceFW/fineweb/sample/10BT/009_00000.parquet
+ 10it [07:25, 44.95s/it]Processing hf://datasets/HuggingFaceFW/fineweb/sample/10BT/010_00000.parquet
+ 11it [08:08, 44.41s/it]Processing hf://datasets/HuggingFaceFW/fineweb/sample/10BT/011_00000.parquet
+ 12it [09:04, 47.78s/it]Processing hf://datasets/HuggingFaceFW/fineweb/sample/10BT/012_00000.parquet
+ 13it [09:47, 46.53s/it]Processing hf://datasets/HuggingFaceFW/fineweb/sample/10BT/013_00000.parquet
+ 14it [10:32, 46.04s/it]Processing hf://datasets/HuggingFaceFW/fineweb/sample/10BT/014_00000.parquet
+ 15it [10:54, 43.61s/it]
+ Creating parquet from Arrow format: 100%|█████████████████████████████████████| 10/10 [00:00<00:00, 120.79ba/s]
+ Uploading the dataset shards: 100%|██████████████████████████████████████████████| 1/1 [00:02<00:00, 2.15s/it]
+ ```
+
+ - A LazyFrame pipeline with `scan_parquet`/`sink_parquet` seems to make this marginally faster (not tested
+ extensively); I decided to use it regardless, as it should reduce the memory load. A sketch of the pattern follows below.
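A minimal sketch of that lazy pattern, reusing one of the sample shards from the log above (the output filename is just an illustrative placeholder):

```python
import polars as pl

# Lazily scan a single FineWeb shard, keep only the needed columns and rows,
# and stream the result to a local parquet file without materialising the
# full frame in memory. The output path "000_00000_en.parquet" is hypothetical.
source = "hf://datasets/HuggingFaceFW/fineweb/sample/10BT/000_00000.parquet"

(
    pl.scan_parquet(source)                # LazyFrame: nothing is read yet
    .select("url", "text", "language")     # only the 3 columns of interest
    .filter(pl.col("language") == "en")
    .sink_parquet("000_00000_en.parquet")  # execute and stream to disk
)
```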
data/dataset_card.md ADDED
@@ -0,0 +1,94 @@
+ ---
+ # TBC
+ license: odc-by
+ language:
+ - en
+ pretty_name: BBC News from FineWeb
+ size_categories:
+ - 10K<n<100K
+ ---
+
+ # Dataset Card for BBC News from FineWeb
+
+ This dataset provides a filtered subset of BBC News articles from the dated (CC-MAIN) subsets of the FineWeb dataset, containing approximately 77k articles from BBC News domains.
+
+ ## Dataset Details
+
+ ### Dataset Description
+
+ - **Curated by:** Louis Maddox (@permutans on HuggingFace and X/Twitter)
+ - **License:** ODC-BY (inherited from FineWeb)
+ - **Language:** English
+
+ ### Dataset Sources
+ - **Repository:** https://huggingface.co/datasets/permutans/fineweb-bbc-news
+ - **Source Dataset:** HuggingFaceFW/fineweb
+ - **Paper:** https://arxiv.org/abs/2406.17557 (FineWeb paper)
+
+ ## Uses
+
+ ### Direct Use
+ Suitable for text analysis and NLP tasks focused on news content, particularly when working with BBC News articles. The dataset provides cleaned article text without metadata like bylines or publication dates.
+
+ ### Out-of-Scope Use
+ This dataset should not be used as a comprehensive archive of BBC News content, as it represents only articles captured in FineWeb's crawls (Common Crawl snapshots from 2013 to 2024). It should not be assumed to contain all articles from any given time period.
+
+ ## Dataset Structure
+
+ ### Data Instances
+ Example format:
+ ```python
+ {
+     'url': 'news.bbc.co.uk/news/article-path',
+     'text': 'Article content...'
+ }
+ ```
+
+ ### Data Fields
+ - `url`: URL of the article with query parameters removed
+ - `text`: Full article text content
+
+ ### Data Statistics
+ - Contains approximately 77k articles
+ - No validation split in current version
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+ Created to provide an easily accessible dataset of BBC News articles while offering a focused view into the FineWeb dataset's coverage of major news sources. Enables analysis of FineWeb's completeness and motivates investigation of alternative data acquisition methods.
+
+ ### Source Data
+ #### Data Collection and Processing
+ - Filtered from FineWeb's dated subsets (i.e. not the default subset nor the sample subsets)
+ - Limited to domains: news.bbc.co.uk, www.bbc.co.uk/news, www.bbc.com/news
+ - URL cleaning: removed query parameters
+ - Regional news content excluded due to sparse coverage in source data
+ - No modifications to article text content
+
+ #### Personal and Sensitive Information
+ Article texts contain only the main content body, without bylines or metadata.
+
+ ## Bias, Risks, and Limitations
+
+ - No validation split in current version
+ - Original publication dates not available (FineWeb timestamps were crawl dates)
+ - Section/index pages not yet filtered out from article pages
+ - Regional news content explicitly excluded due to sparse coverage
+ - Relationship between the news.bbc.co.uk and bbc.co.uk/news domains needs investigation
+ - Coverage may be incomplete compared to the full BBC News archive
+
+ ### Recommendations
+ Users should be aware that this represents a subset of BBC News content, which appears to be largely from around 2019 and earlier. For applications requiring comprehensive coverage or accurate publication dates, additional data sources should be considered.
+
+ ## Future Directions
+ - Potential expansion using the FineWeb dataset for more recent content
+ - Addition of publication dates through targeted crawling
+ - Filtering to distinguish between section pages and article pages
+ - Creation of a validation split
+
+ ## Citation
+ Please cite the original FineWeb dataset when using this data. A reference to this one would be welcome but not necessary; I consider this a derivative work.
+
+ ## Dataset Card Authors
+ Louis Maddox (@permutans)
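A short usage sketch (assuming the per-subset config layout produced by the processing script in this commit, where config names mirror the FineWeb source subsets and the only split is `train`):

```python
from datasets import get_dataset_config_names, load_dataset

# List the available subsets, then load one; picking configs[0] is arbitrary.
configs = get_dataset_config_names("permutans/fineweb-bbc-news")
ds = load_dataset("permutans/fineweb-bbc-news", configs[0], split="train")

print(ds.column_names)  # expected: ['url', 'text']
print(ds[0]["url"])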
data/pyproject.toml ADDED
@@ -0,0 +1,23 @@
+ [project]
+ authors = [
+     { name = "Louis Maddox", email = "[email protected]" },
+ ]
+ name = "bbcfw"
+ version = "0.1.0"
+ description = "BBC News specific FineWeb dataset processing"
+ readme = "README.md"
+ requires-python = ">=3.12"
+ dependencies = [
+     "datasets>=3.2.0",
+     "huggingface-hub>=0.27.0",
+     "polars>=1.19.0",
+     "tqdm>=4.67.1",
+ ]
+
+ [build-system]
+ requires = ["pdm-backend"]
+ build-backend = "pdm.backend"
+
+ [project.license]
+ text = "MIT"
+
data/src/bbcfw/__init__.py ADDED
File without changes
data/src/bbcfw/core/__init__.py ADDED
File without changes
data/src/bbcfw/core/caching.py ADDED
@@ -0,0 +1,20 @@
+ import base64
+ import tempfile
+ from pathlib import Path
+
+
+ def cache_name(url: str) -> str:
+     return base64.urlsafe_b64encode(url.encode()).decode().rstrip("=") + ".parquet"
+
+
+ def make_cache_path(url: str, cache_dir: Path) -> Path:
+     return cache_dir / cache_name(url=url)
+
+
+ def mktemp_cache_dir(id_path: str) -> Path:
+     """Make a temporary directory (deleted upon reboot, so short-term persistent).
+     `id_path` is a path that may contain slashes, which are replaced with underscores.
+     """
+     cache_dir = Path(tempfile.gettempdir()) / id_path.replace("/", "_")
+     cache_dir.mkdir(exist_ok=True)
+     return cache_dir
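A brief usage sketch of these helpers (the shard URL reuses the one from the benchmark log above; the printed values depend on your temp directory):

```python
from bbcfw.core.caching import cache_name, make_cache_path, mktemp_cache_dir

# Per-dataset temp directory, e.g. /tmp/HuggingFaceFW_fineweb on Linux, plus a
# filesystem-safe parquet filename derived from the source URL.
cache_dir = mktemp_cache_dir(id_path="HuggingFaceFW/fineweb")
chunk_url = "hf://datasets/HuggingFaceFW/fineweb/sample/10BT/000_00000.parquet"

print(cache_name(chunk_url))                            # URL-safe base64, ends in .parquet
print(make_cache_path(chunk_url, cache_dir=cache_dir))  # full path inside the cache dir
```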
data/src/bbcfw/core/configs.py ADDED
@@ -0,0 +1,41 @@
+ from functools import partial
+
+ import polars as pl
+ from datasets import load_dataset_builder
+ from huggingface_hub import list_repo_files
+
+ from bbcfw.core.caching import make_cache_path, mktemp_cache_dir
+
+
+ def map_file_configs(dataset_id: str) -> pl.DataFrame:
+     """Map every file to a config (subset)."""
+     builder_configs = dict(load_dataset_builder(dataset_id).builder_configs)
+     del builder_configs["default"]  # Overlaps data/* configs, the rest are all disjoint
+     # Check that there's only 1 split per config (the train split), with 1 path pattern
+     assert set(len(v.data_files) for v in builder_configs.values()) == {1}
+     assert set(len(v.data_files["train"]) for v in builder_configs.values()) == {1}
+     cfg2path = pl.DataFrame(
+         [
+             {
+                 "config_name": cfg_name,
+                 "path": builder_configs[cfg_name].data_files["train"][0],
+             }
+             for cfg_name in builder_configs
+         ]
+     ).with_columns(pl.col("path").str.strip_suffix("/*"))
+     source_files = (
+         (
+             pl.DataFrame(
+                 {"name": pl.Series(list_repo_files(dataset_id, repo_type="dataset"))}
+             )
+             .with_columns(
+                 # Keep only filenames which are 2 levels deep (2nd subpath = the config name)
+                 path=pl.col("name").str.extract(r"([^/]*/[^/]*)/"),
+             )
+             .drop_nulls()
+             .sort("name")
+         )
+         .join(cfg2path, on="path")
+         .drop("path")
+     )
+     return source_files
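For orientation, a sketch of calling this function and the shape it returns (requires network access to the Hub; the column names follow from the code above):

```python
from bbcfw.core.configs import map_file_configs

# One row per data file in the source repo: "name" is the repo-relative file
# path, "config_name" is the subset that file belongs to.
source_files = map_file_configs(dataset_id="HuggingFaceFW/fineweb")
print(source_files.columns)  # ['name', 'config_name']
print(source_files.head())
```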
data/src/bbcfw/core/filters.py ADDED
@@ -0,0 +1,3 @@
+ domain_capture = r"https?://([^/?]+)"
+ subpage_capture = r"https?://[^/]+(\/[^/?]+\/)"  # Include pre/suffix slashes
+ domain_match = r"^(news\.bbc\.co\.uk|www\.bbc\.co\.uk|www\.bbc\.com)$"
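A small illustration of how the main script combines these patterns (the example URLs are made up for demonstration):

```python
import re

from bbcfw.core.filters import domain_capture, domain_match, subpage_capture

for url in [
    "https://www.bbc.co.uk/news/uk-12345678",      # hypothetical news article URL
    "https://www.bbc.com/sport/football/8765432",  # hypothetical non-news URL
]:
    domain = re.search(domain_capture, url).group(1)
    subpage = re.search(subpage_capture, url).group(1)
    is_bbc = bool(re.match(domain_match, domain))
    keep = is_bbc and ("news" in domain or subpage == "/news/")
    print(f"{url} -> domain={domain} subpage={subpage} keep={keep}")
```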
data/src/bbcfw/main.py ADDED
@@ -0,0 +1,126 @@
+ from functools import partial
+ from pathlib import Path
+
+ import polars as pl
+ from datasets import Dataset, get_dataset_config_names, load_dataset_builder
+ from datasets.exceptions import DatasetNotFoundError
+ from huggingface_hub import login
+ from tqdm import tqdm
+
+ from bbcfw.core.caching import make_cache_path, mktemp_cache_dir
+ from bbcfw.core.configs import map_file_configs
+ from bbcfw.core.filters import domain_capture, domain_match, subpage_capture
+
+ # 1) Log into HuggingFace, name the datasets we'll ingest and produce
+
+ login(new_session=False)  # Will prompt for your token or use cached token
+
+ dataset_id = "HuggingFaceFW/fineweb"
+ dataset_id_slug = dataset_id.replace("/", "_")
+ username = "permutans"
+ result_dataset_name = "fineweb-bbc-news"
+ result_dataset_id = f"{username}/{result_dataset_name}"
+
+ # 2) Make a directory to cache our transformations of the entire dataset (all subsets)
+
+ cache_dir = mktemp_cache_dir(id_path=dataset_id)
+ dataset_cache_path = partial(make_cache_path, cache_dir=cache_dir)
+
+ parquet_cache_names = cache_dir / f"{dataset_id_slug}_filenames.parquet"
+
+ if parquet_cache_names.exists():
+     source_files = pl.read_parquet(parquet_cache_names)
+ else:
+     source_files = map_file_configs(dataset_id=dataset_id)
+     source_files.write_parquet(parquet_cache_names)
+
+ fwnews_features = {feat_name: pl.String for feat_name in "url text".split()}
+ aggregator = pl.DataFrame(schema=fwnews_features)
+
+ domain_col = pl.col("url").str.extract(domain_capture)
+ path_col = pl.col("url").str.extract(subpage_capture)
+
+ config_names = source_files["config_name"].unique().sort()
+
+
+ def ds_subset_exists(dataset_id: str, subset_name: str) -> bool:
+     """Check that the dataset exists, and if so whether the config name is in it."""
+     try:
+         configs = get_dataset_config_names(dataset_id)
+     except DatasetNotFoundError:
+         print(f"The dataset {dataset_id} was not found.")
+         return False
+     else:
+         return subset_name in list(configs)
+
+
+ def process_all_subsets(reverse: bool = False):
+     for subset_name in tqdm(config_names[::-1] if reverse else config_names):
+         try:
+             # Skip any existing subsets entirely
+             if ds_subset_exists(dataset_id=result_dataset_id, subset_name=subset_name):
+                 print(f"Skipping {subset_name} as it exists")
+                 continue
+             else:
+                 print(f"The subset {subset_name} doesn't exist, creating it")
+             hf_urls = source_files.filter(pl.col("config_name") == subset_name).select(
+                 url=f"hf://datasets/{dataset_id}/" + pl.col("name")
+             )
+             pq_caches = []
+
+             def process_subset_chunk(source_url: str) -> Path:
+                 parquet_cache_chunk = dataset_cache_path(source_url)
+                 if parquet_cache_chunk.exists():
+                     try:
+                         news_df = pl.read_parquet(parquet_cache_chunk)
+                     except Exception:
+                         print(f"Failed to read {parquet_cache_chunk}")
+                         raise
+                 else:
+                     print(f"\nProcessing {source_url}")
+                     # Drop query parameters if ? in URL, drop any non-BBC News domain URLs
+                     news_df = (
+                         pl.scan_parquet(source_url, parallel="prefiltered")
+                         .select("url", "text", "language")
+                         .filter(pl.col("language") == "en")
+                         .select(pl.col("url").str.extract(r"([^?]+)"), "text")
+                         .filter(
+                             domain_col.str.contains(domain_match),
+                             ~pl.col("url").str.contains(
+                                 r"https?://[^/]+\/\?"
+                             ),  # Path is not `/?`
+                         )
+                         .filter(
+                             domain_col.str.contains("news").or_(path_col == "/news/")
+                         )
+                     )
+                     news_df.sink_parquet(parquet_cache_chunk)
+                 return parquet_cache_chunk
+
+             for url in tqdm(list(hf_urls["url"])):
+                 parquet_cache_chunk = process_subset_chunk(url)
+                 pq_caches.append(parquet_cache_chunk)
+
+             # Reload once all parts completed and upload
+             aggregator = pl.read_parquet(pq_caches)
+
+             news_data = aggregator.to_dict(as_series=False)
+             news_dataset = Dataset.from_dict(news_data)
+             news_dataset.push_to_hub(
+                 result_dataset_id,
+                 config_name=subset_name,
+                 private=False,
+             )
+         except KeyboardInterrupt:
+             print("\nGracefully shutting down - current subset was not completed")
+             return  # Exit cleanly
+         except Exception as e:
+             print(f"\nError processing subset {subset_name}: {str(e)}")
+             continue  # Skip to next subset
+
+
+ if __name__ == "__main__":
+     try:
+         process_all_subsets()
+     except KeyboardInterrupt:
+         print("\nShutting down...")
data/src/bbcfw/old/bbc_news_main_subpath_only.py ADDED
@@ -0,0 +1,76 @@
+ from pprint import pprint
+ import tempfile
+
+ from pathlib import Path
+ import polars as pl
+ from huggingface_hub import hf_hub_url, list_repo_files
+ from tqdm import tqdm
+ import base64
+ from datasets import Dataset
+ from huggingface_hub import login
+
+ login(new_session=False)  # Will prompt for your token or use cached token
+
+ cache_dir = Path(tempfile.gettempdir()) / "allenai_c4"
+ cache_dir.mkdir(exist_ok=True)
+
+ def cache_name(url: str) -> str:
+     return base64.urlsafe_b64encode(url.encode()).decode().rstrip("=") + ".parquet"
+
+ def cache_path(url: str, cache_dir=cache_dir) -> Path:
+     return cache_dir / cache_name(url=url)
+
+ parquet_cache_names = cache_dir / "realnewslike_filenames.parquet"
+ if parquet_cache_names.exists():
+     news_files = pl.read_parquet(parquet_cache_names)["filename"]
+ else:
+     file_names = pl.Series(list_repo_files("allenai/c4", repo_type="dataset"))
+     # Take all splits of the realnewslike subset (513 files)
+     news_files = file_names.filter(
+         file_names.str.starts_with("realnewslike/") & file_names.str.ends_with(".json.gz"),
+     ).str.strip_prefix("realnewslike/")
+     pl.DataFrame({"filename": news_files}).write_parquet(parquet_cache_names)
+
+ c4n_features = {"url": pl.String, "text": pl.String}
+ aggregator = pl.DataFrame(schema=c4n_features)
+
+ domain_capture = r"https?://([^/?]+)"
+ subpage_capture = r"https?://[^/]+(\/[^/?]+\/)"  # Include pre/suffix slashes
+ domain_match = r"^(news\.bbc\.co\.uk|www\.bbc\.co\.uk|www\.bbc\.com)$"
+ domain_col = pl.col("url").str.extract(domain_capture)
+ path_col = pl.col("url").str.extract(subpage_capture)
+
+ hf_urls = [
+     hf_hub_url(
+         repo_id="allenai/c4",
+         filename=filename,
+         subfolder="realnewslike",
+         repo_type="dataset",
+     )
+     for filename in news_files
+ ]
+ pq_caches = list(map(cache_path, hf_urls))
+
+ for json_url, parquet_cache_chunk in tqdm(zip(hf_urls, pq_caches)):
+     if parquet_cache_chunk.exists():
+         news_df = pl.read_parquet(parquet_cache_chunk)
+     else:
+         print(f"Processing {json_url}")
+         df = (
+             pl.read_ndjson(json_url, schema=c4n_features)
+             .with_columns(pl.col("url").str.extract(r"([^?]+)"))
+             .filter(
+                 domain_col.str.contains(domain_match),
+                 ~pl.col("url").str.contains(r"https?://[^/]+\/\?"),  # Path is not `/?`
+             )
+         )
+         # Just match the "/news/" path here
+         news_df = df.filter(domain_col.str.contains("news").or_(path_col == "/news/"))
+         news_df.write_parquet(parquet_cache_chunk)
+
+ # Reload once all parts completed and upload
+ aggregator = pl.read_parquet(pq_caches)
+
+ news_data = aggregator.to_dict(as_series=False)
+ news_dataset = Dataset.from_dict(news_data)
+ news_dataset.push_to_hub("permutans/c4-bbc-news", config_name="realnewslike-bbc-news", private=False)
data/src/bbcfw/old/bbc_news_urls.py ADDED
@@ -0,0 +1,98 @@
+ from pprint import pprint
+
+ import polars as pl
+ from huggingface_hub import hf_hub_url, list_repo_files
+ from tqdm import tqdm
+
+ file_names = pl.Series(list_repo_files("allenai/c4", repo_type="dataset"))
+ # Take all splits of the realnewslike subset (513 files)
+ news_files = file_names.filter(
+     file_names.str.starts_with("realnewslike/") & file_names.str.ends_with(".json.gz"),
+ ).str.strip_prefix("realnewslike/")
+
+ c4n_features = {"url": pl.String, "text": pl.String}
+ aggregator = pl.DataFrame(schema=c4n_features)
+
+ domain_capture = r"https?://([^/?]+)"
+ # subpage_capture = r"https?://[^/]+/([^/?]+)"
+ subpage_capture = r"https?://[^/]+(\/[^/?]+\/)"  # Include pre/suffix slashes
+ url_match = r"^(news\.bbc\.co\.uk|www\.bbc\.co\.uk|www\.bbc\.com)$"
+ news_subpages = ["news"]  # Blogs are the 2nd largest category but still far smaller
+ regions = [
+     "berkshire",
+     "birmingham",
+     "blackcountry",
+     "bradford",
+     "bristol",
+     "cambridgeshire",
+     "chelsea",
+     "cornwall",
+     "coventry",
+     "cumbria",
+     "derby",
+     "devon",
+     "dorset",
+     "england",
+     "essex",
+     "gloucestershire",
+     "guernsey",
+     "hampshire",
+     "herefordandworcester",
+     "humber",
+     "isleofman",
+     "jersey",
+     "kent",
+     "lancashire",
+     "leeds",
+     "leicester",
+     "lincolnshire",
+     "liverpool",
+     "london",
+     "manchester",
+     "norfolk",
+     "northamptonshire",
+     "northernireland",
+     "nottingham",
+     "oxford",
+     "readingandleeds",
+     "scotland",
+     "shropshire",
+     "somerset",
+     "southampton",
+     "southyorkshire",
+     "stoke",
+     "suffolk",
+     "tees",
+     "tyne",
+     "wales",
+     "wiltshire",
+ ]
+ allowed_subpages = pl.DataFrame({"path": map("/{}/".format, news_subpages + regions)})
+ path_col = pl.col("url").str.extract(subpage_capture).alias("path")
+
+ for filename in tqdm(news_files):
+     json_url = hf_hub_url(
+         repo_id="allenai/c4",
+         filename=filename,
+         subfolder="realnewslike",
+         repo_type="dataset",
+     )
+     print(f"Processing {json_url}")
+     df = pl.read_ndjson(json_url, schema=c4n_features).filter(
+         pl.col("url").str.extract(domain_capture).str.contains(url_match),
+         ~pl.col("url").str.contains(r"https?://[^/]+\/\?"),  # Path is not `/?`
+     )
+     news_df = (
+         df.with_columns(path_col)
+         .sort("path")
+         .join(allowed_subpages, on="path")
+         .drop("path")
+     )
+     aggregator = pl.concat([aggregator, news_df])
+     print(aggregator)
+
+ with pl.Config() as cfg:
+     cfg.set_tbl_rows(-1)
+     aggregator.with_columns(path_col)["path"].value_counts().sort(
+         "count", descending=True
+     ).with_row_index().pipe(print)
data/src/bbcfw/old/bbc_urls.py ADDED
@@ -0,0 +1,36 @@
+ from pprint import pprint
+
+ import polars as pl
+ from huggingface_hub import hf_hub_url, list_repo_files
+ from tqdm import tqdm
+
+ file_names = pl.Series(list_repo_files("allenai/c4", repo_type="dataset"))
+ # Take all splits of the realnewslike subset (513 files)
+ news_files = file_names.filter(
+     file_names.str.starts_with("realnewslike/") & file_names.str.ends_with(".json.gz"),
+ ).str.strip_prefix("realnewslike/")
+
+ c4n_features = {"url": pl.String, "text": pl.String}
+ aggregator = pl.DataFrame(schema=c4n_features)
+
+ domain_capture = r"https?://([^/?]+)"
+ url_match = r"^(news\.bbc\.co\.uk|www\.bbc\.co\.uk|www\.bbc\.com)$"
+
+ for filename in tqdm(news_files):
+     json_url = hf_hub_url(
+         repo_id="allenai/c4",
+         filename=filename,
+         subfolder="realnewslike",
+         repo_type="dataset",
+     )
+     print(f"Processing {json_url}")
+     df = pl.read_ndjson(json_url, schema=c4n_features).filter(
+         pl.col("url").str.extract(domain_capture).str.contains(url_match),
+         ~pl.col("url").str.contains("/sport/"),
+     )
+     aggregator = pl.concat([aggregator, df])
+     print(aggregator)
+
+ # Print all domains
+ print("Domains:", end=" ")
+ pprint(aggregator.sort("url")["url"].str.extract(domain_capture).unique().to_list())
data/src/bbcfw/old/compare_dataset.py ADDED
@@ -0,0 +1,8 @@
+ from datasets import load_dataset
+ from huggingface_hub import login
+
+ login(new_session=False)  # Will prompt for your token or use cached token
+
+ dataset = load_dataset(
+     "permutans/bbc-news-dataset", name="2025-04"
+ )
data/src/bbcfw/old/date_top_tail.py ADDED
@@ -0,0 +1,22 @@
+ import polars as pl
+ from huggingface_hub import hf_hub_url, list_repo_files
+ from tqdm import tqdm
+
+ file_names = pl.Series(list_repo_files("allenai/c4", repo_type="dataset"))
+ # Take all splits of the realnewslike subset (513 files)
+ news_files = file_names.filter(
+     file_names.str.starts_with("realnewslike/") & file_names.str.ends_with(".json.gz"),
+ ).str.strip_prefix("realnewslike/")
+
+ features = {"timestamp": pl.Datetime, "url": pl.String}
+ aggregator = pl.DataFrame(schema=features)
+ for filename in tqdm(news_files):
+     json_url = hf_hub_url(
+         repo_id="allenai/c4",
+         filename=filename,
+         subfolder="realnewslike",
+         repo_type="dataset",
+     )
+     df = pl.read_ndjson(json_url, schema=features).sort("timestamp")
+     aggregator = pl.concat([aggregator, df.head(1), df.tail(1)])
+     print(aggregator.shape)
data/src/bbcfw/old/date_year_agg.py ADDED
@@ -0,0 +1,27 @@
+ import polars as pl
+ from huggingface_hub import hf_hub_url, list_repo_files
+ from tqdm import tqdm
+
+ file_names = pl.Series(list_repo_files("allenai/c4", repo_type="dataset"))
+ # Take all splits of the realnewslike subset (513 files)
+ news_files = file_names.filter(
+     file_names.str.starts_with("realnewslike/") & file_names.str.ends_with(".json.gz"),
+ ).str.strip_prefix("realnewslike/")
+
+ c4n_features = {"timestamp": pl.Datetime, "url": pl.String}
+ aggregator = pl.DataFrame()
+ for filename in tqdm(news_files):
+     json_url = hf_hub_url(
+         repo_id="allenai/c4",
+         filename=filename,
+         subfolder="realnewslike",
+         repo_type="dataset",
+     )
+     print(f"Processing {json_url}")
+     df = pl.read_ndjson(json_url, schema=c4n_features).sort("timestamp")
+     yearly = df.group_by(pl.col("timestamp").dt.year()).agg(
+         pl.count("url").alias("count")
+     )
+     y_pivot = pl.DataFrame({str(year): count for year, count in yearly.rows()})
+     aggregator = pl.concat([aggregator, y_pivot], how="diagonal").sum()
+     print(aggregator)
data/src/bbcfw/old/delete_dataset.py ADDED
@@ -0,0 +1,3 @@
+ import huggingface_hub
+
+ huggingface_hub.delete_repo(repo_id="permutans/fineweb-bbc-news", repo_type="dataset")
data/src/bbcfw/old/load_dataframe_parquet.py ADDED
@@ -0,0 +1,6 @@
+ import polars as pl
+
+ df = pl.read_parquet(
+     "hf://datasets/allenai/c4@~parquet/realnewslike/partial-train/*.parquet",
+     columns=["timestamp", "url"],
+ )
data/src/bbcfw/old/upload_dataset.py ADDED
@@ -0,0 +1,33 @@
+ from datasets import Dataset
+ from huggingface_hub import login
+
+ login(new_session=False)  # Will prompt for your token or use cached token
+
+ # Sample BBC news articles (replace with your actual data)
+ news_data = {
+     "text": [
+         "BBC article content here...",
+         "Another BBC article...",
+         "And one more BBC article...",
+     ],
+     "title": [
+         "1st BBC Article",
+         "2nd BBC Article",
+         "3rd BBC Article",
+     ],
+     "date": [
+         "2025-01-04",
+         "2025-01-04",
+         "2025-01-01",
+     ],
+     "url": [
+         "https://bbc.co.uk/news/1",
+         "https://bbc.co.uk/news/2",
+         "https://bbc.co.uk/news/3",
+     ],
+ }
+
+ dataset = Dataset.from_dict(news_data)
+ dataset.push_to_hub(
+     "permutans/bbc-news-dataset-test", config_name="2025-01", private=False
+ )
data/uv.lock ADDED
The diff for this file is too large to render. See raw diff