---
dataset_info:
  features:
  - name: date
    dtype: string
  - name: headline
    dtype: string
  - name: short_headline
    dtype: string
  - name: short_text
    dtype: string
  - name: article
    dtype: string
  - name: link
    dtype: string
  splits:
  - name: train
    num_bytes: 107545823
    num_examples: 21847
  download_size: 63956047
  dataset_size: 107545823
language:
- de
size_categories:
- 10K<n<100K
---

# Tagesschau Archive Article Dataset

A scrape of Tagesschau.de articles from 01.01.2018 to 26.04.2023. Find all source code in [github.com/bjoernpl/tagesschau](https://github.com/bjoernpl/tagesschau).

## Dataset Information
CSV structure:

| Field | Description |
| --- | --- |
| `date` | Date of the article |
| `headline` | Title of the article |
| `short_headline` | A short headline / Context |
| `short_text` | A brief summary of the article |
| `article` | The full text of the article |
| `link` | The URL of the article on tagesschau.de |
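The fields can be read as regular CSV columns. A minimal sketch using pandas on an inline one-row sample (the row contents and the use of an in-memory buffer are illustrative; for the real data you would pass the downloaded CSV's path to `read_csv`):

```python
import io
import pandas as pd

# Illustrative sample mirroring the column layout described above.
sample_csv = io.StringIO(
    "date,headline,short_headline,short_text,article,link\n"
    "2023-04-26,Example headline,Context,Summary,Full article text,/inland/example.html\n"
)
df = pd.read_csv(sample_csv)
print(list(df.columns))
```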

Size:

The full scrape (2018 to today) contains 225,202 articles from 1,942 days. Of these, only
21,848 are unique (Tagesschau often keeps articles in circulation for about a month). The total download
size is ~65 MB.

Cleaning:

- Duplicate articles are removed
- Articles with empty text are removed
- Articles with empty short_texts are removed
- Articles, headlines and short_headlines are stripped of leading and trailing whitespace

More details in [`clean.py`](https://github.com/bjoernpl/tagesschau/blob/main/clean.py).
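The cleaning steps above can be sketched roughly as follows. This is a toy example on a hand-made frame, not the actual `clean.py` code, and the ordering (strip first, then filter and deduplicate) is one plausible choice:

```python
import pandas as pd

# Toy frame: rows 0 and 1 become duplicates after stripping,
# row 2 has an empty short_text, row 3 has an empty article.
df = pd.DataFrame({
    "headline": ["  A  ", "A", "B", "C"],
    "short_headline": [" x ", "x", "y", "z"],
    "short_text": ["s", "s", "", "t"],
    "article": ["body", "body", "more", ""],
})

# Strip leading/trailing whitespace from articles, headlines and short_headlines
for col in ["article", "headline", "short_headline"]:
    df[col] = df[col].str.strip()

# Drop articles with empty text or empty short_text, then deduplicate
df = df[df["article"] != ""]
df = df[df["short_text"] != ""]
df = df.drop_duplicates().reset_index(drop=True)

print(len(df))  # only one unique, non-empty row survives
```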