Datasets: Commit bdd8f0d • 0 parent(s)

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

Files changed:
- .gitattributes +27 -0
- README.md +160 -0
- dataset_infos.json +1 -0
- dummy/abstract/1.1.0/dummy_data.zip +3 -0
- dummy/title/1.1.0/dummy_data.zip +3 -0
- orange_sum.py +111 -0
.gitattributes
ADDED
@@ -0,0 +1,27 @@
```
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
```
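Each rule above routes matching files through Git LFS. As a rough sanity check, the patterns can be matched against the paths added in this commit with Python's `fnmatch`. This is illustrative only: git's own gitattributes matcher is not `fnmatch` (directory globs such as `saved_model/**/*` behave differently), and only a subset of the rules is used here.

```python
from fnmatch import fnmatch

# Subset of the LFS rules from the .gitattributes above
lfs_patterns = ["*.7z", "*.zip", "*.tgz", "*.h5", "*tfevents*"]

# Paths added in this commit
paths = [
    "dummy/abstract/1.1.0/dummy_data.zip",
    "dummy/title/1.1.0/dummy_data.zip",
    "orange_sum.py",
    "README.md",
]

def lfs_tracked(path):
    # gitattributes patterns without a "/" are matched against the basename
    name = path.rsplit("/", 1)[-1]
    return any(fnmatch(name, pat) for pat in lfs_patterns)

tracked = [p for p in paths if lfs_tracked(p)]
print(tracked)  # the two dummy_data.zip archives
```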
README.md
ADDED
@@ -0,0 +1,160 @@
---
annotations_creators:
- found
language_creators:
- found
languages:
- fr
licenses:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- conditional-text-generation
task_ids:
- summarization
---

# Dataset Card for OrangeSum

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Repository:** [OrangeSum repository](https://github.com/Tixierae/OrangeSum)
- **Paper:** [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321)
- **Point of Contact:** [Antoine J.-P. Tixier]([email protected])

### Dataset Summary

The OrangeSum dataset was inspired by the XSum dataset. It was created by scraping the "Orange Actu" website: https://actu.orange.fr/. Orange S.A. is a large French multinational telecommunications corporation with 266M customers worldwide. The scraped pages cover almost a decade, from Feb 2011 to Sep 2020, and belong to five main categories: France, world, politics, automotive, and society. The society category is itself divided into 8 subcategories: health, environment, people, culture, media, high-tech, unusual ("insolite" in French), and miscellaneous.

Each article featured a single-sentence title as well as a very brief abstract, both professionally written by the author of the article. These two fields were extracted from each page, creating two summarization tasks: OrangeSum Title and OrangeSum Abstract.

### Supported Tasks and Leaderboards

**Tasks:** OrangeSum Title and OrangeSum Abstract.

To date, there is no leaderboard for this dataset.

### Languages

The text in the dataset is in French.

## Dataset Structure

### Data Instances

A data instance consists of a news article and a summary. Depending on the configuration, the summary is either a short abstract or a title.

Example:

**Document:** Le temps sera pluvieux sur huit départements de la France ces prochaines heures : outre les trois départements bretons placés en vigilance orange jeudi matin, cinq autres départements du sud du Massif Central ont été à leur tour placés en alerte orange pluie et inondation. Il s'agit de l'Aveyron, du Cantal, du Gard, de la Lozère, et de la Haute-Loire. Sur l'ensemble de l'épisode, les cumuls de pluies attendus en Bretagne sont compris entre 40 et 60 mm en 24 heures et peuvent atteindre localement les 70 mm en 24 heures. Par la suite, la dégradation qui va se mettre en place cette nuit sur le Languedoc et le sud du Massif Central va donner sur l'Aveyron une première salve intense de pluie. Des cumuls entre 70 et 100 mm voir 120 mm localement sont attendus sur une durée de 24 heures. Sur le relief des Cévennes on attend de 150 à 200 mm, voire 250 mm très ponctuellement sur l'ouest du Gard et l'est de la Lozère. Cet épisode va s'estomper dans la soirée avec le décalage des orages vers les régions plus au nord. Un aspect orageux se mêlera à ces précipitations, avec de la grêle possible, des rafales de vent et une forte activité électrique.

**Abstract:** Outre les trois départements bretons, cinq autres départements du centre de la France ont été placés en vigilance orange pluie-inondation.

**Title:** Pluie-inondations : 8 départements en alerte orange.

### Data Fields

`text`: the document to be summarized. \
`summary`: the summary of the source document.
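Both configurations expose exactly these two string fields, so loading and inspecting either one with the 🤗 `datasets` library follows the standard pattern. This is a sketch; the first call downloads and caches the corresponding archive from GitHub.

```python
from datasets import load_dataset

# "abstract" or "title" selects which summarization task to load
dataset = load_dataset("orange_sum", "abstract")

example = dataset["train"][0]
print(example["text"][:120])  # beginning of the article
print(example["summary"])     # its professionally written abstract
```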
### Data Splits

The data is split into training, validation, and test sets in both configurations.

|          | Train | Valid | Test |
| -------- | ----- | ----- | ---- |
| Abstract | 21400 | 1500  | 1500 |
| Title    | 30658 | 1500  | 1500 |

## Dataset Creation

### Curation Rationale

The goal was to create a French equivalent of the recently introduced [XSum](https://github.com/EdinburghNLP/XSum/tree/master/XSum-Dataset) dataset. Unlike the historical summarization datasets CNN, DailyMail, and NY Times, which favor extractive strategies, XSum and OrangeSum require models to display a high degree of abstractivity to perform well. The summaries in OrangeSum are not catchy headlines; rather, they capture the gist of the articles.

### Source Data

#### Initial Data Collection and Normalization

Each article features a single-sentence title as well as a very brief abstract. Extracting these two fields from each news article page creates two summarization tasks: OrangeSum Title and OrangeSum Abstract. As a post-processing step, all empty articles and all articles whose summaries were shorter than 5 words were removed. For OrangeSum Abstract, the top 10% of articles by proportion of novel unigrams in the abstract were also removed, as such abstracts tend to be introductions rather than real abstracts; this corresponded to a threshold of 57% novel unigrams. For both OrangeSum Title and OrangeSum Abstract, 1500 pairs were set aside for testing and 1500 for validation, and all remaining pairs were used for training.
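The post-processing described above can be sketched in plain Python. This is an illustrative reimplementation, not the authors' actual script, and tokenization here is naive whitespace splitting:

```python
def novel_unigram_ratio(document, summary):
    """Fraction of summary unigrams that never appear in the document."""
    doc_vocab = set(document.lower().split())
    summ_tokens = summary.lower().split()
    if not summ_tokens:
        return 1.0
    novel = [t for t in summ_tokens if t not in doc_vocab]
    return len(novel) / len(summ_tokens)

def keep_pair(document, summary, threshold=0.57):
    # Drop empty articles, summaries under 5 words, and overly novel abstracts
    if not document.strip() or len(summary.split()) < 5:
        return False
    return novel_unigram_ratio(document, summary) <= threshold

print(keep_pair("the storm hit brittany on thursday morning",
                "a storm hit brittany early on thursday"))  # True
```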
#### Who are the source language producers?

The authors of the articles.

### Annotations

#### Annotation process

The summaries are professionally written by the authors of the articles.

#### Who are the annotators?

The authors of the articles.

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

The dataset was initially created by Antoine J.-P. Tixier.

### Licensing Information

[More Information Needed]

### Citation Information

```
@article{eddine2020barthez,
  title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model},
  author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis},
  journal={arXiv preprint arXiv:2010.12321},
  year={2020}
}
```
dataset_infos.json
ADDED
@@ -0,0 +1 @@
```json
{"abstract": {"description": "The OrangeSum dataset was inspired by the XSum dataset. It was created by scraping the \"Orange Actu\" website: https://actu.orange.fr/. Orange S.A. is a large French multinational telecommunications corporation, with 266M customers worldwide. Scraped pages cover almost a decade from Feb 2011 to Sep 2020. They belong to five main categories: France, world, politics, automotive, and society. The society category is itself divided into 8 subcategories: health, environment, people, culture, media, high-tech, unsual (\"insolite\" in French), and miscellaneous.\n\nEach article featured a single-sentence title as well as a very brief abstract, both professionally written by the author of the article. These two fields were extracted from each page, thus creating two summarization tasks: OrangeSum Title and OrangeSum Abstract.\n", "citation": "@article{eddine2020barthez,\n  title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model},\n  author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis},\n  journal={arXiv preprint arXiv:2010.12321},\n  year={2020}\n}\n", "homepage": "https://github.com/Tixierae/OrangeSum/", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "text", "output": "summary"}, "builder_name": "orange_sum", "config_name": "abstract", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 53531651, "num_examples": 21401, "dataset_name": "orange_sum"}, "test": {"name": "test", "num_bytes": 3785207, "num_examples": 1500, "dataset_name": "orange_sum"}, "validation": {"name": "validation", "num_bytes": 3698650, "num_examples": 1500, "dataset_name": "orange_sum"}}, "download_checksums": {"https://raw.githubusercontent.com/Tixierae/OrangeSum/main/data/docs/splits/abstract.tgz": {"num_bytes": 23058350, "checksum": "eaa4321b70bcf41c758d02fb5a94e50d73509a2be32adb1f9aacdcfd5796434b"}}, "download_size": 23058350, "post_processing_size": null, "dataset_size": 61015508, "size_in_bytes": 84073858}, "title": {"description": "The OrangeSum dataset was inspired by the XSum dataset. It was created by scraping the \"Orange Actu\" website: https://actu.orange.fr/. Orange S.A. is a large French multinational telecommunications corporation, with 266M customers worldwide. Scraped pages cover almost a decade from Feb 2011 to Sep 2020. They belong to five main categories: France, world, politics, automotive, and society. The society category is itself divided into 8 subcategories: health, environment, people, culture, media, high-tech, unsual (\"insolite\" in French), and miscellaneous.\n\nEach article featured a single-sentence title as well as a very brief abstract, both professionally written by the author of the article. These two fields were extracted from each page, thus creating two summarization tasks: OrangeSum Title and OrangeSum Abstract.\n", "citation": "@article{eddine2020barthez,\n  title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model},\n  author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis},\n  journal={arXiv preprint arXiv:2010.12321},\n  year={2020}\n}\n", "homepage": "https://github.com/Tixierae/OrangeSum/", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "text", "output": "summary"}, "builder_name": "orange_sum", "config_name": "title", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 65225136, "num_examples": 30659, "dataset_name": "orange_sum"}, "test": {"name": "test", "num_bytes": 3176690, "num_examples": 1500, "dataset_name": "orange_sum"}, "validation": {"name": "validation", "num_bytes": 3276713, "num_examples": 1500, "dataset_name": "orange_sum"}}, "download_checksums": {"https://raw.githubusercontent.com/Tixierae/OrangeSum/main/data/docs/splits/title.tgz": {"num_bytes": 27321627, "checksum": "5d15823f7e1158f16f5428fdfc8fa26509f98325c0793d6a8880a33af9822301"}}, "download_size": 27321627, "post_processing_size": null, "dataset_size": 71678539, "size_in_bytes": 99000166}}
```
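The infos file records, per configuration, the features, download checksums, and split sizes. A small stdlib-only sketch of pulling the example counts out of such a structure, using a trimmed stand-in for the real file (the actual JSON carries many more keys per config):

```python
import json

# Trimmed stand-in for dataset_infos.json, keeping only the split counts
infos_json = """
{"abstract": {"splits": {"train": {"num_examples": 21401},
                         "test": {"num_examples": 1500},
                         "validation": {"num_examples": 1500}}},
 "title": {"splits": {"train": {"num_examples": 30659},
                      "test": {"num_examples": 1500},
                      "validation": {"num_examples": 1500}}}}
"""

infos = json.loads(infos_json)
counts = {
    config: {split: meta["num_examples"] for split, meta in body["splits"].items()}
    for config, body in infos.items()
}
print(counts["abstract"]["train"])  # 21401
```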
dummy/abstract/1.1.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
```
version https://git-lfs.github.com/spec/v1
oid sha256:ec963c46427e787ac707d9839ad94814b559e0cfde1679aca26beb2c98e358bb
size 4864
```
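What git stores for each zip is a small Git LFS pointer like the one above, not the archive itself. A minimal parser for this `key value` pointer format (illustrative; the real LFS client validates the format much more strictly):

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a {key: value} dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:ec963c46427e787ac707d9839ad94814b559e0cfde1679aca26beb2c98e358bb
size 4864
"""

info = parse_lfs_pointer(pointer)
print(info["size"])                  # 4864
print(info["oid"].split(":", 1)[0])  # hash algorithm: sha256
```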
dummy/title/1.1.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
```
version https://git-lfs.github.com/spec/v1
oid sha256:09de6a9fcc218ddf64d31b2e53cfc368ed2ccbef0fabf93aba6d8ede62b69390
size 4822
```
orange_sum.py
ADDED
@@ -0,0 +1,111 @@
```python
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""OrangeSum dataset."""

from __future__ import absolute_import, division, print_function

import os

import datasets


_CITATION = """\
@article{eddine2020barthez,
  title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model},
  author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis},
  journal={arXiv preprint arXiv:2010.12321},
  year={2020}
}
"""

_DESCRIPTION = """\
The OrangeSum dataset was inspired by the XSum dataset. It was created by scraping the "Orange Actu" website: https://actu.orange.fr/. Orange S.A. is a large French multinational telecommunications corporation, with 266M customers worldwide. Scraped pages cover almost a decade from Feb 2011 to Sep 2020. They belong to five main categories: France, world, politics, automotive, and society. The society category is itself divided into 8 subcategories: health, environment, people, culture, media, high-tech, unsual ("insolite" in French), and miscellaneous.

Each article featured a single-sentence title as well as a very brief abstract, both professionally written by the author of the article. These two fields were extracted from each page, thus creating two summarization tasks: OrangeSum Title and OrangeSum Abstract.
"""

_URL_DATA = {
    "abstract": "https://raw.githubusercontent.com/Tixierae/OrangeSum/main/data/docs/splits/abstract.tgz",
    "title": "https://raw.githubusercontent.com/Tixierae/OrangeSum/main/data/docs/splits/title.tgz",
}

_DOCUMENT = "text"
_SUMMARY = "summary"


class OrangeSum(datasets.GeneratorBasedBuilder):
    """OrangeSum: a French abstractive summarization dataset."""

    VERSION = datasets.Version("1.1.0")

    BUILDER_CONFIGS = [
        datasets.BuilderConfig(name="abstract", description="Abstracts used as summaries", version=VERSION),
        datasets.BuilderConfig(name="title", description="Titles used as summaries", version=VERSION),
    ]

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    _DOCUMENT: datasets.Value("string"),
                    _SUMMARY: datasets.Value("string"),
                }
            ),
            supervised_keys=(_DOCUMENT, _SUMMARY),
            homepage="https://github.com/Tixierae/OrangeSum/",
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        data_dir = dl_manager.download_and_extract(_URL_DATA[self.config.name])

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={
                    "filepath": data_dir,
                    "split": "train",
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={
                    "filepath": data_dir,
                    "split": "test",
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={
                    "filepath": data_dir,
                    "split": "valid",
                },
            ),
        ]

    def _generate_examples(self, filepath, split):
        """Yields examples."""
        with open(
            os.path.join(filepath, self.config.name, "{}.source".format(split)), encoding="utf-8"
        ) as f_source, open(
            os.path.join(filepath, self.config.name, "{}.target".format(split)), encoding="utf-8"
        ) as f_target:
            for idx, (document, summary) in enumerate(zip(f_source, f_target)):
                yield idx, {_DOCUMENT: document, _SUMMARY: summary}
```
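`_generate_examples` pairs line-aligned `{split}.source` and `{split}.target` files: line i of one is the document for line i of the other. The core pairing logic can be exercised stand-alone with in-memory files (hypothetical two-line data; `io.StringIO` stands in for the open file handles, and unlike the script above this sketch strips trailing newlines for readability):

```python
import io

def generate_examples(f_source, f_target):
    # Mirrors the script's pairing: line i of .source goes with line i of .target
    for idx, (document, summary) in enumerate(zip(f_source, f_target)):
        yield idx, {"text": document.rstrip("\n"), "summary": summary.rstrip("\n")}

f_source = io.StringIO("first article body\nsecond article body\n")
f_target = io.StringIO("first summary\nsecond summary\n")

examples = list(generate_examples(f_source, f_target))
print(examples[0])  # (0, {'text': 'first article body', 'summary': 'first summary'})
```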