---
license: cc-by-sa-4.0
task_categories:
- summarization
- text-retrieval
- text-generation
- text2text-generation
language:
- af
- ar
- az
- bn
- cs
- de
- en
- es
- et
- fa
- fi
- fr
- ga
- gl
- gu
- he
- hi
- hr
- id
- it
- ja
- ka
- kk
- km
- ko
- lt
- lv
- mk
- ml
- mn
- mr
- my
- ne
- nl
- pl
- ps
- pt
- ro
- ru
- si
- sl
- sv
- ta
- th
- tr
- uk
- ur
- vi
- xh
- zh
pretty_name: MegaWika-Report-Generation
---
# Dataset Card for MegaWika for Report Generation
## Dataset Description
- **Homepage:** [HuggingFace](https://huggingface.co/datasets/hltcoe/megawika)
- **Repository:** [HuggingFace](https://huggingface.co/datasets/hltcoe/megawika)
- **Paper:** [link](https://arxiv.org/pdf/2307.07049.pdf)
- **Point of Contact:** [Samuel Barham](mailto:[email protected])
### Dataset Summary
MegaWika is a multi- and crosslingual text dataset containing 30 million Wikipedia passages with their scraped and cleaned web citations. The passages span
50 Wikipedias in 50 languages, and the articles in which the passages were originally embedded are included for convenience. Where a Wikipedia passage is in a
non-English language, an automated English translation is provided.
This dataset packages MegaWika for report generation, i.e. multi-document summarization with information retrieval.
### Dataset Creation
See the original [HuggingFace](https://huggingface.co/datasets/hltcoe/megawika) repo.
### Languages
MegaWika is divided by Wikipedia language. There are 50 languages, including English, each designated by their 2-character ISO language code:
- `af`: Afrikaans
- `ar`: Arabic
- `az`: Azeri (Azerbaijani)
- `bn`: Bengali
- `cs`: Czech
- `de`: German (Deutsch)
- `en`: English
- `es`: Spanish (Español)
- `et`: Estonian
- `fa`: Farsi (Persian)
- `fi`: Finnish
- `fr`: French
- `ga`: Irish (Gaelic)
- `gl`: Galician
- `gu`: Gujarati
- `he`: Hebrew
- `hi`: Hindi
- `hr`: Croatian
- `id`: Indonesian
- `it`: Italian
- `ja`: Japanese
- `ka`: Georgian (Kartvelian/Kartlian)
- `kk`: Kazakh
- `km`: Khmer
- `ko`: Korean
- `lt`: Lithuanian
- `lv`: Latvian
- `mk`: Macedonian (Makedonski)
- `ml`: Malayalam
- `mn`: Mongolian
- `mr`: Marathi
- `my`: Burmese (Myanmar language)
- `ne`: Nepali
- `nl`: Dutch (Nederlands)
- `pl`: Polish
- `ps`: Pashto
- `pt`: Portuguese
- `ro`: Romanian
- `ru`: Russian
- `si`: Sinhalese (Sri Lankan language)
- `sl`: Slovenian
- `sv`: Swedish (Svenska)
- `ta`: Tamil
- `th`: Thai
- `tr`: Turkish
- `uk`: Ukrainian
- `ur`: Urdu
- `vi`: Vietnamese
- `xh`: Xhosa
- `zh`: Chinese (Zhōng wén)
## Dataset Structure
The dataset is divided into two main sections: (1) generating entire Wikipedia sections from multiple citations ("all"), and (2) generating each section segment by segment in an iterative fashion ("iterative").
Each section is then subdivided by language. Note that any language can be used cross-lingually by targeting the `en_gold_section_text` field instead of the original-language text, as in the sketch below.
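A minimal loading sketch for both task formats and the cross-lingual setup. The `all/de` path follows the pattern shown in the Usage section below; the `iterative/de` path is an assumption by symmetry with `all`:
```
from datasets import load_dataset

# German "all" task: generate the full section from its citations.
all_de = load_dataset("hltcoe/megawika-report-generation", data_dir="all/de", split="test")

# German "iterative" task (directory name assumed by symmetry with "all").
iter_de = load_dataset("hltcoe/megawika-report-generation", data_dir="iterative/de", split="test")

# Cross-lingual setup: condition on the German fields but target the
# automatic English translation of the gold section.
example = all_de[0]
crosslingual_target = example["en_gold_section_text"]
```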
### Data Instances
Given the rest of the fields (all except the ID), the goal is to produce the `gold_section_text` (e.g. given the title, intro, section name, and citations).
`num_docs` is provided for filtering on the number of documents in the multi-document summarization setting. Note that in the iterative setting there is just one citation per instance. **NOTE: `num_docs` is currently incorrect and will be updated.**
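As a sketch, the conditioning fields can be assembled into a model input along these lines; the prompt template is purely illustrative, not one prescribed by the dataset or the paper:
```
def build_input(example):
    # Illustrative prompt template: condition on the title, intro,
    # section name, any previous text (iterative setting), and the
    # citation texts.
    citations = "\n\n".join(example["citations"])
    return (
        f"Title: {example['title']}\n"
        f"Intro: {example['intro']}\n"
        f"Section: {example['section_name']}\n"
        f"Previous text: {example['previous_text']}\n"
        f"Citations:\n{citations}"
    )

def build_target(example):
    # Use `en_gold_section_text` instead for the cross-lingual task.
    return example["gold_section_text"]
```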
### Data Fields
The detailed structure of an instance is as follows:
```
{
"id": <string : a unique id for the instance>
"num_docs": <int : the number of citations for this instance>
"title": <string : title of original Wikipedia article>
"intro": <string : text of the Wikipedia article's introduction>
"section_name": <string : the name of the section to generate>
"previous_text": <string : used for the iterative task format, the previous text in the section already to condition on>
"question": <string : a natural language question that could be used for query-focused summarization, generated by ChatGPT>
"gold_section_text": <string : the text of the original Wikipedia section, e.g. the gold label for summarization>
"en_gold_section_text": <string : the English version of the text from the original Wikipedia section, e.g. the gold label for cross-lingual summarization>
"citations": <list of strings : the text of the citations (e.g. reference) for the section/chunk >
}
```
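Until `num_docs` is fixed, a usable citation-count filter can be computed directly from the `citations` list; a minimal sketch:
```
from datasets import load_dataset

dataset = load_dataset("hltcoe/megawika-report-generation", data_dir="all/en", split="test")

# Work around the currently incorrect `num_docs` field by counting
# the citations directly.
few_docs = dataset.filter(lambda ex: len(ex["citations"]) <= 5)
```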
## Licensing and Takedown
MegaWika 1.0 consists in part of documents scraped from across the web (based on citations linked in Wikipedia articles).
We do not own any of the scraped text, nor do we claim copyright: text drawn from Wikipedia citations is meant for research use in algorithmic design and model training.
We release this dataset and all its contents under CC-BY-SA-4.0.
### Notice and Takedown Policy
*NB*: Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
- Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
- Clearly identify the copyrighted work claimed to be infringed.
- Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
Then contact the authors.
*Take down*: We will comply with legitimate requests by removing the affected sources from the next release of the dataset.
## Usage
```
from datasets import load_dataset

# all of the dataset (not recommended)
dataset = load_dataset("hltcoe/megawika-report-generation")

# just the `all` section data (all splits)
dataset = load_dataset("hltcoe/megawika-report-generation", data_dir="all")

# just the `all` English test set (can replace with "validation" or "train", or other langs)
dataset = load_dataset("hltcoe/megawika-report-generation", data_dir="all/en", split="test")
```
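For large language subsets, streaming avoids downloading the full dump up front. A sketch, assuming `datasets` streaming works with these `data_dir`-based configurations:
```
from datasets import load_dataset

# Stream the English test set instead of downloading it in full.
stream = load_dataset(
    "hltcoe/megawika-report-generation",
    data_dir="all/en",
    split="test",
    streaming=True,
)
for example in stream:
    print(example["section_name"], len(example["citations"]))
    break
```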
### Dataset Curators
Released and maintained by the Johns Hopkins University Human Language Technology Center of Excellence (JHU/HLTCOE).
You can contact one of the MegaWika authors, including [Samuel Barham](mailto:[email protected]), [Orion Weller](mailto:[email protected]),
and [Ben van Durme](mailto:[email protected]) with questions.
### Licensing Information
Released under the [Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/) license.
### Citation Information
```
@misc{barham2023megawika,
title={MegaWika: Millions of reports and their sources across 50 diverse languages},
      author={Samuel Barham and Orion Weller and Michelle Yuan and Kenton Murray and Mahsa Yarmohammadi and Zhengping Jiang and Siddharth Vashishtha and Alexander Martin and Anqi Liu and Aaron Steven White and Jordan Boyd-Graber and Benjamin Van Durme},
year={2023},
eprint={2307.07049},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |