---
dataset_info:
  features:
  - name: context
    dtype: string
  - name: questions
    dtype: string
  splits:
  - name: train
    num_bytes: 20771339
    num_examples: 18896
  - name: validation
    num_bytes: 2430525
    num_examples: 2067
  download_size: 12661411
  dataset_size: 23201864
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- crowdsourced
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Question Generation for T5 based on SQuAD V1.1
size_categories:
- 10K<n<100K
source_datasets:
- extended|squad
tags:
- questiongeneration
- question-generation
- text2text-generation
task_categories:
- text2text-generation
task_ids: []
---
# Dataset Card for "squad-v1.1-t5-question-generation"
## Dataset Description
- **Homepage:** [https://rajpurkar.github.io/SQuAD-explorer/](https://rajpurkar.github.io/SQuAD-explorer/)
- **Paper:** [SQuAD: 100,000+ Questions for Machine Comprehension of Text](https://arxiv.org/abs/1606.05250)
### Dataset Summary
This is a modified version of the Stanford Question Answering Dataset (SQuAD) adapted for question generation with All Questions in One Line (AQOL), as in [Transformer-based End-to-End Question Generation](https://arxiv.org/pdf/2005.01107v1.pdf), targeted specifically at the T5 family of models. Each context is prefixed with `generate questions: ` so that the task stays distinct for a trained model.
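For illustration, the snippet below loads the data and prepares it for a T5-style sequence-to-sequence model. This is a minimal sketch: the Hub repository id `derek-thomas/squad-v1.1-t5-question-generation` and the `t5-small` checkpoint are assumptions, not specified by this card.
```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Hypothetical repository id; replace with the actual Hub path of this dataset.
dataset = load_dataset("derek-thomas/squad-v1.1-t5-question-generation")
tokenizer = AutoTokenizer.from_pretrained("t5-small")  # any T5 checkpoint works

def preprocess(batch):
    # The context column already carries the "generate questions: " prefix,
    # and the questions column is the text-to-text target.
    model_inputs = tokenizer(batch["context"], max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["questions"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True)
```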
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The dataset is in English (`en`), inherited from the source SQuAD v1.1 data.
## Dataset Structure
### Data Instances
#### plain_text
An example from the `train` split looks as follows.
```
{
"context": "generate questions: This is a test context.",
"question": "Is this a test? {sep_token} Is this another Test {sep_token}"
}
```
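At inference or evaluation time, the packed string can be split back into individual questions at the separator. A small sketch, assuming `{sep_token}` has been replaced by a concrete separator string such as `<sep>` (an assumption; the actual token depends on how the data was prepared):
```python
SEP_TOKEN = "<sep>"  # assumed separator; substitute whatever stands in for {sep_token}

def unpack_questions(packed: str, sep: str = SEP_TOKEN) -> list[str]:
    # Split on the separator and drop empty fragments left by a trailing separator.
    return [q.strip() for q in packed.split(sep) if q.strip()]

print(unpack_questions("Is this a test? <sep> Is this another test? <sep>"))
# ['Is this a test?', 'Is this another test?']
```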
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `context`: a `string` feature.
- `questions`: a `string` feature.
### Data Splits
| name |train|validation|
|----------|----:|---------:|
|plain_text|18896| 2067|
### Citation Information
```
@article{2016arXiv160605250R,
  author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev}, Konstantin and {Liang}, Percy},
  title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
  journal = {arXiv e-prints},
  year = 2016,
  eid = {arXiv:1606.05250},
  pages = {arXiv:1606.05250},
  archivePrefix = {arXiv},
  eprint = {1606.05250},
}
```
### Contributions
Thanks to [Derek Thomas](https://huggingface.co/derek-thomas) and [Thomas Simonini](https://huggingface.co/ThomasSimonini) for adding this dataset to the Hub.
Check out: [How to contribute more](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)