---
license: bsd-2-clause
task_categories:
- text-classification
language:
- fr
size_categories:
- 1K<n<10K
---
# Dataset Card for the French MultiNLI 9/11 Subset
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This repository contains a machine-translated French version of the portion of [MultiNLI](https://cims.nyu.edu/~sbowman/multinli) concerning the 9/11 terrorist attacks (2,000 examples).
Note that these 2,000 MultiNLI examples on the subject of 9/11 (machine-translated into French here) are distinct from the 249 validation examples and the 501 test examples that XNLI contains on the same subject.
In the original MultiNLI subset on 9/11, 26 examples were left without a gold label. In this French version, we have assigned a gold label to these examples as well, based on our own reading of them, so that no example is left without a gold label.
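The data can be loaded with the 🤗 `datasets` library. A minimal sketch, assuming a hypothetical Hub repository id and split name (replace both with the actual values for this repository):

```python
from datasets import load_dataset

# Hypothetical repository id and split name, for illustration only.
ds = load_dataset("user/multinli-fr-9-11", split="train")

example = ds[0]
print(example["premise"])     # machine-translated French premise
print(example["hypothesis"])  # machine-translated French hypothesis
print(example["label"], example["label_text"])
```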
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `premise`: The machine-translated premise in the target language (French).
- `hypothesis`: The machine-translated hypothesis in the target language (French).
- `label`: The classification label, with possible values 0 (`entailment`), 1 (`neutral`), 2 (`contradiction`).
- `label_text`: The classification label as text, with possible values `entailment` (0), `neutral` (1), `contradiction` (2); see the mapping sketch just after this list.
- `pairID`: Unique identifier for pair.
- `promptID`: Unique identifier for prompt.
- `premise_original`: The original premise from the English source dataset.
- `hypothesis_original`: The original hypothesis from the English source dataset.
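As referenced above, a minimal sketch of the integer/text label mapping implied by the `label` and `label_text` fields:

```python
# Mapping between the integer `label` and the textual `label_text`,
# as documented in the Data Fields section above.
LABEL_TEXTS = {0: "entailment", 1: "neutral", 2: "contradiction"}
TEXT_LABELS = {text: i for i, text in LABEL_TEXTS.items()}

assert LABEL_TEXTS[2] == "contradiction"
assert TEXT_LABELS["neutral"] == 1
```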
### Data Splits
| name |entailment|neutral|contradiction|
|--------|---------:|------:|------------:|
|mnli_fr | 705 | 641 | 654 |
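The counts above (2,000 examples in total) can be double-checked with a quick tally, reusing the hypothetical repository id and split name from the loading sketch:

```python
from collections import Counter

from datasets import load_dataset

# Hypothetical repository id and split name, as in the loading sketch above.
ds = load_dataset("user/multinli-fr-9-11", split="train")

# The table above implies:
# Counter({'entailment': 705, 'contradiction': 654, 'neutral': 641})
print(Counter(ds["label_text"]))
```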
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@InProceedings{N18-1101,
author = "Williams, Adina
and Nangia, Nikita
and Bowman, Samuel",
title = "A Broad-Coverage Challenge Corpus for
Sentence Understanding through Inference",
booktitle = "Proceedings of the 2018 Conference of
the North American Chapter of the
Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long
Papers)",
year = "2018",
publisher = "Association for Computational Linguistics",
pages = "1112--1122",
location = "New Orleans, Louisiana",
url = "http://aclweb.org/anthology/N18-1101"
}
```
### Contributions
[More Information Needed]