|
--- |
|
license: bsd-2-clause |
|
task_categories: |
|
- text-classification |
|
task_ids: |
|
- natural-language-inference |
|
- multi-input-text-classification |
|
language: |
|
- fr |
|
size_categories: |
|
- 1K<n<10K |
|
--- |
|
# Dataset Card for French Machine-Translated MultiNLI 9/11 Subset
|
|
|
## Dataset Description |
|
|
|
- **Homepage:** |
|
- **Repository:** |
|
- **Paper:** |
|
- **Leaderboard:** |
|
- **Point of Contact:** |
|
|
|
### Dataset Summary |
|
|
|
This repository contains a machine-translated French version of the portion of [MultiNLI](https://cims.nyu.edu/~sbowman/multinli) concerning the 9/11 terrorist attacks (2,000 examples).

Note that these 2,000 examples on the subject of 9/11, included in MultiNLI and machine translated into French here, are distinct from the 249 validation and 501 test examples of XNLI on the same subject.
|
|
|
In the original MultiNLI subset on 9/11, 26 examples were left without a gold label. In this French version, we have assigned a gold label to these examples as well, based on our own reading of them, so that no example remains unlabeled.
|
|
|
### Supported Tasks and Leaderboards |
|
|
|
This dataset can be used for the task of Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), which is a sentence-pair classification task. |
|
|
|
## Dataset Structure |
|
|
|
### Data Fields |
|
|
|
- `premise`: The machine translated premise in the target language. |
|
- `hypothesis`: The machine translated hypothesis in the target language.
|
- `label`: The classification label, with possible values 0 (`entailment`), 1 (`neutral`), 2 (`contradiction`). |
|
- `label_text`: The classification label, with possible values `entailment` (0), `neutral` (1), `contradiction` (2). |
|
- `pairID`: Unique identifier for the sentence pair.

- `promptID`: Unique identifier for the prompt.
|
- `premise_original`: The original premise from the English source dataset. |
|
- `hypothesis_original`: The original hypothesis from the English source dataset. |
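
For orientation, the fields above can be sketched as a single record. This is a minimal illustration of the schema only; the field values below are placeholders, not actual rows from the dataset.

```python
# Mapping between the integer `label` field and the `label_text` field,
# as described in the field list above.
LABEL_NAMES = ["entailment", "neutral", "contradiction"]

def label_to_text(label: int) -> str:
    """Return the `label_text` value for an integer `label`."""
    return LABEL_NAMES[label]

# Hypothetical record matching the documented schema (placeholder values).
example = {
    "premise": "Exemple de prémisse traduite.",            # placeholder
    "hypothesis": "Exemple d'hypothèse traduite.",         # placeholder
    "label": 0,
    "label_text": label_to_text(0),                        # "entailment"
    "pairID": "0001e",                                     # placeholder ID
    "promptID": "0001",                                    # placeholder ID
    "premise_original": "Example original premise.",       # placeholder
    "hypothesis_original": "Example original hypothesis.", # placeholder
}
```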
|
|
|
### Data Splits |
|
|
|
| name |entailment|neutral|contradiction| |
|
|--------|---------:|------:|------------:| |
|
|mnli_fr | 705 | 641 | 654 | |
|
|
|
## Dataset Creation |
|
|
|
The dataset was machine translated from English into French using [opus-mt-tc-big](https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-en-fr), the latest neural machine translation model available for French at the time.

The sentences were translated on March 29th, 2023.
|
|
|
## Additional Information |
|
|
|
### Citation Information |
|
|
|
**BibTeX:** |
|
|
|
```bibtex
@InProceedings{N18-1101,
  author    = "Williams, Adina
               and Nangia, Nikita
               and Bowman, Samuel",
  title     = "A Broad-Coverage Challenge Corpus for
               Sentence Understanding through Inference",
  booktitle = "Proceedings of the 2018 Conference of
               the North American Chapter of the
               Association for Computational Linguistics:
               Human Language Technologies, Volume 1 (Long
               Papers)",
  year      = "2018",
  publisher = "Association for Computational Linguistics",
  pages     = "1112--1122",
  location  = "New Orleans, Louisiana",
  url       = "http://aclweb.org/anthology/N18-1101"
}
```
|
|
|
**ACL:** |
|
|
|
Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. [A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference](https://aclanthology.org/N18-1101/). In *Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)*, pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics. |
|
|
|
### Acknowledgements |
|
|
|
This translation of the original dataset was done as part of a research project supported by the Defence Innovation Agency (AID) of the Directorate General of Armament (DGA) of the French Ministry of Armed Forces, and by the ICO, _Institut Cybersécurité Occitanie_, funded by Région Occitanie, France. |