Dataset Card for Wikitext-fr

Dataset Summary

The Wikitext-fr language modeling dataset consists of over 70 million tokens extracted from the set of French Wikipedia articles classified as "quality articles" or "good articles". It is designed to mirror the English benchmark of Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher, 2016, Pointer Sentinel Mixture Models. The dataset is available under the Creative Commons Attribution-ShareAlike License.

Supported Tasks and Leaderboards

  • language-modeling: The dataset can be used to evaluate the generation abilities of a model. Success on this task is typically measured by achieving a low perplexity. The (model name) currently achieves a perplexity of 12.9. A hedged evaluation sketch is shown below.
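
The following is a minimal sketch of such an evaluation, not the protocol behind the score above. It assumes the datasets and transformers libraries; the checkpoint name, the configuration name ("wikitext_72", see the splits table below), and the sub-sampling are illustrative assumptions.

import math
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint: any French causal language model can be used here.
model_name = "asi/gpt-fr-cased-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# The configuration name is an assumption; trust_remote_code=True is needed
# because the repo ships a loading script.
test_set = load_dataset("asi/wikitext_fr", "wikitext_72",
                        split="test", trust_remote_code=True)

nll, n_tokens = 0.0, 0
with torch.no_grad():
    for record in test_set.select(range(min(100, len(test_set)))):  # subsample for speed
        enc = tokenizer(record["paragraph"], return_tensors="pt",
                        truncation=True, max_length=512)
        n = enc.input_ids.size(1) - 1  # number of next-token predictions
        if n < 1:
            continue
        loss = model(**enc, labels=enc.input_ids).loss
        nll += loss.item() * n
        n_tokens += n

print(f"perplexity: {math.exp(nll / n_tokens):.1f}")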

Languages

The dataset is in French.

Dataset Structure

Data Instances

The dataset consists of the aggregation of paragraphs from Wikipedia articles. An instance has the following form:

{
  'paragraph': ...,
  ...
}
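
A minimal loading sketch is shown below. The configuration name ("wikitext_72") is an assumption based on the splits table further down, and trust_remote_code=True is required by recent versions of the datasets library because the repo uses a loading script.

from datasets import load_dataset

ds = load_dataset("asi/wikitext_fr", "wikitext_72", trust_remote_code=True)
print(ds)                           # DatasetDict listing the available splits
print(ds["train"][0]["paragraph"])  # each record holds one paragraph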

Data Fields

  • paragraph: a paragraph from the original Wikipedia article.

Data Splits

The dataset is split into train, validation, and test sets, with two training variants of roughly 35 and 72 million tokens.

|                              | Train (35) | Train (72) | Valid | Test |
|------------------------------|------------|------------|-------|------|
| Number of documents          | 2 126      | 5 902      | 60    | 60   |
| Number of tokens (thousands) | 35 166     | 72 961     | 896   | 897  |
| Vocabulary size              | 137 589    | 205 403    |       |      |
| Out of vocabulary            | 0.8%       | 1.2%       |       |      |
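
As a rough sanity check, the sketch below recomputes split statistics with naive whitespace tokenization. This tokenization is an assumption; the counts above likely come from a different tokenizer, so the numbers will not match exactly.

from collections import Counter
from datasets import load_dataset

ds = load_dataset("asi/wikitext_fr", "wikitext_72", trust_remote_code=True)
for name, split in ds.items():
    vocab = Counter()
    for record in split:
        vocab.update(record["paragraph"].split())
    print(f"{name}: {sum(vocab.values())} tokens, {len(vocab)} vocabulary items")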

Dataset Creation

Curation Rationale

The dataset was created to evaluate French models with criteria similar to those used for English.

Source Data

The Wikitext-fr language modeling dataset consists of over 70 million tokens extracted from the set of French Wikipedia articles classified as "quality articles" or "good articles". We did not apply any specific pre-processing, since Transformer models typically rely on their own dedicated tokenization.

Initial Data Collection and Normalization

We used the Wikipedia API to collect the articles, since cleaning Wikipedia articles extracted from dumps is not a trivial task.
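
The sketch below illustrates this approach; it is not the authors' collection code. It fetches a plain-text extract for a single article through the public MediaWiki API, and the example title is hypothetical.

import requests

API_URL = "https://fr.wikipedia.org/w/api.php"
params = {
    "action": "query",
    "prop": "extracts",       # the TextExtracts extension returns cleaned article text
    "explaintext": 1,         # plain text rather than HTML
    "format": "json",
    "titles": "Victor Hugo",  # hypothetical example title, not drawn from the dataset
}
pages = requests.get(API_URL, params=params, timeout=30).json()["query"]["pages"]
for page in pages.values():
    print(page["extract"][:300])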

Personal and Sensitive Information

Considerations for Using the Data

Social Impact of Dataset

Discussion of Biases

Other Known Limitations

Additional Information

Dataset Curators

Licensing Information

The dataset is available under the Creative Commons Attribution-ShareAlike License.

Citation Information

@inproceedings{simoulin:hal-03265900,
  TITLE = {{Un mod{\`e}le Transformer G{\'e}n{\'e}ratif Pr{\'e}-entrain{\'e} pour le \_\_\_\_\_\_ fran{\c c}ais}},
  AUTHOR = {Simoulin, Antoine and Crabb{\'e}, Benoit},
  URL = {https://hal.archives-ouvertes.fr/hal-03265900},
  BOOKTITLE = {{Traitement Automatique des Langues Naturelles}},
  ADDRESS = {Lille, France},
  EDITOR = {Denis, Pascal and Grabar, Natalia and Fraisse, Amel and Cardon, R{\'e}mi and Jacquemin, Bernard and Kergosien, Eric and Balvet, Antonio},
  PUBLISHER = {{ATALA}},
  PAGES = {246-255},
  YEAR = {2021},
  KEYWORDS = {fran{\c c}ais ; GPT ; G{\'e}n{\'e}ratif ; Transformer ; Pr{\'e}-entra{\^i}n{\'e}},
  PDF = {https://hal.archives-ouvertes.fr/hal-03265900/file/7.pdf},
  HAL_ID = {hal-03265900},
  HAL_VERSION = {v1},
}

Contributions

Thanks to @AntoineSimoulin for adding this dataset.
