---
license: cc-by-nc-4.0
task_categories:
  - text-generation
language:
  - en
  - de
  - ar
  - ja
  - ko
  - es
  - zh
pretty_name: medit
size_categories:
  - 10K<n<100K
tags:
  - gec
  - simplification
  - paraphrasing
  - es
  - de
  - ar
  - en
  - ja
  - ko
  - zh
  - multilingual
---

# Dataset Card for mEdIT: Multilingual Text Editing via Instruction Tuning

**Paper:** [mEdIT: Multilingual Text Editing via Instruction Tuning](https://arxiv.org/abs/2402.16472)

**Authors:** Vipul Raheja, Dimitris Alikaniotis, Vivek Kulkarni, Bashar Alhafni, Dhruv Kumar

**Project Repo:** https://github.com/vipulraheja/medit

## Dataset Summary

This is the dataset that was used to train the mEdIT text editing models. Full details of the dataset can be found in our paper.

## Dataset Structure

The dataset is in JSON format.

### Data Instances

```json
{
  "instance": 999999,
  "task": "gec",
  "language": "english",
  "lang": "en",
  "dataset": "lang8.bea19",
  "src": "Luckily there was no damage for the earthquake .",
  "refs": ["Luckily there was no damage from the earthquake ."],
  "tgt": "Luckily there was no damage from the earthquake .",
  "prompt": "この文の文法上の誤りを修正してください: Luckily there was no damage for the earthquake ."
}
```
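
For illustration, the JSON instances can be read with the Hugging Face `datasets` library. This is a minimal sketch, not an official usage snippet; the file name `medit.json` is a placeholder for whichever JSON file(s) ship with this dataset.

```python
from datasets import load_dataset

# Load the JSON instances; "medit.json" is a placeholder path --
# substitute the actual data file(s) distributed with this dataset.
dataset = load_dataset("json", data_files="medit.json", split="train")

# Each example carries the fields described below (task, language, src, refs, tgt, prompt, ...).
example = dataset[0]
print(example["task"], example["lang"])
print(example["src"], "->", example["tgt"])
```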

Note that for the mEdIT models, the prompt was formatted as follows (shown here for a Japanese-language instruction applied to English text; the instruction translates to "Please correct the grammatical errors in this sentence"):

```
### 命令:\nこの文の文法上の誤りを修正してください\n### 入力:\nLuckily there was no damage for the earthquake .\n### 出力:\n\n
```

Details about the added keywords ("Instruction", "Input", "Output") can be found in the Appendix of the paper or on the mEdIT model cards.
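
As an illustration of the template above, a prompt can be assembled from the instruction and input text as sketched below. This is not the official training code; the default keyword strings are taken from the Japanese example (命令/入力/出力 correspond to Instruction/Input/Output), and other prompt languages use their own translations of these keywords.

```python
def build_prompt(instruction: str, src: str,
                 kw_instruction: str = "命令",
                 kw_input: str = "入力",
                 kw_output: str = "出力") -> str:
    """Assemble a mEdIT-style prompt from an instruction and an input sentence.

    The keyword defaults follow the Japanese example above; substitute the
    corresponding keywords for other prompt languages.
    """
    return (
        f"### {kw_instruction}:\n{instruction}\n"
        f"### {kw_input}:\n{src}\n"
        f"### {kw_output}:\n\n"
    )

prompt = build_prompt(
    "この文の文法上の誤りを修正してください",
    "Luckily there was no damage for the earthquake .",
)
print(prompt)
```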

### Data Fields

- `instance`: Instance ID
- `language`: Language of the input and edited text
- `lang`: Language code in ISO 639-1
- `dataset`: Source dataset of the current example
- `task`: Text editing task for this instance (e.g., `gec`, `simplification`, `paraphrasing`)
- `src`: Input text
- `refs`: Reference (edited) texts
- `tgt`: Output text
- `prompt`: Full prompt (instruction + input) used for training the models
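
As a sketch of how these fields might be used, the example below filters instances by task and language and prints their prompts and references. It assumes a `dataset` object loaded as in the earlier snippet.

```python
# Assumes `dataset` was loaded as in the earlier sketch.
gec_german = dataset.filter(lambda ex: ex["task"] == "gec" and ex["lang"] == "de")

# Inspect a few prompts, references, and targets.
for ex in gec_german.select(range(min(3, len(gec_german)))):
    print(ex["prompt"])
    print("reference(s):", ex["refs"])
    print("target:", ex["tgt"])
```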

## Considerations for Using the Data

Please note that this dataset contains 102k instances (as opposed to the 190k instances used in the paper). This is because the public release includes only the instances that were acquired and curated from publicly available datasets.

The following are the details of the subsets (including the ones we are unable to release publicly):

**Grammatical Error Correction:**

**Simplification:**

**Paraphrasing:**
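
Since the public release differs from the training set used in the paper, it may be useful to inspect the per-task and per-language counts directly. A minimal sketch, again assuming the `dataset` object from the loading example above:

```python
from collections import Counter

# Tally the public-release instances by task and by language to see
# how they are distributed across the subsets listed above.
task_counts = Counter(dataset["task"])
lang_counts = Counter(dataset["lang"])

print("per task:", dict(task_counts))
print("per language:", dict(lang_counts))
```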

## Citation

```bibtex
@misc{raheja2024medit,
      title={mEdIT: Multilingual Text Editing via Instruction Tuning},
      author={Vipul Raheja and Dimitris Alikaniotis and Vivek Kulkarni and Bashar Alhafni and Dhruv Kumar},
      year={2024},
      eprint={2402.16472},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```