Model Overview
This is a RoBERTa-Large QA model, trained from https://huggingface.co/roberta-large in two stages. First, it is trained on synthetic adversarial data generated using a BART-Large question generator on Wikipedia passages from SQuAD; in a second stage of fine-tuning, it is then trained on SQuAD and AdversarialQA (https://arxiv.org/abs/2002.00293).
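To use the model for extractive QA, a minimal sketch with the Transformers question-answering pipeline follows; the example context and question are purely illustrative.

```python
from transformers import pipeline

# Load the model from the Hugging Face Hub as an extractive QA pipeline.
qa = pipeline("question-answering", model="mbartolo/roberta-large-synqa")

context = (
    "The Matrix is a 1999 science fiction film written and directed by "
    "the Wachowskis, starring Keanu Reeves and Laurence Fishburne."
)
result = qa(question="Who directed The Matrix?", context=context)
print(result["answer"], result["score"])  # predicted span and its confidence
```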
Data
Training data: SQuAD + AdversarialQA
Evaluation data: SQuAD + AdversarialQA
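Both datasets are available through the Hugging Face datasets library. A minimal loading sketch is shown below; the adversarialQA config name (the combined AdversarialQA data) is our assumption about the Hub dataset layout.

```python
from datasets import load_dataset

# SQuAD v1.1 and the combined AdversarialQA dataset from the Hub.
squad = load_dataset("squad")
adversarial_qa = load_dataset("adversarial_qa", "adversarialQA")

print(squad)           # train/validation splits
print(adversarial_qa)  # train/validation/test splits
```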
Training Process
The model was trained for approximately one epoch on the synthetic data, followed by two epochs on the manually curated data.
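For illustration, a rough sketch of the second training stage using the standard Transformers Trainer is shown below. The preprocessing and hyperparameters (maximum sequence length, batch size, learning rate) are assumptions for the sketch, not the exact recipe from the paper, and the stage-1 synthetic dataset is not loaded since no public identifier for it is given here.

```python
from datasets import concatenate_datasets, load_dataset
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    DefaultDataCollator,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
# In the full two-stage setup, stage 2 would start from the stage-1
# checkpoint rather than from raw roberta-large as done here.
model = AutoModelForQuestionAnswering.from_pretrained("roberta-large")

def preprocess(examples):
    # Tokenize question/context pairs and convert character-level answer
    # spans into token-level start/end positions.
    tokenized = tokenizer(
        examples["question"],
        examples["context"],
        truncation="only_second",
        max_length=384,          # assumed; not stated in the card
        padding="max_length",
        return_offsets_mapping=True,
    )
    starts, ends = [], []
    for i, offsets in enumerate(tokenized["offset_mapping"]):
        answer = examples["answers"][i]
        start_char = answer["answer_start"][0]
        end_char = start_char + len(answer["text"][0])
        seq_ids = tokenized.sequence_ids(i)
        ctx_start = seq_ids.index(1)
        ctx_end = len(seq_ids) - 1 - seq_ids[::-1].index(1)
        if offsets[ctx_start][0] > start_char or offsets[ctx_end][1] < end_char:
            # Answer was truncated away; point both labels at the start token.
            starts.append(0)
            ends.append(0)
        else:
            s = ctx_start
            while s <= ctx_end and offsets[s][0] <= start_char:
                s += 1
            starts.append(s - 1)
            e = ctx_end
            while e >= ctx_start and offsets[e][1] >= end_char:
                e -= 1
            ends.append(e + 1)
    tokenized["start_positions"] = starts
    tokenized["end_positions"] = ends
    tokenized.pop("offset_mapping")
    return tokenized

squad = load_dataset("squad", split="train")
aqa = load_dataset("adversarial_qa", "adversarialQA", split="train")
train_data = concatenate_datasets([
    squad.map(preprocess, batched=True, remove_columns=squad.column_names),
    aqa.map(preprocess, batched=True, remove_columns=aqa.column_names),
])

# Stage 2: ~2 epochs on SQuAD + AdversarialQA. Stage 1 (~1 epoch on the
# synthetic adversarial data) would follow the same recipe with the
# synthetic dataset in place of train_data.
args = TrainingArguments(
    output_dir="roberta-large-synqa-stage2",
    num_train_epochs=2,
    per_device_train_batch_size=8,   # assumed hyperparameters
    learning_rate=3e-5,
)
Trainer(
    model=model,
    args=args,
    train_dataset=train_data,
    data_collator=DefaultDataCollator(),
).train()
```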
Additional Information
Please refer to "Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation" (https://arxiv.org/abs/2104.08678) for full details.
Evaluation results
All metrics are self-reported.
- SQuAD validation set: Exact Match 89.653, F1 94.817
- AdversarialQA validation set: Exact Match 55.333, F1 66.746