# XLM-RoBERTa large model whole word masking finetuned on SQuAD
Pretrained with a masked language modeling (MLM) objective and fine-tuned on English and Russian QA datasets.
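Since this is an extractive question-answering model, it can be driven through the Hugging Face `transformers` question-answering pipeline. A minimal sketch, assuming a placeholder repository id (substitute this model's actual Hub id):

```python
from transformers import pipeline

# Placeholder repository id; replace with this model's actual Hub id.
qa = pipeline("question-answering", model="username/xlm-roberta-large-qa")

# English example (SQuAD-style)
print(qa(question="Where is the Eiffel Tower located?",
         context="The Eiffel Tower is located in Paris, France."))

# Russian example (SberQuAD-style)
print(qa(question="Где находится Эйфелева башня?",
         context="Эйфелева башня находится в Париже, во Франции."))
```

Each call returns a dict with the extracted `answer` span, its character `start`/`end` offsets in the context, and a confidence `score`.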
## Used QA Datasets
SQuAD + SberQuAD
The original SberQuAD paper describes the dataset in detail and is recommended reading.
## Evaluation results
The results obtained on SberQuAD are as follows:
- f1 = 84.3
- exact_match = 65.3
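These are standard SQuAD-style metrics. A minimal sketch of how such scores can be computed with the Hugging Face `evaluate` library (an assumption; the card does not state which tooling produced the numbers above):

```python
import evaluate

# The "squad" metric computes exact_match and token-level f1.
squad_metric = evaluate.load("squad")

# One toy prediction/reference pair in the SQuAD format.
predictions = [{"id": "0", "prediction_text": "в Париже"}]
references = [{"id": "0",
               "answers": {"text": ["в Париже"], "answer_start": [24]}}]

print(squad_metric.compute(predictions=predictions, references=references))
# e.g. {'exact_match': 100.0, 'f1': 100.0}
```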