---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: xlm-roberta-base-finetuned-squad2
results: []
language:
- en
- ar
- de
- el
- es
- hi
- ro
- ru
- th
- tr
- vi
- zh
metrics:
- exact_match
- f1
pipeline_tag: question-answering
---
## Model description
XLM-RoBERTa is a multilingual version of RoBERTa developed by Facebook AI, pre-trained on 2.5 TB of filtered CommonCrawl data covering 100 languages.
Like RoBERTa, it builds on the BERT architecture. XLM-RoBERTa is designed to handle many languages and performs strongly across a wide range of tasks, making it well suited to multilingual natural language processing (NLP) applications.
- **Language model:** xlm-roberta-base
- **Language:** English (fine-tuning data); the base model supports multilingual inference
- **Downstream task:** Question Answering
- **Training data:** SQuAD 2.0 train set
- **Evaluation data:** SQuAD 2.0 validation set
- **Hardware accelerator:** NVIDIA Tesla T4 GPU
## Intended uses & limitations
Multilingual Question-Answering
For Question-Answering in English:
```python
# pip install transformers
from transformers import pipeline

model_checkpoint = "IProject-10/xlm-roberta-base-finetuned-squad2"
question_answerer = pipeline("question-answering", model=model_checkpoint)

context = """
The Statue of Unity is the world's tallest statue, with a height of 182 metres (597 feet), located near Kevadia in the state of Gujarat, India.
"""
question = "What is the height of the Statue of Unity?"
question_answerer(question=question, context=context)
```
For Question-Answering in Hindi:
```python
# pip install transformers
from transformers import pipeline

model_checkpoint = "IProject-10/xlm-roberta-base-finetuned-squad2"
question_answerer = pipeline("question-answering", model=model_checkpoint)
context = """
स्टैच्यू ऑफ यूनिटी दुनिया की सबसे ऊंची प्रतिमा है, जिसकी ऊंचाई 182 मीटर (597 फीट) है, जो भारत के गुजरात राज्य में केवडिया के पास स्थित है।
"""
question = "स्टैच्यू ऑफ यूनिटी की ऊंचाई कितनी है?"
question_answerer(question=question, context=context)
```
For Question-Answering in Spanish:
```python
# pip install transformers
from transformers import pipeline

model_checkpoint = "IProject-10/xlm-roberta-base-finetuned-squad2"
question_answerer = pipeline("question-answering", model=model_checkpoint)
context = """
La Estatua de la Unidad es la estatua más alta del mundo, con una altura de 182 metros (597 pies), ubicada cerca de Kevadia en el estado de Gujarat, India.
"""
question = "¿Cuál es la altura de la estatua de la Unidad?"
question_answerer(question=question, context=context)
```
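Because the model is fine-tuned on SQuAD 2.0, which contains unanswerable questions, it also learns to abstain when the context holds no answer. Below is a minimal sketch of allowing an empty prediction via the standard `handle_impossible_answer` argument of the Transformers question-answering pipeline; the deliberately unanswerable question is only an illustration.
```python
from transformers import pipeline

model_checkpoint = "IProject-10/xlm-roberta-base-finetuned-squad2"
question_answerer = pipeline("question-answering", model=model_checkpoint)

context = """
The Statue of Unity is the world's tallest statue, with a height of 182 metres (597 feet), located near Kevadia in the state of Gujarat, India.
"""
# Unanswerable from this context: with handle_impossible_answer=True the pipeline
# may return an empty answer string instead of forcing a span from the passage.
question = "Who designed the Statue of Liberty?"
question_answerer(question=question, context=context, handle_impossible_answer=True)
```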
## Results
Evaluation on SQuAD 2.0 validation dataset:
```
exact: 75.51587635812348,
f1: 78.7328391907263,
total: 11873,
HasAns_exact: 73.00944669365722,
HasAns_f1: 79.45259779208723,
HasAns_total: 5928,
NoAns_exact: 78.01513877207738,
NoAns_f1: 78.01513877207738,
NoAns_total: 5945,
best_exact: 75.51587635812348,
best_exact_thresh: 0.999241054058075,
best_f1: 78.73283919072665,
best_f1_thresh: 0.999241054058075,
total_time_in_seconds: 218.97641910400125,
samples_per_second: 54.220450076686134,
latency_in_seconds: 0.018443225730986376
```
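These metrics follow the official SQuAD 2.0 evaluation protocol. A minimal sketch of computing the same keys with the `evaluate` library is shown below; the id, answer text, and `answer_start` are illustrative placeholders, and a real run feeds all 11,873 validation examples.
```python
import evaluate

squad_v2_metric = evaluate.load("squad_v2")

# Placeholder prediction/reference pair; real evaluation uses the full validation split.
predictions = [
    {"id": "001", "prediction_text": "182 metres", "no_answer_probability": 0.0}
]
references = [
    {"id": "001", "answers": {"text": ["182 metres"], "answer_start": [75]}}
]

results = squad_v2_metric.compute(predictions=predictions, references=references)
print(results)  # keys include exact, f1, HasAns_exact, NoAns_exact, best_exact, best_f1, ...
```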
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them to `TrainingArguments` follows the list):
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
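A minimal sketch of how these values map to the Transformers `Trainer` setup; the `evaluation_strategy` and `fp16` settings are assumptions (per-epoch evaluation matching the table below, and mixed precision on the Tesla T4), not stated in the card.
```python
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_checkpoint = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(model_checkpoint)

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-squad2",
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 is the Trainer's default optimizer.
    evaluation_strategy="epoch",  # assumption: per-epoch evaluation, matching the results table
    fp16=True,                    # assumption: mixed precision on the Tesla T4
)

# train_dataset / eval_dataset are the tokenized SQuAD 2.0 splits (preprocessing not shown).
# trainer = Trainer(
#     model=model,
#     args=training_args,
#     train_dataset=train_dataset,
#     eval_dataset=eval_dataset,
#     tokenizer=tokenizer,
# )
# trainer.train()
```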
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0539 | 1.0 | 8333 | 0.9962 |
| 0.8013 | 2.0 | 16666 | 0.8910 |
| 0.5918 | 3.0 | 24999 | 0.9802 |
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9802
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3