## Model description

DistilBERT fine-tuned on SQuAD 2.0: an encoder-based Transformer language model. DistilBERT is a compact, efficient version of BERT (Bidirectional Encoder Representations from Transformers). It is trained with knowledge distillation, which transfers knowledge from a larger pretrained teacher model (such as BERT) to a smaller student model. Fine-tuned for extractive question answering, it predicts answer spans within a provided context (see the inference sketch below).
- Language model: distilbert-base-uncased
- Language: English
- Downstream task: Question Answering
- Training data: SQuAD 2.0 train set
- Evaluation data: SQuAD 2.0 validation set
- Hardware accelerator: GPU (Tesla T4)
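Since this is an extractive QA model, inference scores every context token as a possible answer start and end. As an illustration only, here is a minimal sketch of direct span prediction with the `AutoModelForQuestionAnswering` API; the question and context strings are placeholders, and the no-answer handling that SQuAD 2.0 requires is omitted for brevity:

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

checkpoint = "IProject-10/distilbert-base-uncased-finetuned-squad2"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(checkpoint)

# Placeholder inputs for illustration.
question = "What does knowledge distillation transfer?"
context = "Knowledge distillation transfers knowledge from a large teacher model to a smaller student model."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Greedily pick the highest-scoring start and end token positions,
# then decode that span of the input as the answer text.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0, start : end + 1]))
```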
## Intended uses & limitations

For question answering, first install the library with `pip install transformers`, then:

```python
from transformers import pipeline

model_checkpoint = "IProject-10/distilbert-base-uncased-finetuned-squad2"
question_answerer = pipeline("question-answering", model=model_checkpoint)

context = """
🤗 Transformers is backed by the three most popular deep learning libraries — Jax, PyTorch and TensorFlow — with a seamless integration
between them. It's straightforward to train your models with one before loading them for inference with the other.
"""
question = "Which deep learning libraries back 🤗 Transformers?"
question_answerer(question=question, context=context)
```
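The pipeline returns a dict of the form `{'score': ..., 'start': ..., 'end': ..., 'answer': ...}`; for this example the extracted span should be "Jax, PyTorch and TensorFlow" (the exact score depends on the environment).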
## Results

Evaluation on the SQuAD 2.0 validation dataset:
- exact: 65.88056935904994
- f1: 68.9782873196397
- total: 11873
- HasAns_exact: 68.15114709851552
- HasAns_f1: 74.35546648888003
- HasAns_total: 5928
- NoAns_exact: 63.61648444070648
- NoAns_f1: 63.61648444070648
- NoAns_total: 5945
- best_exact: 65.88056935904994
- best_exact_thresh: 0.9993563294410706
- best_f1: 68.97828731963992
- best_f1_thresh: 0.9993563294410706
- total_time_in_seconds: 122.51037029999998
- samples_per_second: 96.91424465476456
- latency_in_seconds: 0.01031840059799545
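The report above follows the official SQuAD 2.0 metric schema. As a sketch only, the same schema can be reproduced with the `evaluate` library; the prediction and reference records below are dummy placeholders, and real usage would collect one prediction per validation example:

```python
import evaluate

squad_v2_metric = evaluate.load("squad_v2")

# Dummy records in the squad_v2 schema (ids and texts are placeholders).
predictions = [
    {"id": "ex0", "prediction_text": "Tesla T4", "no_answer_probability": 0.0},
    {"id": "ex1", "prediction_text": "", "no_answer_probability": 1.0},
]
references = [
    {"id": "ex0", "answers": {"text": ["Tesla T4"], "answer_start": [24]}},
    {"id": "ex1", "answers": {"text": [], "answer_start": []}},  # unanswerable
]

results = squad_v2_metric.compute(predictions=predictions, references=references)
print(results)  # includes exact, f1, HasAns_*, NoAns_*, best_* keys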
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
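Assuming the standard 🤗 `Trainer` API was used (the training script is not part of this card), the listed settings map onto `TrainingArguments` roughly as below; `output_dir` is a placeholder, and the Adam betas/epsilon and linear scheduler listed above are the library defaults:

```python
from transformers import TrainingArguments

# Sketch only; output_dir is a placeholder, not the author's actual path.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-squad2",
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",  # library default: linear decay
    num_train_epochs=3,
)
```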
## Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1952        | 1.0   | 8235  | 1.2246          |
| 0.8749        | 2.0   | 16470 | 1.3015          |
| 0.6708        | 3.0   | 24705 | 1.4648          |
This model is a fine-tuned version of distilbert-base-uncased on the squad_v2 dataset. It achieves the following results on the evaluation set:
- Loss: 1.4648
## Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3