
distilbert-base-uncased Quora Duplicate Pair Detection

This model is a fine-tuned version of distilbert-base-uncased on the Quora Question Pairs ("quora") dataset for detecting duplicate questions.

Results (final epoch): Loss: 0.111300, Accuracy: 0.900740, F1: 0.868633

Model Description: DistilBERT is a distilled form of BERT. Knowledge distillation during the pre-training phase reduces the model size by 40% while retaining 97% of BERT's language-understanding capabilities and running 60% faster.

Training and evaluation data: the "quora" dataset (Quora Question Pairs) from the Hugging Face Hub.
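As a rough illustration, the data can be loaded with the `datasets` library (a minimal sketch; the exact train/validation split used for fine-tuning is not documented here, so the 90/10 split below is an assumption):

```python
from datasets import load_dataset

# Load the Quora Question Pairs dataset from the Hugging Face Hub.
# Each example holds a pair of questions and an "is_duplicate" flag.
dataset = load_dataset("quora")

# The dataset ships with a single "train" split, so a held-out evaluation set
# has to be carved out manually; this 90/10 split is an assumption.
split = dataset["train"].train_test_split(test_size=0.1, seed=42)
train_ds, eval_ds = split["train"], split["test"]

print(train_ds[0])  # e.g. {'questions': {'id': [...], 'text': [...]}, 'is_duplicate': ...}
```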

Training Hyperparameters: learning_rate=3e-4, per_device_train_batch_size=32, per_device_eval_batch_size=32, num_train_epochs=4, evaluation_strategy="epoch", seed=42, optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
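For reference, these settings map onto Hugging Face `TrainingArguments` roughly as follows (a minimal sketch; `output_dir` and any arguments not listed above are assumptions, and the Adam settings named in the card are the `Trainer` defaults):

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="distilbert-quora-duplicates",
    learning_rate=3e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=4,
    evaluation_strategy="epoch",
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default optimizer.
)
```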

Training Results:

| Epoch | Training Loss | Validation Loss | Accuracy | F1 |
|-------|---------------|-----------------|----------|----------|
| 1 | 0.271500 | 0.264808 | 0.884909 | 0.844402 |
| 2 | 0.191200 | 0.258109 | 0.896399 | 0.866099 |
| 3 | 0.111300 | 0.315554 | 0.900740 | 0.868633 |
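The per-epoch accuracy and F1 values can be computed with a `compute_metrics` callback along these lines (a sketch using the `evaluate` library; the callback actually used for this card is not published):

```python
import numpy as np
import evaluate

accuracy_metric = evaluate.load("accuracy")
f1_metric = evaluate.load("f1")

def compute_metrics(eval_pred):
    """Compute accuracy and F1 from the Trainer's (logits, labels) tuple."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_metric.compute(predictions=preds, references=labels)["accuracy"],
        "f1": f1_metric.compute(predictions=preds, references=labels)["f1"],
    }
```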

Label 0 = Not Duplicate, Label 1 = Duplicate
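A question pair can be scored as follows (a usage sketch; `model_id` is a placeholder for this model's actual Hub repo id):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "<this-model-id>"  # placeholder: replace with this model's repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

q1 = "How do I learn Python quickly?"
q2 = "What is the fastest way to learn Python?"

# Encode the pair as a single sequence (question1 [SEP] question2).
inputs = tokenizer(q1, q2, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print("Duplicate" if pred == 1 else "Not Duplicate")
```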
