---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
base_model: sileod/deberta-v3-base-tasksource-nli
model-index:
- name: deberta-v3-bass-complex-questions_classifier
  results: []
---

# deberta-v3-bass-complex-questions_classifier
This model is a fine-tuned version of [sileod/deberta-v3-base-tasksource-nli](https://huggingface.co/sileod/deberta-v3-base-tasksource-nli) on an unknown dataset. It is designed to classify questions into three categories: `simple`, `multi`, and `compare`.
It achieves the following results on the evaluation set:
- Loss: 0.0
- Accuracy: 1.0
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
## Model description
The model classifies questions by their complexity into three categories:
- Simple: contains exactly one question.
- Multi: contains two or more questions.
- Compare: involves a direct comparison, either between specific (invented) company names or between different aspects of the same company.
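A minimal usage sketch with the Hugging Face Transformers `pipeline` API is shown below. The repository id and the exact output label names are assumptions based on the categories above, not details confirmed by this card; adjust them to the published checkpoint.

```python
from transformers import pipeline

# Hypothetical hub path; replace with the actual repository id of this model.
classifier = pipeline(
    "text-classification",
    model="your-username/deberta-v3-bass-complex-questions_classifier",
)

questions = [
    "What was AcmeCorp's total revenue in 2023?",                           # simple
    "What was the revenue in 2023, and how many employees were hired?",     # multi
    "Did AcmeCorp or GlobexInc report a higher operating margin in 2023?",  # compare
]

for question in questions:
    prediction = classifier(question)[0]  # e.g. {"label": "...", "score": 0.99}
    print(f"{prediction['label']}\t{prediction['score']:.3f}\t{question}")
```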
## Intended uses & limitations
This model can be used for question classification tasks such as organizing large datasets of questions or automating question routing in customer-service systems. However, it may not generalize well to questions outside the scope of the training data or to questions in languages other than English.
## Training and evaluation data
The training and evaluation datasets used for fine-tuning this model will be uploaded soon. They will contain questions labeled as simple, multi, or compare to support training and evaluation of the model.
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
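As a rough illustration, these settings map onto the Hugging Face `Trainer` API as sketched below. The tiny in-memory dataset and the fresh 3-way classification head are assumptions for the sake of a self-contained example, not the author's released training script; the Adam betas and epsilon listed above match the Trainer defaults.

```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "sileod/deberta-v3-base-tasksource-nli"
tokenizer = AutoTokenizer.from_pretrained(base)
# Replace the NLI head with a fresh 3-way head for simple / multi / compare.
model = AutoModelForSequenceClassification.from_pretrained(
    base, num_labels=3, ignore_mismatched_sizes=True
)

# Tiny hypothetical examples standing in for the (not yet released) dataset.
raw = Dataset.from_dict({
    "text": [
        "What was the total revenue in 2023?",
        "What was the revenue in 2023, and who is the CEO?",
    ],
    "label": [0, 1],  # assumed mapping: 0 = simple, 1 = multi, 2 = compare
})
encoded = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True), batched=True
)

args = TrainingArguments(
    output_dir="deberta-v3-bass-complex-questions_classifier",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=0,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded,
    eval_dataset=encoded,  # placeholder; use a separate evaluation split in practice
    tokenizer=tokenizer,   # enables dynamic padding via the default data collator
)
trainer.train()
```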
### Training results
The model achieves the following results on the evaluation set:
- Loss: 0.0
- Accuracy: 1.0
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
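These are the standard accuracy, precision, recall, and F1 scores. A sketch of how such metrics can be computed for a three-class problem in a Trainer `compute_metrics` hook follows; the use of scikit-learn and macro averaging are assumptions, not details stated in this card.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    """Hypothetical metrics hook for the 3-way question classifier."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="macro", zero_division=0  # macro averaging assumed
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```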
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.1
- Datasets 2.15.0
- Tokenizers 0.15.2