Update README.md

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# deberta-v3-bass-complex-questions_classifier

This model is a fine-tuned version of [sileod/deberta-v3-base-tasksource-nli](https://huggingface.co/sileod/deberta-v3-base-tasksource-nli) on an unknown dataset. It is designed to classify questions into three categories: simple, multi, and compare.

It achieves the following results on the evaluation set:
- Loss: 0.0
- Accuracy: 1.0

## Model description

The model is trained to classify questions by their complexity (see the usage sketch after this list):
- **Simple:** Contains exactly one question.
- **Multi:** Contains two or more questions.
- **Compare:** Involves a direct comparison, either between specific (invented) company names or between different aspects of the same company.

## Intended uses & limitations

This model can be used for question classification tasks, such as organizing large datasets of questions or automating question routing in customer service systems. However, it may not generalize well to questions outside the scope of the training data, or to questions in languages other than English.
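
For the routing use case, one possible pattern is to dispatch on the predicted label. The handler functions below are purely illustrative placeholders, and the repo id and label names carry the same assumptions as the sketch above.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="deberta-v3-bass-complex-questions_classifier",  # assumed repo id
)

# Illustrative handlers; a real system would call downstream services here.
HANDLERS = {
    "simple": lambda q: f"answer directly: {q}",
    "multi": lambda q: f"split into sub-questions: {q}",
    "compare": lambda q: f"send to the comparison flow: {q}",
}

def route(question: str) -> str:
    label = classifier(question)[0]["label"].lower()
    # Fall back to the simple path if an unexpected label comes back.
    return HANDLERS.get(label, HANDLERS["simple"])(question)

print(route("How does Initech compare to Initrode on cloud spend?"))
```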

## Training and evaluation data

The training and evaluation datasets used for fine-tuning this model are not specified.

## Training procedure

### Training results

The model achieves the following results on the evaluation set; a sketch for reproducing these metrics follows the list:
- Loss: 0.0
- Accuracy: 1.0
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
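
When re-evaluating on your own data, the same metrics can be computed with `scikit-learn` along the lines below. The gold labels and predictions are placeholders, and macro averaging is an assumption, since the card does not state which averaging was used.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder gold labels and model predictions, for illustration only.
y_true = ["simple", "multi", "compare", "simple", "multi"]
y_pred = ["simple", "multi", "compare", "simple", "multi"]

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} f1={f1:.2f}")
```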

### Framework versions