---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
base_model: sileod/deberta-v3-base-tasksource-nli
model-index:
- name: deberta-v3-bass-complex-questions_classifier
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# deberta-v3-bass-complex-questions_classifier

This model is a fine-tuned version of [sileod/deberta-v3-base-tasksource-nli](https://huggingface.co/sileod/deberta-v3-base-tasksource-nli) on an unknown dataset. It is designed to classify questions into three categories: simple, multi, and compare.

It achieves the following results on the evaluation set:
- Loss: 0.0
- Accuracy: 1.0
- Precision: 1.0
- Recall: 1.0
- F1: 1.0

## Model description

The model classifies questions by their complexity into three types:
- **Simple:** Contains exactly one question.
- **Multi:** Contains two or more questions.
- **Compare:** Involves a direct comparison, either between specific (synthetic) company names or between different aspects of the same company.


## Intended uses & limitations

This model can be used for question classification tasks, such as organizing large datasets of questions or automating question routing in customer-service systems. However, it may not generalize well to questions outside the scope of the training data, or to questions in languages other than English.
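As a sketch of how the classifier might be called: the snippet below assumes the checkpoint is reachable via a transformers text-classification pipeline. The `model_id` default is a placeholder (point it at your local checkpoint or the hosting Hub repository), and the `top_category` helper is not part of the released model.

```python
def top_category(predictions):
    """Return the highest-scoring label from a list of
    {"label": ..., "score": ...} dicts, as produced by a
    transformers text-classification pipeline with top_k=None."""
    return max(predictions, key=lambda p: p["score"])["label"]


def classify_question(question, model_id="deberta-v3-bass-complex-questions_classifier"):
    """Classify one question as simple, multi, or compare.

    model_id is a placeholder -- replace it with the actual path or
    Hub repository of this checkpoint.
    """
    # Imported lazily so top_category stays usable without transformers.
    from transformers import pipeline

    clf = pipeline("text-classification", model=model_id, top_k=None)
    # clf returns one list of label/score dicts per input string.
    return top_category(clf(question)[0])
```

For instance, `classify_question("What is the refund policy, and how long does shipping take?")` should return `multi` if the model behaves as described above, since the input contains two questions.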

## Training and evaluation data

The training and evaluation datasets used to fine-tune this model will be uploaded soon. They contain questions labeled as simple, multi, or compare.


## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
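Expressed with the transformers `Trainer` API, the settings above correspond roughly to the following `TrainingArguments` sketch. `output_dir` is a placeholder, not taken from the card; the Adam betas and epsilon listed above are the optimizer defaults in transformers, so they need no explicit setting here.

```python
from transformers import TrainingArguments

# Sketch reproducing the listed hyperparameters; batch sizes are
# interpreted as per-device values, as logged by the Trainer.
training_args = TrainingArguments(
    output_dir="out",               # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=0,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```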

### Training results

The model achieves the following results on the evaluation set:
- Loss: 0.0
- Accuracy: 1.0
- Precision: 1.0
- Recall: 1.0
- F1: 1.0

### Framework versions

- Transformers 4.38.2
- Pytorch 2.1.1
- Datasets 2.15.0
- Tokenizers 0.15.2