pt-sk committed on
Commit 9098175
1 Parent(s): c8120d1

Update README.md

Files changed (1): README.md +9 -5
README.md CHANGED
@@ -1,15 +1,16 @@
 ---
 license: apache-2.0
-base_model: distilbert-base-uncased
+base_model: Distilbert-finetuned-emotion
 tags:
 - generated_from_trainer
+- Pytorch
 datasets:
 - emotion
 metrics:
 - accuracy
 - f1
 model-index:
-- name: distilbert-base-uncased-finetuned-emotion
+- name: Distilbert-finetuned-emotion
   results:
   - task:
       name: Text Classification
@@ -27,14 +28,17 @@ model-index:
     - name: F1
       type: f1
       value: 0.923296474937779
+language:
+- en
+library_name: transformers
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# distilbert-base-uncased-finetuned-emotion
+# Distilbert-finetuned-emotion
 
-distilbert is a variant of bert model(one of LLM models). This model with a classification head is used to classify the emotions of the input tweet.
+Distilbert is a variant of bert model(one of LLM models). This model with a classification head is used to classify the emotions of the input tweet.
 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.2195
@@ -82,4 +86,4 @@ The following hyperparameters were used during training:
 - Transformers 4.37.2
 - Pytorch 2.1.0+cu121
 - Datasets 2.16.1
-- Tokenizers 0.15.1
+- Tokenizers 0.15.1
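
For reference, a minimal usage sketch of the task the updated card describes (emotion classification of a tweet with the Transformers pipeline). The repository id `pt-sk/distilbert-finetuned-emotion` is an assumption inferred from the author and model name in this commit; substitute the model's actual Hub path.

```python
# Minimal sketch: load the fine-tuned emotion classifier via the Transformers pipeline API.
# NOTE: the repo id below is assumed from the card's author/model name; adjust to the real Hub path.
from transformers import pipeline

classifier = pipeline("text-classification", model="pt-sk/distilbert-finetuned-emotion")

# Classify a sample tweet; returns the top predicted emotion label and its score.
print(classifier("I can't wait to see my friends this weekend!"))
# e.g. [{'label': 'joy', 'score': ...}]
```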