AdonaiHS committed on
Commit deebbf5
1 Parent(s): fa7c154

End of training

Files changed (1)
  1. README.md +15 -11
README.md CHANGED
@@ -22,7 +22,7 @@ model-index:
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.58
+      value: 0.77
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,8 +32,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.7506
-- Accuracy: 0.58
+- Loss: 0.9423
+- Accuracy: 0.77
 
 ## Model description
 
@@ -52,23 +52,27 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 5e-05
-- train_batch_size: 16
-- eval_batch_size: 16
+- learning_rate: 5e-06
+- train_batch_size: 8
+- eval_batch_size: 8
 - seed: 42
+- gradient_accumulation_steps: 2
+- total_train_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- training_steps: 100
+- training_steps: 3000
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 2.2637        | 0.44  | 25   | 2.1478          | 0.39     |
-| 2.0908        | 0.88  | 50   | 1.9407          | 0.51     |
-| 1.8916        | 1.32  | 75   | 1.7992          | 0.55     |
-| 1.798         | 1.75  | 100  | 1.7506          | 0.58     |
+| 2.1574        | 8.85  | 500  | 1.8008          | 0.66     |
+| 1.5882        | 17.7  | 1000 | 1.3509          | 0.7      |
+| 1.2416        | 26.55 | 1500 | 1.1347          | 0.72     |
+| 1.037         | 35.4  | 2000 | 1.0163          | 0.74     |
+| 0.9152        | 44.25 | 2500 | 0.9583          | 0.76     |
+| 0.8556        | 53.1  | 3000 | 0.9423          | 0.77     |
 
 
 ### Framework versions
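
For reference, a minimal sketch of how the updated hyperparameters in this commit could map onto `transformers.TrainingArguments`. The training script itself is not part of this diff, so `output_dir` and the 500-step evaluation interval (read off the results table) are assumptions; everything else is taken from the card.

```python
from transformers import TrainingArguments

# Hypothetical mapping of the card's hyperparameters onto TrainingArguments.
training_args = TrainingArguments(
    output_dir="distilhubert-finetuned-gtzan",  # placeholder output directory
    learning_rate=5e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size: 8 * 2 = 16
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=3000,                  # "training_steps: 3000" in the card
    eval_strategy="steps",           # named evaluation_strategy in older releases
    eval_steps=500,                  # assumption: matches the results table cadence
)
```

With `gradient_accumulation_steps=2` and a per-device batch size of 8, the effective train batch size is 16, which is what the added `total_train_batch_size: 16` line reports.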
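
Once the checkpoint is on the Hub, it can be loaded through the standard audio-classification pipeline. A minimal usage sketch, assuming a repo id of the form `AdonaiHS/distilhubert-finetuned-gtzan` (a placeholder; the actual id is not shown in this diff) and a local audio clip:

```python
from transformers import pipeline

# Placeholder repo id; substitute the actual Hub id of this checkpoint.
classifier = pipeline(
    "audio-classification",
    model="AdonaiHS/distilhubert-finetuned-gtzan",
)

# "example.wav" is a placeholder path to any audio file.
predictions = classifier("example.wav")
print(predictions)  # list of {"label": <GTZAN genre>, "score": <float>} entries
```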