pedropauletti committed
Commit 4a2a975 · 1 Parent(s): c92fac1

End of training

Files changed (2)
  1. README.md +14 -14
  2. pytorch_model.bin +1 -1
README.md CHANGED
@@ -22,7 +22,7 @@ model-index:
   metrics:
   - name: Accuracy
     type: accuracy
-    value: 0.81
+    value: 0.84
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,8 +32,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.5199
-- Accuracy: 0.81
+- Loss: 0.5842
+- Accuracy: 0.84
 
 ## Model description
 
@@ -66,21 +66,21 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 2.026         | 1.0   | 113  | 1.8207          | 0.5      |
-| 1.3301        | 2.0   | 226  | 1.2287          | 0.64     |
-| 1.0454        | 3.0   | 339  | 0.9090          | 0.74     |
-| 0.8686        | 4.0   | 452  | 0.8438          | 0.74     |
-| 0.5966        | 5.0   | 565  | 0.6423          | 0.8      |
-| 0.3924        | 6.0   | 678  | 0.6111          | 0.79     |
-| 0.4546        | 7.0   | 791  | 0.5108          | 0.83     |
-| 0.1588        | 8.0   | 904  | 0.4974          | 0.84     |
-| 0.2416        | 9.0   | 1017 | 0.4641          | 0.85     |
-| 0.1248        | 10.0  | 1130 | 0.5199          | 0.81     |
+| 1.9813        | 1.0   | 113  | 1.7851          | 0.6      |
+| 1.2847        | 2.0   | 226  | 1.1387          | 0.68     |
+| 0.9723        | 3.0   | 339  | 0.8980          | 0.79     |
+| 0.8016        | 4.0   | 452  | 0.8234          | 0.75     |
+| 0.7288        | 5.0   | 565  | 0.6514          | 0.84     |
+| 0.323         | 6.0   | 678  | 0.6621          | 0.8      |
+| 0.4408        | 7.0   | 791  | 0.5913          | 0.82     |
+| 0.1275        | 8.0   | 904  | 0.5346          | 0.84     |
+| 0.3043        | 9.0   | 1017 | 0.5475          | 0.86     |
+| 0.1396        | 10.0  | 1130 | 0.5842          | 0.84     |
 
 
 ### Framework versions
 
 - Transformers 4.35.0.dev0
-- Pytorch 2.0.1+cu118
+- Pytorch 2.1.0+cu118
 - Datasets 2.14.5
 - Tokenizers 0.14.1
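The updated card describes a distilhubert checkpoint fine-tuned for music genre classification on GTZAN. Below is a minimal sketch of loading such a checkpoint for inference with the Transformers audio-classification pipeline; the repository id and audio filename are placeholder assumptions, not confirmed by this commit.

```python
# Minimal sketch: run GTZAN genre classification with the fine-tuned checkpoint.
# NOTE: the repo id below is an assumption; substitute the actual model repository.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="pedropauletti/distilhubert-finetuned-gtzan",  # assumed repo id
)

# Classify a local audio clip (placeholder path); returns labels with scores.
predictions = classifier("example.wav", top_k=5)
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```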
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8d6e221612818fa8c3edf60e7baa77fe8a5e837b05653ce905687bf195d67206
+oid sha256:420e63a0020f0b912287cfcb908798d7732f3904ba56a6b164e5e7c4e15bee70
 size 94783885
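The pytorch_model.bin entry is a Git LFS pointer, so only its sha256 oid changes while the size stays at 94783885 bytes. A small sketch, assuming the actual weights have already been downloaded locally, for checking a file against the new pointer:

```python
# Sketch: verify a local pytorch_model.bin against the LFS pointer in this commit.
import hashlib
import os

EXPECTED_OID = "420e63a0020f0b912287cfcb908798d7732f3904ba56a6b164e5e7c4e15bee70"
EXPECTED_SIZE = 94783885

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large checkpoints need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

path = "pytorch_model.bin"  # assumed local path to the downloaded weights
assert os.path.getsize(path) == EXPECTED_SIZE, "size does not match the pointer"
assert sha256_of(path) == EXPECTED_OID, "sha256 does not match the pointer"
print("pytorch_model.bin matches the LFS pointer in commit 4a2a975")
```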