jethrowang committed on
Commit 2f95141
1 Parent(s): 98a60e6

End of training

Files changed (2)
  1. README.md +27 -17
  2. model.safetensors +1 -1
README.md CHANGED
@@ -19,8 +19,8 @@ should probably proofread and complete it, then remove this comment. -->
  
  This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the HAT ASR Aligned dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.1319
- - Cer: 7.3572
+ - Loss: 0.1959
+ - Cer: 12.2188
  
  ## Model description
  
@@ -39,30 +39,40 @@ More information needed
  ### Training hyperparameters
  
  The following hyperparameters were used during training:
- - learning_rate: 0.0001
+ - learning_rate: 1e-05
  - train_batch_size: 64
  - eval_batch_size: 32
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 488
- - training_steps: 4880
+ - lr_scheduler_warmup_steps: 976
+ - training_steps: 9760
  - mixed_precision_training: Native AMP
  
  ### Training results
  
- | Training Loss | Epoch | Step | Validation Loss | Cer |
- |:-------------:|:------:|:----:|:---------------:|:-------:|
- | 0.2061 | 0.9980 | 488 | 0.3095 | 30.6829 |
- | 0.1014 | 1.9959 | 976 | 0.2149 | 16.2991 |
- | 0.0486 | 2.9939 | 1464 | 0.1858 | 15.3582 |
- | 0.0243 | 3.9918 | 1952 | 0.1723 | 14.7074 |
- | 0.0115 | 4.9898 | 2440 | 0.1626 | 12.7598 |
- | 0.0046 | 5.9877 | 2928 | 0.1542 | 9.6054 |
- | 0.0016 | 6.9857 | 3416 | 0.1437 | 8.7997 |
- | 0.0006 | 7.9836 | 3904 | 0.1355 | 9.0378 |
- | 0.0004 | 8.9816 | 4392 | 0.1315 | 7.8473 |
- | 0.0004 | 9.9796 | 4880 | 0.1319 | 7.3572 |
+ | Training Loss | Epoch | Step | Validation Loss | Cer |
+ |:-------------:|:-------:|:----:|:---------------:|:-------:|
+ | 1.189 | 0.9980 | 488 | 1.2025 | 50.1503 |
+ | 0.3904 | 1.9959 | 976 | 0.4830 | 26.8916 |
+ | 0.2027 | 2.9939 | 1464 | 0.3017 | 17.2273 |
+ | 0.1241 | 3.9918 | 1952 | 0.2566 | 15.3859 |
+ | 0.0837 | 4.9898 | 2440 | 0.2299 | 14.5098 |
+ | 0.0558 | 5.9877 | 2928 | 0.2175 | 13.6302 |
+ | 0.0365 | 6.9857 | 3416 | 0.2119 | 13.6151 |
+ | 0.0266 | 7.9836 | 3904 | 0.2052 | 13.6059 |
+ | 0.0197 | 8.9816 | 4392 | 0.1990 | 11.9877 |
+ | 0.0131 | 9.9796 | 4880 | 0.1982 | 12.7887 |
+ | 0.0082 | 10.9775 | 5368 | 0.1987 | 12.5864 |
+ | 0.006 | 11.9755 | 5856 | 0.1985 | 13.6336 |
+ | 0.0046 | 12.9734 | 6344 | 0.1971 | 13.0037 |
+ | 0.0035 | 13.9714 | 6832 | 0.1945 | 12.7390 |
+ | 0.0034 | 14.9693 | 7320 | 0.1966 | 12.7135 |
+ | 0.0026 | 15.9673 | 7808 | 0.1954 | 12.6477 |
+ | 0.0022 | 16.9652 | 8296 | 0.1958 | 12.5922 |
+ | 0.0021 | 17.9632 | 8784 | 0.1957 | 11.5970 |
+ | 0.0019 | 18.9611 | 9272 | 0.1959 | 12.0061 |
+ | 0.0018 | 19.9591 | 9760 | 0.1959 | 12.2188 |
  
  
  ### Framework versions
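
For reference, a minimal sketch of how the updated hyperparameters in this commit might map onto `Seq2SeqTrainingArguments` from `transformers`. The `output_dir` and the dataset/collator wiring are placeholders and are not taken from this commit; only the values listed in the card above are reproduced.

```python
# Sketch only: mirrors the hyperparameters listed in the updated model card.
# output_dir is a placeholder; dataset, processor, and collator setup are omitted.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-hat",   # placeholder path, not from the card
    learning_rate=1e-5,                # changed from 1e-4 in this commit
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=976,                  # changed from 488
    max_steps=9760,                    # changed from 4880
    fp16=True,                         # "Native AMP" mixed precision
    adam_beta1=0.9,                    # Adam betas=(0.9, 0.999), epsilon=1e-08
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```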
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c6759fe9beab2a73f3e1556460c4aa0f6bb78dc1932e08504344c88bbaaac33b
+ oid sha256:3eb834e7ce94e86ef11295061652b5012ad4d2be63df1a5c9ac35b341235ed7c
  size 151061672
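
A similarly hedged sketch of loading the updated checkpoint and scoring a transcription with CER, the metric reported in the card. The local checkpoint path is a placeholder for this repository's files; the exact evaluation pipeline used for the numbers above is not shown in this commit.

```python
# Sketch only: load the fine-tuned Whisper checkpoint and compute CER.
import evaluate
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("./whisper-tiny-hat")  # placeholder path

# Transcription of a 16 kHz waveform (audio loading omitted):
# inputs = processor(audio_array, sampling_rate=16000, return_tensors="pt")
# generated_ids = model.generate(inputs.input_features)
# prediction = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

# Character error rate, as reported in the card's "Cer" column.
cer = evaluate.load("cer")
score = cer.compute(predictions=["predicted text"], references=["reference text"])
print(f"CER: {score:.4f}")
```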