Training finished
README.md CHANGED
@@ -20,9 +20,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the quran-ayat-speech-to-text dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- Wer: 12.
-- Cer: 4.
+- Loss: 0.0141
+- Wer: 12.6647
+- Cer: 4.0046
 
 ## Model description
 
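The hunk above fills in the final evaluation metrics (Loss 0.0141, WER 12.6647, CER 4.0046). As a rough illustration only, here is a minimal sketch of how such WER/CER scores can be produced for a fine-tuned Whisper checkpoint with the `transformers` and `evaluate` libraries; the repo id, audio file, and reference transcription are placeholders, since this diff does not name them, and the card's WER/CER appear to be reported as percentages.

```python
# Sketch (not from the model card): transcribe one clip with the fine-tuned
# Whisper checkpoint and score it with WER/CER. All paths/ids are placeholders.
import evaluate
import librosa
from transformers import WhisperForConditionalGeneration, WhisperProcessor

model_id = "your-username/whisper-tiny-quran-ayat"  # placeholder repo id
processor = WhisperProcessor.from_pretrained(model_id)
model = WhisperForConditionalGeneration.from_pretrained(model_id)

# Whisper expects 16 kHz mono audio.
speech, _ = librosa.load("ayah.wav", sr=16000)  # placeholder audio clip
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

predicted_ids = model.generate(inputs.input_features)
prediction = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]

reference = "..."  # fill in the ground-truth transcription for the clip
wer = evaluate.load("wer").compute(predictions=[prediction], references=[reference])
cer = evaluate.load("cer").compute(predictions=[prediction], references=[reference])
print(f"WER: {100 * wer:.2f}%  CER: {100 * cer:.2f}%")  # card reports 12.66 / 4.00 on its eval set
```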
@@ -50,7 +50,7 @@ The following hyperparameters were used during training:
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_steps: 500
-- num_epochs:
+- num_epochs: 8
 - mixed_precision_training: Native AMP
 
 ### Training results
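For context, the hyperparameters in the hunk above (adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08, cosine schedule, 500 warmup steps, now 8 epochs, native AMP) roughly correspond to a `Seq2SeqTrainingArguments` configuration like the sketch below. The learning rate, batch size, and eval cadence are assumptions not shown in this excerpt; the 400-step eval interval is only inferred from the spacing of the results table.

```python
# Sketch, not the author's actual configuration: Seq2SeqTrainingArguments
# mirroring the hyperparameters listed in the card excerpt above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-quran-ayat",  # placeholder
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=500,
    num_train_epochs=8,
    fp16=True,                       # "Native AMP" mixed-precision training
    learning_rate=1e-5,              # assumption: not shown in this excerpt
    per_device_train_batch_size=16,  # assumption: not shown in this excerpt
    eval_strategy="steps",           # older transformers versions: evaluation_strategy
    eval_steps=400,                  # inferred from the 400-step spacing in the results table
    predict_with_generate=True,
)
```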
@@ -99,13 +99,20 @@ The following hyperparameters were used during training:
 | 0.0016 | 5.6337 | 16000 | 4.3983 | 0.0134 | 13.8539 |
 | 0.0015 | 5.7746 | 16400 | 4.2035 | 0.0134 | 13.5426 |
 | 0.0016 | 5.9154 | 16800 | 4.2561 | 0.0134 | 13.6335 |
-| 0.0015 | 6.0563 | 17200 | 0.0134
-| 0.0015 | 6.1972 | 17600 | 0.0137
-| 0.0016 | 6.3380 | 18000 | 0.0137
-| 0.0014 | 6.4788 | 18400 | 0.0137
-| 0.0015 | 6.6197 | 18800 | 0.0137
-| 0.0015 | 6.7605 | 19200 | 0.0137
-| 0.0016 | 6.9013 | 19600 | 0.0137
+| 0.0015 | 6.0563 | 17200 | 4.3246 | 0.0134 | 13.6059 |
+| 0.0015 | 6.1972 | 17600 | 4.1759 | 0.0137 | 13.6142 |
+| 0.0016 | 6.3380 | 18000 | 4.2195 | 0.0137 | 13.5536 |
+| 0.0014 | 6.4788 | 18400 | 4.4176 | 0.0137 | 13.8760 |
+| 0.0015 | 6.6197 | 18800 | 4.2144 | 0.0137 | 13.5784 |
+| 0.0015 | 6.7605 | 19200 | 4.1868 | 0.0137 | 13.4874 |
+| 0.0016 | 6.9013 | 19600 | 4.0946 | 0.0137 | 13.3442 |
+| 0.0015 | 7.0422 | 20000 | 0.0139 | 13.5508 | 4.1526 |
+| 0.0012 | 7.1831 | 20400 | 0.0139 | 13.5040 | 4.1830 |
+| 0.0011 | 7.3239 | 20800 | 0.0138 | 13.3194 | 4.0708 |
+| 0.0017 | 7.4647 | 21200 | 0.0138 | 13.3552 | 4.0446 |
+| 0.0012 | 7.6056 | 21600 | 0.0139 | 13.3194 | 4.0699 |
+| 0.0011 | 7.7464 | 22000 | 0.0140 | 13.3001 | 4.0378 |
+| 0.0012 | 7.8872 | 22400 | 0.0139 | 13.3442 | 4.0558 |
 
 
 ### Framework versions
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:b3401dd5ccddf6e9e01e258f141b21a20a6066cfa801fbf65ade51b86fc7845c
 size 151061672
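The `model.safetensors` change only swaps the sha256 oid in the git-lfs pointer. A small sketch, assuming the weights have been downloaded locally (the path is a placeholder), of checking a file against the new pointer's oid and size:

```python
# Verify a local copy of model.safetensors against the git-lfs pointer above.
import hashlib
from pathlib import Path

expected_oid = "b3401dd5ccddf6e9e01e258f141b21a20a6066cfa801fbf65ade51b86fc7845c"
expected_size = 151061672
path = Path("model.safetensors")  # placeholder: wherever the file was downloaded

digest = hashlib.sha256()
with path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        digest.update(chunk)

assert path.stat().st_size == expected_size, "size does not match the pointer"
assert digest.hexdigest() == expected_oid, "sha256 does not match the pointer"
print("model.safetensors matches the git-lfs pointer")
```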
runs/Feb17_10-12-52_ac64f08baa54/events.out.tfevents.1739815704.ac64f08baa54.18.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a1d1507d439b983b8f03bd9ff06d988da2d7edc8a3e44afff329b4e8a795c8d0
+size 460
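The added `events.out.tfevents.*` entry is likewise just a 460-byte LFS pointer; after `git lfs pull`, the actual TensorBoard log can be inspected, for example with `EventAccumulator` from the `tensorboard` package. The scalar tag name below is an assumption about what the Trainer logged, so list the available tags first:

```python
# Sketch: read scalar summaries from the added TensorBoard event file.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator(
    "runs/Feb17_10-12-52_ac64f08baa54/events.out.tfevents.1739815704.ac64f08baa54.18.1"
)
acc.Reload()
print(acc.Tags()["scalars"])           # list the scalar tags actually present
for event in acc.Scalars("eval/wer"):  # assumed tag name
    print(event.step, event.value)
```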