Gabi00 committed (verified)
Commit aa6d0ef · 1 Parent(s): 255347c

End of training

Files changed (2):
  1. README.md +101 -0
  2. adapter_model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,101 @@
+ ---
+ base_model: distil-whisper/distil-large-v3
+ datasets:
+ - Gabi00/english-mistakes
+ language:
+ - eng
+ library_name: peft
+ license: apache-2.0
+ metrics:
+ - wer
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: Whisper Small Eng - Gabriel Mora
+   results:
+   - task:
+       type: automatic-speech-recognition
+       name: Automatic Speech Recognition
+     dataset:
+       name: English-mistakes
+       type: Gabi00/english-mistakes
+       config: default
+       split: validation
+       args: 'config: eng, split: test'
+     metrics:
+     - type: wer
+       value: 18.233650721249788
+       name: Wer
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # Whisper Small Eng - Gabriel Mora
+
+ This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the English-mistakes dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.6550
+ - Wer: 18.2337
+
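Because the metadata above declares `library_name: peft` with `base_model: distil-whisper/distil-large-v3`, the committed `adapter_model.safetensors` is an adapter rather than a full checkpoint. Below is a minimal sketch of loading it for inference; the adapter path, the use of that base checkpoint, and an `audio` column decoded at 16 kHz in the dataset are assumptions, not details stated in this card.

```python
from datasets import load_dataset
from peft import PeftModel
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor

base_id = "distil-whisper/distil-large-v3"  # base checkpoint from the YAML header
adapter_id = "path/to/this-adapter"         # placeholder; point it at this repo's adapter files

processor = AutoProcessor.from_pretrained(base_id)
model = AutoModelForSpeechSeq2Seq.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the fine-tuned adapter weights
model.eval()

# Take one validation example; an "audio" column decoded to a 16 kHz array is assumed.
example = next(iter(load_dataset("Gabi00/english-mistakes", split="validation", streaming=True)))
audio = example["audio"]

inputs = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt")
predicted_ids = model.generate(input_features=inputs.input_features, max_new_tokens=128)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```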
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 1e-05
+ - train_batch_size: 28
+ - eval_batch_size: 28
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 50
+ - training_steps: 100000
+ - mixed_precision_training: Native AMP
+
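For orientation, the hyperparameters above map onto `Seq2SeqTrainingArguments` roughly as sketched below; the `output_dir`, the 500-step evaluation cadence, and `fp16` as the concrete form of "Native AMP" are assumptions rather than details stated in the card.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-english-mistakes",  # assumed name
    per_device_train_batch_size=28,
    per_device_eval_batch_size=28,
    learning_rate=1e-5,
    lr_scheduler_type="linear",
    warmup_steps=50,
    max_steps=100_000,
    seed=42,
    fp16=True,                   # mixed_precision_training: Native AMP
    eval_strategy="steps",
    eval_steps=500,              # matches the 500-step cadence in the results table below
    predict_with_generate=True,  # generate text at eval time so WER can be computed
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 corresponds to the default optimizer
# settings, so no extra arguments are needed for it.
```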
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Wer |
+ |:-------------:|:------:|:-----:|:---------------:|:-------:|
+ | 1.5085 | 0.4444 | 500 | 1.1844 | 25.9507 |
+ | 1.1717 | 0.8889 | 1000 | 0.9522 | 25.2751 |
+ | 1.1302 | 1.3333 | 1500 | 0.8634 | 22.0879 |
+ | 1.0094 | 1.7778 | 2000 | 0.8098 | 21.0103 |
+ | 1.0509 | 2.2222 | 2500 | 0.7784 | 23.2054 |
+ | 0.9722 | 2.6667 | 3000 | 0.7555 | 21.5206 |
+ | 0.9562 | 3.1111 | 3500 | 0.7401 | 21.0075 |
+ | 0.9995 | 3.5556 | 4000 | 0.7269 | 19.8985 |
+ | 0.9497 | 4.0 | 4500 | 0.7170 | 19.3626 |
+ | 0.8703 | 4.4444 | 5000 | 0.7078 | 19.4652 |
+ | 1.0015 | 4.8889 | 5500 | 0.7004 | 20.1608 |
+ | 0.9248 | 5.3333 | 6000 | 0.6947 | 17.7034 |
+ | 0.9163 | 5.7778 | 6500 | 0.6880 | 17.4953 |
+ | 0.8833 | 6.2222 | 7000 | 0.6823 | 17.4668 |
+ | 0.9051 | 6.6667 | 7500 | 0.6770 | 17.4554 |
+ | 0.8882 | 7.1111 | 8000 | 0.6730 | 17.3613 |
+ | 0.8879 | 7.5556 | 8500 | 0.6684 | 18.3220 |
+ | 0.8396 | 8.0 | 9000 | 0.6647 | 18.2165 |
+ | 0.9282 | 8.4444 | 9500 | 0.6616 | 18.4646 |
+ | 0.8581 | 8.8889 | 10000 | 0.6578 | 18.1538 |
+ | 0.8938 | 9.3333 | 10500 | 0.6550 | 18.2337 |
+
+
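The Wer column is a word error rate expressed in percent. As a minimal sketch, a value like this is typically computed with the `evaluate` library; the transcripts below are illustrative only, not taken from the dataset.

```python
import evaluate  # the "wer" metric uses jiwer under the hood

wer_metric = evaluate.load("wer")

# Illustrative strings; during evaluation these are the generated transcripts
# and the validation-set references.
predictions = ["the cat sat on the mat"]
references = ["the cat sat on a mat"]

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}%")  # one substitution over six reference words ≈ 16.67%
```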
+ ### Framework versions
+
+ - PEFT 0.11.1
+ - Transformers 4.42.3
+ - Pytorch 2.1.0+cu118
+ - Datasets 2.20.0
+ - Tokenizers 0.19.1
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:37dd78cec127b93bf2a7a3277ea86a6520464070de31e392bf2b5fe6d31a348f
+ oid sha256:e597ea1434b89224149830a17e208599a627d2a5b413b1b163a33311b2f07f91
  size 11816808