bnriiitb committed
Commit: 25774f7
Parent: e3e90e8

updated readme

Files changed (1): README.md (+23 −15)
README.md CHANGED
@@ -18,11 +18,13 @@ model-index:
     dataset:
       name: Chai_Bisket_Stories_16-08-2021_14-17
       type: Chai_Bisket_Stories_16-08-2021_14-17
+      config: None
+      split: None
       args: 'config: te, split: test'
     metrics:
     - name: Wer
       type: wer
-      value: 307.8748651564186
+      value: 77.48711850971065
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,8 +34,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Chai_Bisket_Stories_16-08-2021_14-17 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.1913
-- Wer: 307.8749
+- Loss: 0.7063
+- Wer: 77.4871
 
 ## Model description
 
@@ -53,28 +55,34 @@
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 4
-- eval_batch_size: 2
+- train_batch_size: 16
+- eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 50
-- training_steps: 400
+- lr_scheduler_warmup_steps: 500
+- training_steps: 5000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Wer      |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 1.4863        | 0.97  | 100  | 1.4943          | 283.3873 |
-| 1.2696        | 1.94  | 200  | 1.3335          | 280.5825 |
-| 1.1176        | 2.91  | 300  | 1.2336          | 312.1899 |
-| 1.0132        | 3.88  | 400  | 1.1913          | 307.8749 |
+| Training Loss | Epoch | Step | Validation Loss | Wer     |
+|:-------------:|:-----:|:----:|:---------------:|:-------:|
+| 0.2933        | 2.62  | 500  | 0.3849          | 86.6429 |
+| 0.0692        | 5.24  | 1000 | 0.3943          | 82.7190 |
+| 0.0251        | 7.85  | 1500 | 0.4720          | 82.4415 |
+| 0.0098        | 10.47 | 2000 | 0.5359          | 81.6092 |
+| 0.0061        | 13.09 | 2500 | 0.5868          | 75.9413 |
+| 0.0025        | 15.71 | 3000 | 0.6235          | 76.6944 |
+| 0.0009        | 18.32 | 3500 | 0.6634          | 78.3987 |
+| 0.0005        | 20.94 | 4000 | 0.6776          | 77.1700 |
+| 0.0002        | 23.56 | 4500 | 0.6995          | 78.2798 |
+| 0.0001        | 26.18 | 5000 | 0.7063          | 77.4871 |
 
 
 ### Framework versions
 
-- Transformers 4.25.0.dev0
-- Pytorch 1.12.1+cu113
+- Transformers 4.26.0.dev0
+- Pytorch 1.13.0
 - Datasets 2.7.1
 - Tokenizers 0.13.2
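For readers unfamiliar with the Wer metric changed in this commit: word error rate is the word-level Levenshtein distance between the reference transcript and the model's hypothesis, divided by the number of reference words, here scaled to a percentage. The card's value is produced by the training pipeline (typically via the `evaluate`/`jiwer` packages); the sketch below is a minimal illustrative reimplementation, not the exact code used.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate as a percentage (assumes a non-empty reference)."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming table for word-level Levenshtein distance.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)
```

Because insertions count as errors, WER can exceed 100% when the hypothesis is much longer than the reference, which is how the previous revision of this card could report a WER of 307.87.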