MMars committed
Commit: 28e435a
Parent: 1563d21

Update README.md

Files changed (1):
  1. README.md +5 -11
README.md CHANGED
@@ -53,27 +53,21 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
+
 train_batch_size=16
 
-gradient_accumulation_steps=1
+eval_batch_size=8
+
+optimizer: Adam
 
 learning_rate=1e-5
 
 warmup_steps=500
 
 max_steps=4000
-
-gradient_checkpointing=True
-
-fp16=True
-
-evaluation_strategy="steps"
-
-save_steps=1000
-
+
 eval_steps=1000
 
-logging_steps=25
 
 metric_for_best_model="wer"
 
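For context, the parameter names on both sides of this diff follow the Hugging Face `Trainer` API, so a minimal, hypothetical sketch of the corresponding training arguments is shown below. It is not part of the commit: `output_dir` and `greater_is_better` are assumptions, `Seq2SeqTrainingArguments` is assumed only because the model is selected on WER, and the parameters removed from the README by this edit are marked in comments.

```python
# Illustrative sketch only: how the values listed above could be expressed as
# Hugging Face training arguments. The README does not contain this code.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="output",                  # placeholder; not stated in the README
    per_device_train_batch_size=16,       # train_batch_size=16
    per_device_eval_batch_size=8,         # eval_batch_size=8
    gradient_accumulation_steps=1,        # listed before this edit (also the default)
    learning_rate=1e-5,
    warmup_steps=500,
    max_steps=4000,
    gradient_checkpointing=True,          # listed before this edit
    fp16=True,                            # listed before this edit
    evaluation_strategy="steps",          # listed before this edit
    save_steps=1000,                      # listed before this edit
    eval_steps=1000,
    logging_steps=25,                     # listed before this edit
    metric_for_best_model="wer",
    greater_is_better=False,              # assumption: lower WER is better
)
```

The `optimizer: Adam` entry matches the `Trainer` default optimizer (AdamW), so no explicit optimizer argument appears in the sketch.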