Commit 19a2181 committed by raygx
1 Parent(s): ff66d69

Upload TFDistilBertForMaskedLM

Files changed (3):
  1. README.md +6 -6
  2. config.json +3 -3
  3. tf_model.h5 +2 -2
README.md CHANGED
@@ -2,14 +2,14 @@
 tags:
 - generated_from_keras_callback
 model-index:
-- name: dBERT-Nepali-Masked-LM
+- name: distilBERT-Nepali
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information Keras had access to. You should
 probably proofread and complete it, then remove this comment. -->
 
-# dBERT-Nepali-Masked-LM
+# distilBERT-Nepali
 
 This model was trained from scratch on an unknown dataset.
 It achieves the following results on the evaluation set:
@@ -32,8 +32,8 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- optimizer: None
-- training_precision: float32
+- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 16760, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
+- training_precision: mixed_float16
 
 ### Training results
 
@@ -41,7 +41,7 @@ The following hyperparameters were used during training:
 
 ### Framework versions
 
-- Transformers 4.28.1
-- TensorFlow 2.11.0
+- Transformers 4.30.2
+- TensorFlow 2.12.0
 - Datasets 2.1.0
 - Tokenizers 0.13.3
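The new optimizer entry serializes an AdamWeightDecay optimizer whose learning rate is a WarmUp schedule wrapping a PolynomialDecay: 1000 warmup steps up to 5e-05, then decay to 0.0 over 16760 steps with power 1.0 (i.e. linear). A minimal pure-Python sketch of the resulting curve, using only the numbers from that dict (how warmup composes with decay, and whether decay counts from step 0 or from the end of warmup, is an assumption here, not the exact Keras implementation):

```python
def lr_at_step(step,
               init_lr=5e-05,      # initial_learning_rate from the serialized config
               warmup_steps=1000,  # WarmUp warmup_steps
               decay_steps=16760,  # PolynomialDecay decay_steps
               end_lr=0.0,         # end_learning_rate
               power=1.0):         # power=1.0 makes the decay linear
    """Approximate the WarmUp + PolynomialDecay schedule from the optimizer dict."""
    if step < warmup_steps:
        # Linear warmup from 0 to init_lr over warmup_steps.
        return init_lr * step / warmup_steps
    # Polynomial decay from init_lr toward end_lr; assumed to count from step 0.
    frac = min(step, decay_steps) / decay_steps
    return (init_lr - end_lr) * (1.0 - frac) ** power + end_lr
```

With these values the rate peaks near 5e-05 at step 1000 and reaches 0.0 at step 16760.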
config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "/kaggle/input/distilbert-nepali-maskedlm/DistilBert-Nepali-MaskedLM",
+  "_name_or_path": "raygx/distilBERT-Nepali",
   "activation": "gelu",
   "architectures": [
     "DistilBertForMaskedLM"
@@ -20,6 +20,6 @@
   "seq_classif_dropout": 0.2,
   "sinusoidal_pos_embds": false,
   "tie_weights_": true,
-  "transformers_version": "4.28.1",
-  "vocab_size": 30000
+  "transformers_version": "4.30.2",
+  "vocab_size": 50000
 }
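Only three config.json fields change in this commit: the model path, the transformers version, and the vocabulary size (30000 to 50000, which is consistent with the larger tf_model.h5 below). A quick sketch of the delta, with field names copied from the diff (the dict layout is illustrative, not a library API):

```python
import json

# Changed fields from the config.json diff, as (old, new) pairs.
config_delta = {
    "_name_or_path": (
        "/kaggle/input/distilbert-nepali-maskedlm/DistilBert-Nepali-MaskedLM",
        "raygx/distilBERT-Nepali",
    ),
    "transformers_version": ("4.28.1", "4.30.2"),
    "vocab_size": (30000, 50000),
}

# Fragment of the new config after the commit; all other keys are unchanged.
new_config_fragment = {key: new for key, (_old, new) in config_delta.items()}
print(json.dumps(new_config_fragment, indent=2))
```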
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c64de5cd0c33d09dfc0c912b5d18f62b6e975394a1065cad75c3330ceff88c79
-size 360214392
+oid sha256:b466f1f965142edcd99bbc4b2739fd7b6902837491970417767034ec361a9967
+size 636980224
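What the repository stores for tf_model.h5 is a git-lfs pointer file, not the weights themselves: three `key value` lines giving the spec version, the object's sha256 digest, and its byte size. A small sketch of reading such a pointer (`parse_lfs_pointer` is a hypothetical helper, not part of git-lfs or any library):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a git-lfs pointer file (spec v1) into its key/value fields.

    Each line has the form '<key> <value>', e.g. 'size 636980224'.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new pointer introduced by this commit:
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:b466f1f965142edcd99bbc4b2739fd7b6902837491970417767034ec361a9967\n"
    "size 636980224\n"
)
info = parse_lfs_pointer(pointer)
```

The `size` field shows the checkpoint growing from 360214392 to 636980224 bytes between the two revisions.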