Areeb123 committed
Commit 971fa52
1 Parent(s): e0a3adb

Update README.md

Files changed (1)
  1. README.md +9 -18
README.md CHANGED
@@ -1,11 +1,17 @@
 ---
-license: apache-2.0
+license: mit
 base_model: distilbert-base-uncased
 tags:
 - generated_from_keras_callback
 model-index:
 - name: Distilbert_Masked_Language_Model_IMDB
   results: []
+datasets:
+- imdb
+language:
+- en
+metrics:
+- perplexity
 ---
 
 <!-- This model card has been generated automatically according to the information Keras had access to. You should
@@ -17,20 +23,6 @@ This model is a fine-tuned version of [distilbert-base-uncased](https://huggingf
 It achieves the following results on the evaluation set:
 
 
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
-
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
@@ -38,12 +30,11 @@ The following hyperparameters were used during training:
 - training_precision: float32
 
 ### Training results
-
-
+perplexity = 13
 
 ### Framework versions
 
 - Transformers 4.35.2
 - TensorFlow 2.14.0
 - Datasets 2.15.0
-- Tokenizers 0.15.0
+- Tokenizers 0.15.0
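The updated card reports a single evaluation metric, perplexity = 13, which for these auto-generated Keras masked-language-model cards is typically exp of the evaluation loss, i.e. an evaluation cross-entropy of roughly ln 13 ≈ 2.56. As a minimal sketch of how the model could be tried out with the `fill-mask` pipeline (the Hub repo id `Areeb123/Distilbert_Masked_Language_Model_IMDB` is assumed from the commit author and the card's model-index name; the card itself includes no usage example):

```python
from transformers import pipeline

# Hub repo id assumed from the commit author and the model-index name on the
# card; adjust it if the actual repository path differs.
MODEL_ID = "Areeb123/Distilbert_Masked_Language_Model_IMDB"

# The card lists TensorFlow/Keras framework versions, so request the TF backend.
fill_mask = pipeline("fill-mask", model=MODEL_ID, framework="tf")

# DistilBERT (uncased) uses [MASK] as its mask token; the prompt is an
# illustrative IMDB-style sentence, not taken from the card.
for pred in fill_mask("This movie was an absolute [MASK]."):
    print(f"{pred['token_str']:>12}  score={pred['score']:.3f}")
```

This only demonstrates inference; the reported perplexity would come from the masked-LM evaluation loss during Keras training, not from this snippet.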