RafaelMayer committed
Commit 3de5ed4
Parent: ba4af7a

Training in progress epoch 1

Files changed (7)
  1. README.md +60 -0
  2. config.json +37 -0
  3. special_tokens_map.json +7 -0
  4. tf_model.h5 +3 -0
  5. tokenizer.json +0 -0
  6. tokenizer_config.json +15 -0
  7. vocab.txt +0 -0
README.md ADDED
@@ -0,0 +1,60 @@
+ ---
+ base_model: mrm8488/electricidad-base-discriminator
+ tags:
+ - generated_from_keras_callback
+ model-index:
+ - name: RafaelMayer/electra-copec-2
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information Keras had access to. You should
+ probably proofread and complete it, then remove this comment. -->
+
+ # RafaelMayer/electra-copec-2
+
+ This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on an unknown dataset.
+ It achieves the following results on the training and validation sets:
+ - Train Loss: 0.7303
+ - Validation Loss: 0.6874
+ - Train Accuracy: 0.8824
+ - Train Precision: [0.75 0.92307692]
+ - Train Precision W: 0.8824
+ - Train Recall: [0.75 0.92307692]
+ - Train Recall W: 0.8824
+ - Train F1: [0.75 0.92307692]
+ - Train F1 W: 0.8824
+ - Epoch: 1
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 35, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 5, 'power': 1.0, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
+ - training_precision: float32
+
+ ### Training results
+
+ | Train Loss | Validation Loss | Train Accuracy | Train Precision | Train Precision W | Train Recall | Train Recall W | Train F1 | Train F1 W | Epoch |
+ |:----------:|:---------------:|:--------------:|:-----------------------:|:-----------------:|:-----------------------:|:--------------:|:-----------------------:|:----------:|:-----:|
+ | 0.7303 | 0.6874 | 0.8824 | [0.75 0.92307692] | 0.8824 | [0.75 0.92307692] | 0.8824 | [0.75 0.92307692] | 0.8824 | 1 |
+
+
+ ### Framework versions
+
+ - Transformers 4.32.1
+ - TensorFlow 2.12.0
+ - Datasets 2.14.4
+ - Tokenizers 0.13.3
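
The optimizer dictionary in the card describes Adam with a 5-step linear warmup into a linear (power 1.0) polynomial decay to zero over 35 total steps. The training script itself is not part of this commit, but as a rough sketch, an equivalent setup could be built with the standard `create_optimizer` helper from `transformers`:

```python
# A minimal sketch, not the author's actual training code: it rebuilds the
# schedule the card describes (peak LR 2e-5, 5 warmup steps, 35 total steps,
# linear decay to 0) using the transformers TF helper.
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,           # initial_learning_rate in the card
    num_train_steps=35,     # decay_steps
    num_warmup_steps=5,     # warmup_steps
    weight_decay_rate=0.0,  # the card records weight_decay: None
)
```

The betas (0.9/0.999) and epsilon (1e-8) recorded in the card are `create_optimizer`'s defaults, so they need not be passed explicitly.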
config.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "_name_or_path": "mrm8488/electricidad-base-discriminator",
+   "architectures": [
+     "ElectraForSequenceClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "embedding_size": 768,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "False",
+     "1": "True"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "False": 0,
+     "True": 1
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "electra",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "summary_activation": "gelu",
+   "summary_last_dropout": 0.1,
+   "summary_type": "first",
+   "summary_use_proj": true,
+   "transformers_version": "4.32.1",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 31002
+ }
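
Given the `ElectraForSequenceClassification` architecture and the `False`/`True` label maps in this config, a minimal inference sketch might look as follows (the input sentence is a made-up placeholder; the checkpoint ships as TensorFlow weights, see `tf_model.h5` below):

```python
# Minimal inference sketch; "Texto de ejemplo." is an arbitrary placeholder.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "RafaelMayer/electra-copec-2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Texto de ejemplo.", return_tensors="tf",
                   truncation=True, max_length=512)
logits = model(**inputs).logits
pred_id = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[pred_id])  # "False" or "True", per id2label above
```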
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "cls_token": "[CLS]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": "[UNK]"
+ }
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fcb6a93eea46bf34e5361811b40f88464278c5b440b46b2d6ca0b3002bf5e953
+ size 439699664
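
The diff above is only the Git LFS pointer, not the 439,699,664-byte weight file itself. As a sketch using the standard `huggingface_hub` client (not something this commit prescribes), the real weights can be resolved like this:

```python
# Downloads the actual ~440 MB HDF5 weights that the LFS pointer refers to.
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="RafaelMayer/electra-copec-2",
                       filename="tf_model.h5")
print(path)  # local cache path of the resolved file
```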
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "mask_token": "[MASK]",
+   "model_max_length": 1000000000000000019884624838656,
+   "never_split": null,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "ElectraTokenizer",
+   "unk_token": "[UNK]"
+ }
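
The enormous `model_max_length` is the `transformers` sentinel for "no limit recorded", so this tokenizer will not truncate on its own; since `max_position_embeddings` is 512 in `config.json` above, callers should cap length explicitly. A short sketch with a placeholder input:

```python
# The tokenizer is uncased (do_lower_case: true) and records no usable
# model_max_length, so truncation must be requested explicitly, capped to
# the model's 512-position limit from config.json.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("RafaelMayer/electra-copec-2")
enc = tokenizer("MAYÚSCULAS y minúsculas",  # placeholder input
                truncation=True, max_length=512)
print(tokenizer.convert_ids_to_tokens(enc["input_ids"]))
```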
vocab.txt ADDED
The diff for this file is too large to render. See raw diff