rmhirota committed on
Commit d8cb3a4
1 Parent(s): 96751a7

Model save

Files changed (4)
  1. README.md +2 -11
  2. config.json +0 -1
  3. pytorch_model.bin +1 -1
  4. training_args.bin +1 -1
README.md CHANGED
@@ -3,8 +3,6 @@ license: mit
 base_model: neuralmind/bert-base-portuguese-cased
 tags:
 - generated_from_trainer
-datasets:
-- assin2
 model-index:
 - name: model_dir
   results: []
@@ -15,7 +13,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # model_dir
 
-This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the assin2 dataset.
+This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the None dataset.
 
 ## Model description
 
@@ -40,14 +38,7 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 1
-
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Accuracy |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|
-| No log        | 1.0   | 125  | 0.4393          | 0.81     |
-
+- num_epochs: 10
 
 ### Framework versions
 
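For reference, below is a minimal sketch of how the hyperparameters recorded in this README could be expressed with the transformers Trainer API. It is not the repository's actual training script: the dataset handling, num_labels, learning rate, and batch sizes are assumptions, since the diff only records the values shown above.

```python
# Sketch only: maps the README hyperparameters onto TrainingArguments.
# num_labels, learning rate, batch sizes, and the dataset are assumptions.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "neuralmind/bert-base-portuguese-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)  # num_labels assumed

args = TrainingArguments(
    output_dir="model_dir",
    num_train_epochs=10,          # num_epochs: 10 (changed from 1 in this commit)
    seed=42,                      # seed: 42
    adam_beta1=0.9,               # optimizer: Adam with betas=(0.9,0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,            # epsilon=1e-08
    lr_scheduler_type="linear",   # lr_scheduler_type: linear
)

# train_dataset / eval_dataset are omitted; the previous README revision
# pointed at ASSIN 2, but this commit drops that reference.
trainer = Trainer(model=model, args=args, tokenizer=tokenizer)
```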
 
config.json CHANGED
@@ -24,7 +24,6 @@
   "pooler_size_per_head": 128,
   "pooler_type": "first_token_transform",
   "position_embedding_type": "absolute",
-  "problem_type": "single_label_classification",
   "torch_dtype": "float32",
   "transformers_version": "4.34.1",
   "type_vocab_size": 2,
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f928eee80efabdf08188810e3176951e09fdcecf0e5e6040f0c5b2e83a3d5af8
+oid sha256:334a77a03746e1636e38ea1c8c826466dd195e654cacd8a29be336d6442a07bd
 size 435764718
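The Git LFS pointer above records the new weight file's sha256 and size. A small sketch, assuming a locally downloaded pytorch_model.bin, of verifying the download against that digest.

```python
# Sketch only: verify a local pytorch_model.bin against the sha256 in the
# LFS pointer above. The local file path is an assumption.
import hashlib

EXPECTED = "334a77a03746e1636e38ea1c8c826466dd195e654cacd8a29be336d6442a07bd"

sha = hashlib.sha256()
with open("pytorch_model.bin", "rb") as f:        # path assumed
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha.update(chunk)

assert sha.hexdigest() == EXPECTED, "checksum does not match the LFS pointer"
```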
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:dd4c902953aec375cb43579e9d69e319a6d5d1a9b7b9ffecd60ed15e6804e421
+oid sha256:25e8e0348e4063182cc1f977bc3f2ad4a86909ebd2735ec16aced4a6b0e5162c
 size 4472