pierreguillou committed · Commit cfbac95 · Parent(s): aca0f80
Update README.md

README.md CHANGED
@@ -22,28 +22,35 @@ model-index:
     metrics:
     - name: F1
       type: f1
-      value: 0.
+      value: 0.8733423827921062
     - name: Precision
       type: precision
-      value: 0.
+      value: 0.8487923685812868
     - name: Recall
       type: recall
-      value: 0.
+      value: 0.8993548387096775
     - name: Accuracy
       type: accuracy
-      value: 0.
+      value: 0.9759397808828684
     - name: Loss
       type: loss
-      value: 0.
+      value: 0.10249536484479904
 widget:
 - text: "Acrescento que não há de se falar em violação do artigo 114, § 3º, da Constituição Federal, posto que referido dispositivo revela-se impertinente, tratando da possibilidade de ajuizamento de dissídio coletivo pelo Ministério Público do Trabalho nos casos de greve em atividade essencial."
 ---
 
 ## (BERT base) NER model in the legal domain in Portuguese (LeNER-Br)
 
-**ner-bert-base-portuguese-cased-lenerbr** is a NER model (token classification) in the legal domain in Portuguese that was finetuned on
+**ner-bert-base-portuguese-cased-lenerbr** is a NER model (token classification) in the legal domain in Portuguese that was finetuned on 20/12/2021 in Google Colab from the model [pierreguillou/bert-base-cased-pt-lenerbr](https://huggingface.co/pierreguillou/bert-base-cased-pt-lenerbr) on the dataset [LeNER_br](https://huggingface.co/datasets/lener_br) by using a NER objective.
 
-
+Due to the small size of BERTimbau base and of the finetuning dataset, the model overfitted before reaching the end of training. Here are the overall final metrics on the validation dataset (*note: see the paragraph "Validation metrics by Named Entity" for detailed metrics*):
+- **f1**: 0.8733423827921062
+- **precision**: 0.8487923685812868
+- **recall**: 0.8993548387096775
+- **accuracy**: 0.9759397808828684
+- **loss**: 0.10249536484479904
+
+**Note**: the model [pierreguillou/bert-base-cased-pt-lenerbr](https://huggingface.co/pierreguillou/bert-base-cased-pt-lenerbr) is a language model that was created by finetuning [BERTimbau base](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the dataset [LeNER-Br language modeling](https://huggingface.co/datasets/pierreguillou/lener_br_finetuning_language_model) by using a MASK objective. This first specialization of the language model before finetuning on the NER task improved the model quality a bit. To show this, here are the results of the NER model finetuned from [BERTimbau base](https://huggingface.co/neuralmind/bert-base-portuguese-cased) (a non-specialized language model):
 - **f1**: 0.8716487228203504
 - **precision**: 0.8559286898839138
 - **recall**: 0.8879569892473118
@@ -102,53 +109,52 @@ ner(input_text)
 ````
 Num examples = 7828
 Num Epochs = 3
-Instantaneous batch size per device =
+Instantaneous batch size per device = 4
 Total train batch size (w. parallel, distributed & accumulation) = 8
-Gradient Accumulation steps =
-Total optimization steps =
-
-Step Training Loss Validation Loss Precision Recall
-290
-580 0.
-870 0.
-1160 0.
-1450 0.
-1740 0.
-2030 0.
-2320 0.
-2610 0.017100 0.118826 0.822488 0.888817 0.854367 0.973471
+Gradient Accumulation steps = 2
+Total optimization steps = 2934
+
+Step  Training Loss  Validation Loss  Precision  Recall    F1        Accuracy
+290   0.314600       0.163042         0.735828   0.697849  0.716336  0.949198
+580   0.086900       0.123495         0.779540   0.824301  0.801296  0.965807
+870   0.072800       0.106785         0.798481   0.858925  0.827600  0.968626
+1160  0.046300       0.109921         0.824576   0.877419  0.850177  0.973243
+1450  0.036600       0.102495         0.848792   0.899355  0.873342  0.975940
+1740  0.033400       0.121514         0.821681   0.899785  0.858961  0.967071
+2030  0.034700       0.115568         0.846849   0.887097  0.866506  0.970607
+2320  0.018000       0.108600         0.840258   0.895914  0.867194  0.973730
 ````
 
 ### Validation metrics by Named Entity
 ````
 Num examples = 1177
 
-{'JURISPRUDENCIA': {'f1': 0.
+{'JURISPRUDENCIA': {'f1': 0.7069834413246942,
  'number': 657,
- 'precision': 0.
- 'recall': 0.
- 'LEGISLACAO': {'f1': 0.
+ 'precision': 0.6707650273224044,
+ 'recall': 0.7473363774733638},
+ 'LEGISLACAO': {'f1': 0.8256227758007118,
  'number': 571,
- 'precision': 0.
- 'recall': 0.
- 'LOCAL': {'f1': 0.
+ 'precision': 0.8390596745027125,
+ 'recall': 0.8126094570928196},
+ 'LOCAL': {'f1': 0.7688564476885645,
  'number': 194,
- 'precision': 0.
- 'recall': 0.
- 'ORGANIZACAO': {'f1': 0.
+ 'precision': 0.728110599078341,
+ 'recall': 0.8144329896907216},
+ 'ORGANIZACAO': {'f1': 0.8548387096774193,
  'number': 1340,
- 'precision': 0.
- 'recall': 0.
- 'PESSOA': {'f1': 0.
+ 'precision': 0.8062169312169312,
+ 'recall': 0.9097014925373135},
+ 'PESSOA': {'f1': 0.9826697892271662,
  'number': 1072,
- 'precision': 0.
- 'recall': 0.
- 'TEMPO': {'f1': 0.
+ 'precision': 0.9868297271872061,
+ 'recall': 0.9785447761194029},
+ 'TEMPO': {'f1': 0.9615846338535414,
  'number': 816,
- 'precision': 0.
- 'recall': 0.
- 'overall_accuracy': 0.
- 'overall_f1': 0.
- 'overall_precision': 0.
- 'overall_recall': 0.
+ 'precision': 0.9423529411764706,
+ 'recall': 0.9816176470588235},
+ 'overall_accuracy': 0.9759397808828684,
+ 'overall_f1': 0.8733423827921062,
+ 'overall_precision': 0.8487923685812868,
+ 'overall_recall': 0.8993548387096775}
 ````
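The second hunk's header references a `ner(input_text)` call from the README's usage section. For reference, here is a minimal sketch of how such a NER pipeline is typically built with the transformers library; the Hub id `pierreguillou/ner-bert-base-portuguese-cased-lenerbr` and the `aggregation_strategy` setting are assumptions, not details specified by this commit.

````
# Minimal usage sketch (assumptions: the checkpoint is published on the Hugging Face Hub
# as "pierreguillou/ner-bert-base-portuguese-cased-lenerbr" and a recent transformers
# version is installed).
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_name = "pierreguillou/ner-bert-base-portuguese-cased-lenerbr"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# "ner" pipeline; aggregation_strategy="simple" merges word pieces into entity spans
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")

input_text = (
    "Acrescento que não há de se falar em violação do artigo 114, § 3º, da "
    "Constituição Federal, posto que referido dispositivo revela-se impertinente, "
    "tratando da possibilidade de ajuizamento de dissídio coletivo pelo Ministério "
    "Público do Trabalho nos casos de greve em atividade essencial."
)
print(ner(input_text))  # list of dicts: entity_group, score, word, start, end
````

The example sentence is the widget text declared in the YAML metadata of the first hunk.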
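The note added in the first hunk explains that pierreguillou/bert-base-cased-pt-lenerbr was obtained by finetuning BERTimbau base with a MASK objective on the LeNER-Br language-modeling dataset before the NER finetuning. Below is a rough sketch of what that masked-language-modeling step typically looks like with the transformers Trainer; the text column name, the tokenization settings and the epoch count are assumptions, not details taken from the commit.

````
# Sketch of the MASK-objective (masked language modeling) specialization step.
# Assumptions: the dataset exposes a "text" column and a "train" split; the
# hyperparameters are placeholders, not the author's actual Colab settings.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("neuralmind/bert-base-portuguese-cased")
model = AutoModelForMaskedLM.from_pretrained("neuralmind/bert-base-portuguese-cased")

dataset = load_dataset("pierreguillou/lener_br_finetuning_language_model")
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=dataset["train"].column_names,
)

# Dynamic masking of 15% of the tokens: the "MASK objective" mentioned in the note
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-base-cased-pt-lenerbr", num_train_epochs=1),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
````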
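The training log pins down the main NER finetuning hyperparameters: per-device batch size 4 with gradient accumulation 2 (effective batch size 8), 3 epochs over 7828 examples, and validation rows every 290 steps. The commit does not include the training script, so the following TrainingArguments are only a hedged reconstruction consistent with those numbers.

````
# Hedged reconstruction of TrainingArguments matching the log above (argument names
# follow transformers 4.x; newer releases rename evaluation_strategy to eval_strategy).
# Step count check: 7828 examples / effective batch size 8 = 978 optimizer updates per
# epoch (integer division), and 978 * 3 epochs = 2934 total optimization steps, as logged.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ner-bert-base-portuguese-cased-lenerbr",  # hypothetical output path
    num_train_epochs=3,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=2,
    evaluation_strategy="steps",
    eval_steps=290,        # matches the 290-step spacing of the validation rows
    logging_steps=290,
    save_steps=290,
    load_best_model_at_end=True,   # assumption; the reported metrics match the step-1450 row
    metric_for_best_model="f1",
)
````

With `load_best_model_at_end` and `metric_for_best_model="f1"`, the checkpoint kept would be the step-1450 one, whose row matches the overall metrics reported in the first hunk (f1 0.873342, loss 0.102495).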
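The per-entity report added in the second hunk (precision, recall, f1 and support per label, plus overall_* keys) has the structure returned by the seqeval metric. As an illustration only (the evaluation code is not part of this commit), this is how such a report is typically computed; the predictions and references below are toy placeholders.

````
# Sketch of producing a per-entity report in this format with the seqeval metric
# (requires the `evaluate` and `seqeval` packages; the sequences below are placeholders).
import evaluate

seqeval = evaluate.load("seqeval")

# IOB2-tagged sequences, with the LeNER-Br label set (PESSOA, TEMPO, LOCAL, ...)
predictions = [["B-PESSOA", "I-PESSOA", "O", "B-TEMPO"]]
references  = [["B-PESSOA", "I-PESSOA", "O", "B-TEMPO"]]

results = seqeval.compute(predictions=predictions, references=references)
# `results` maps each entity type to {'precision', 'recall', 'f1', 'number'} and adds
# 'overall_precision', 'overall_recall', 'overall_f1' and 'overall_accuracy',
# the same keys as the validation report above.
print(results)
````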