---
base_model: FacebookAI/xlm-roberta-large-finetuned-conll03-english
tags:
- generated_from_trainer
datasets:
- conll2002
metrics:
- precision
- recall
- f1
- accuracy
---


# xml-roberta-large-finetuned-ner


The following are the results on the evaluation set:
 - eval_loss: 0.0929
 - eval_precision: 0.8704
 - eval_recall: 0.8834
 - eval_f1: 0.8769
 - eval_accuracy: 0.9827
 


## Model description

This is the large XLM-RoBERTa model [FacebookAI/xlm-roberta-large-finetuned-conll03-english](https://huggingface.co/FacebookAI/xlm-roberta-large-finetuned-conll03-english), fine-tuned for named entity recognition on the conll2002 dataset.
The fine-tuning was run on the Kaggle platform [https://www.kaggle.com/settings]. To train the model, a temporary directory had to be created in Kaggle to temporarily store the model, which takes up around 35 GB.
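
For a quick check of the resulting checkpoint, it can be loaded with the `transformers` token-classification pipeline. This is a minimal sketch; the Hub repo id below is an assumption based on the card title and should be replaced with the actual repository name.

```python
from transformers import pipeline

# Hypothetical repo id derived from the card title; replace with the real Hub id.
model_id = "your-username/xml-roberta-large-finetuned-ner"

# Token-classification pipeline that merges word pieces into whole entities.
ner = pipeline("token-classification", model=model_id, aggregation_strategy="simple")

# conll2002 covers Spanish (and Dutch) NER, so a Spanish sentence is a natural test input.
print(ner("Hugging Face abrió una oficina en Madrid, según informó El País."))
```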


## Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- eval_strategy: epoch
- save_strategy: epoch
- learning_rate: 2e-5 (the learning rate is being tuned)
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- num_train_epochs: 5
- weight_decay: 0.1
- max_grad_norm: 1.0
- adam_epsilon: 1e-5
- fp16: True
- save_total_limit: 2
- load_best_model_at_end: True
- push_to_hub: True
- metric_for_best_model: f1
- seed: 42
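
For reference, the list above corresponds to a `TrainingArguments` configuration along these lines. This is a sketch, not the original notebook: the `output_dir` pointing at a temporary Kaggle directory is an assumption based on the description above.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="/kaggle/temp/xml-roberta-large-finetuned-ner",  # assumed temporary dir on Kaggle
    eval_strategy="epoch",
    save_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    weight_decay=0.1,
    max_grad_norm=1.0,
    adam_epsilon=1e-5,
    fp16=True,                      # requires a CUDA GPU, as provided on Kaggle
    save_total_limit=2,             # keep only the two most recent checkpoints on disk
    load_best_model_at_end=True,
    push_to_hub=True,
    metric_for_best_model="f1",
    seed=42,
)
```

These arguments are then passed to a `Trainer` together with the tokenized conll2002 splits, a `DataCollatorForTokenClassification`, and a seqeval-based `compute_metrics` function (see the sketch at the end of this card).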



## Evaluation results

Metrics from the final evaluation run (epoch 5.0):

| Metric                  | Value    |
|-------------------------|----------|
| eval_loss               | 0.1292   |
| eval_precision          | 0.8674   |
| eval_recall             | 0.8752   |
| eval_f1                 | 0.8713   |
| eval_accuracy           | 0.9814   |
| eval_runtime            | 3.6357 s |
| eval_samples_per_second | 417.526  |
| eval_steps_per_second   | 26.13    |
| epoch                   | 5.0      |

Per-entity results on the evaluation set:

| Label | Precision | Recall | F1     | Support |
|-------|-----------|--------|--------|---------|
| LOC   | 0.8868    | 0.8238 | 0.8541 | 1084    |
| MISC  | 0.7350    | 0.7912 | 0.7620 | 340     |
| ORG   | 0.8400    | 0.8814 | 0.8602 | 1400    |
| PER   | 0.9599    | 0.9782 | 0.9690 | 735     |
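
The per-entity breakdown above is the kind of report produced by the `seqeval` library from BIO-tagged label sequences. A minimal sketch (the label sequences below are toy examples, not values from this evaluation):

```python
from seqeval.metrics import classification_report, f1_score

# Toy gold and predicted BIO sequences; in practice these come from the model's
# predictions on the conll2002 evaluation split.
y_true = [["B-PER", "I-PER", "O", "B-LOC"], ["B-ORG", "O", "O"]]
y_pred = [["B-PER", "I-PER", "O", "B-LOC"], ["B-ORG", "O", "B-MISC"]]

# Per-entity precision, recall, F1 and support (the "Support" column above).
print(classification_report(y_true, y_pred, digits=4))
print("micro-averaged F1:", f1_score(y_true, y_pred))
```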