---
language:
- pl
license: mit
tags:
- bert
- sentiment-classification
- clarinpl-embeddings
- LEPISZCZE
datasets:
- clarin-pl/aspectemo
metrics:
- accuracy
- f1
- precision
- recall
---

# LEPISZCZE-aspectemo-allegro__herbert-base-cased-v1
## Description
Finetuned [allegro/herbert-base-cased](https://huggingface.co/allegro/herbert-base-cased) on the [clarin-pl/aspectemo](https://huggingface.co/datasets/clarin-pl/aspectemo) dataset.

Trained with the [clarin-pl-embeddings](https://github.com/clarin-pl/embeddings) library and included in the [LEPISZCZE](https://lepiszcze.ml/tasks/sentimentanalysis/) benchmark.
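
AspectEmo is scored per token on this card, so the model can be used as a token-classification model with the `transformers` library. A minimal sketch, with two assumptions: the repository id is inferred from this card's title, and the `polarity` helper is illustrative, not part of the released model:

```python
from typing import Optional

# AspectEmo sentiment labels, as listed in the per-class metrics below.
LABELS = ["a_amb", "a_minus_m", "a_minus_s", "a_plus_m", "a_plus_s", "a_zero"]

def polarity(label: str) -> Optional[str]:
    """Illustrative helper mapping an AspectEmo tag to a coarse polarity."""
    if label.startswith("a_plus"):
        return "positive"
    if label.startswith("a_minus"):
        return "negative"
    if label == "a_zero":
        return "neutral"
    if label == "a_amb":
        return "ambiguous"
    return None  # e.g. an out-of-aspect tag

def predict(text: str,
            model_id: str = "clarin-pl/LEPISZCZE-aspectemo-allegro__herbert-base-cased-v1"):
    """Tag aspect terms in `text`; the default model_id is an assumption."""
    from transformers import pipeline  # requires `transformers` and network access
    tagger = pipeline("token-classification", model=model_id,
                      aggregation_strategy="simple")
    return tagger(text)
```

`aggregation_strategy="simple"` merges subword pieces back into whole words before reporting labels, which is usually what you want for aspect terms.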

## Results on clarin-pl/aspectemo
|       |   accuracy |   f1_macro |   f1_micro |   f1_weighted |   recall_macro |   recall_micro |   recall_weighted |   precision_macro |   precision_micro |   precision_weighted |
|:------|-----------:|-----------:|-----------:|--------------:|---------------:|---------------:|------------------:|------------------:|------------------:|---------------------:|
| value |      0.952 |      0.368 |      0.585 |         0.586 |          0.371 |          0.566 |             0.566 |             0.392 |             0.606 |                0.617 |

### Metrics per class
|           |   precision |   recall |    f1 |   support |
|:----------|------------:|---------:|------:|----------:|
| a_amb     |       0.2   |    0.033 | 0.057 |        91 |
| a_minus_m |       0.632 |    0.542 | 0.584 |      1033 |
| a_minus_s |       0.156 |    0.209 | 0.178 |        67 |
| a_plus_m  |       0.781 |    0.694 | 0.735 |      1015 |
| a_plus_s  |       0.153 |    0.22  | 0.18  |        41 |
| a_zero    |       0.431 |    0.529 | 0.475 |       501 |
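
The aggregate scores above follow from the per-class table. As a sanity check, the macro and weighted F1 can be recomputed from the listed per-class F1 and support values (small differences are expected because the table entries are already rounded to three decimals):

```python
# Per-class (f1, support), copied from the table above.
per_class = {
    "a_amb":     (0.057, 91),
    "a_minus_m": (0.584, 1033),
    "a_minus_s": (0.178, 67),
    "a_plus_m":  (0.735, 1015),
    "a_plus_s":  (0.180, 41),
    "a_zero":    (0.475, 501),
}

f1s = [f1 for f1, _ in per_class.values()]
total_support = sum(s for _, s in per_class.values())

# Macro F1: unweighted mean over classes.
f1_macro = sum(f1s) / len(f1s)

# Weighted F1: mean weighted by class support.
f1_weighted = sum(f1 * s for f1, s in per_class.values()) / total_support

print(round(f1_macro, 3))     # 0.368, matching the reported f1_macro
print(round(f1_weighted, 3))  # ~0.587, vs. reported 0.586 (rounding)
```

The large gap between accuracy (0.952) and macro F1 (0.368) largely reflects the rare, weakly predicted classes (`a_amb`, `a_minus_s`, `a_plus_s`), which macro averaging weights equally with the frequent ones.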

## Finetuning hyperparameters
| Hyperparameter Name     | Value    |
|:------------------------|:---------|
| use_scheduler           | True     |
| optimizer               | AdamW    |
| warmup_steps            | 25       |
| learning_rate           | 0.0005   |
| adam_epsilon            | 1e-05    |
| weight_decay            | 0        |
| finetune_last_n_layers  | 4        |
| classifier_dropout      | 0.2      |
| max_seq_length          | 512      |
| batch_size              | 64       |
| max_epochs              | 20       |
| early_stopping_monitor  | val/Loss |
| early_stopping_mode     | min      |
| early_stopping_patience | 3        |
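
The table records that a scheduler was used (`use_scheduler: True`, `warmup_steps: 25`) but not its shape. A common choice with AdamW, and the one implemented by `transformers.get_linear_schedule_with_warmup`, is linear warmup followed by linear decay; a dependency-free sketch under that assumption (`total_steps` is a placeholder, since it depends on dataset size and batch size):

```python
def linear_warmup_decay_lr(step: int, base_lr: float = 5e-4,
                           warmup_steps: int = 25,
                           total_steps: int = 1000) -> float:
    """Learning rate at `step`: linear warmup, then linear decay to 0.

    base_lr and warmup_steps come from the hyperparameter table above;
    total_steps is a placeholder assumption.
    """
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(linear_warmup_decay_lr(0))     # 0.0 at the first step
print(linear_warmup_decay_lr(25))    # 0.0005, the peak learning rate
print(linear_warmup_decay_lr(1000))  # 0.0 at the end of training
```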

## Citation (BibTeX)
```bibtex
@article{augustyniak2022way,
  title={This is the way: designing and compiling LEPISZCZE, a comprehensive NLP benchmark for Polish},
  author={Augustyniak, Lukasz and Tagowski, Kamil and Sawczyn, Albert and Janiak, Denis and Bartusiak, Roman and Szymczak, Adrian and Janz, Arkadiusz and Szyma{\'n}ski, Piotr and W{\k{a}}troba, Marcin and Morzy, Miko{\l}aj and others},
  journal={Advances in Neural Information Processing Systems},
  volume={35},
  pages={21805--21818},
  year={2022}
}
```