---
license: openrail++
language:
- uk
widget:
- text: Ти неймовірна!
datasets:
- ukr-detect/ukr-toxicity-dataset
base_model:
- FacebookAI/xlm-roberta-base
---
## Binary toxicity classifier for Ukrainian
This model is an instance of ["xlm-roberta-base"](https://huggingface.co/xlm-roberta-base) fine-tuned on the downstream task of binary toxicity classification for Ukrainian.
The evaluation metrics for binary toxicity classification are:
- **Precision**: 0.9130
- **Recall**: 0.9065
- **F1**: 0.9061
Details of the training and evaluation data will be clarified later.
## How to use
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# load tokenizer and model weights
tokenizer = AutoTokenizer.from_pretrained('dardem/xlm-roberta-base-uk-toxicity')
model = AutoModelForSequenceClassification.from_pretrained('dardem/xlm-roberta-base-uk-toxicity')

# prepare the input ("You are incredible!" in Ukrainian)
batch = tokenizer('Ти неймовірна!', return_tensors='pt')

# inference
with torch.no_grad():
    logits = model(**batch).logits
predicted_class = logits.argmax(dim=-1).item()
```
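The model returns raw logits rather than probabilities. A minimal sketch of converting them to class probabilities with softmax, using illustrative logit values (the 0 = non-toxic, 1 = toxic label mapping is an assumption; check `model.config.id2label` for the actual labels):

```python
import torch

# illustrative logits as returned by the model for one input (shape [batch, 2])
logits = torch.tensor([[2.0, -1.0]])

# softmax over the class dimension turns logits into probabilities
probs = torch.softmax(logits, dim=-1)

# the predicted class is the index of the highest probability
pred = probs.argmax(dim=-1).item()
# assumed mapping: 0 = non-toxic, 1 = toxic
```

The same conversion applies batch-wise: with several input texts, `probs` has one row of probabilities per text.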
## Citation
```
@article{dementieva2024toxicity,
title={Toxicity Classification in Ukrainian},
author={Dementieva, Daryna and Khylenko, Valeriia and Babakov, Nikolay and Groh, Georg},
journal={arXiv preprint arXiv:2404.17841},
year={2024}
}
```