---
inference: false
language:
- bg
license: mit
datasets:
- oscar
- chitanka
- wikipedia
tags:
- torch
---
# ROBERTA BASE (cased) trained on private Bulgarian sentiment-analysis dataset
This is a multilingual RoBERTa model fine-tuned for Bulgarian sentiment analysis.
This model is case-sensitive: it makes a difference between bulgarian and Bulgarian.
### How to use
Here is how to use this model in PyTorch:
```python
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer
>>>
>>> model_id = "rmihaylov/roberta-base-sentiment-bg"
>>> # trust_remote_code is required because the model ships custom modelling code
>>> model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>>
>>> # 'Това е умно.' = 'This is smart.', 'Това е тъпо.' = 'This is stupid.'
>>> inputs = tokenizer.batch_encode_plus(['Това е умно.', 'Това е тъпо.'], return_tensors='pt')
>>> outputs = model(**inputs)
>>> # convert the raw scores into per-class probabilities
>>> torch.softmax(outputs, dim=1).tolist()
[[0.0004746630438603461, 0.9995253086090088],
 [0.9986956715583801, 0.0013043134240433574]]
```
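Going by the example output above, index 1 appears to correspond to the positive class and index 0 to the negative class. Below is a minimal sketch of a helper that maps model output to labels, assuming that class ordering and that the custom forward pass returns one row of scores per input and accepts padded batches; the function name and label strings are illustrative, not part of the model's API:

```python
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "rmihaylov/roberta-base-sentiment-bg"
LABELS = ["negative", "positive"]  # assumed order, inferred from the example output above

model = AutoModel.from_pretrained(MODEL_ID, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

def predict_sentiment(texts):
    # tokenize a batch of Bulgarian sentences, padding them to a common length
    inputs = tokenizer.batch_encode_plus(texts, return_tensors='pt', padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # convert scores to probabilities and pick the most likely class per sentence
    probs = torch.softmax(outputs, dim=1)
    return [LABELS[i] for i in probs.argmax(dim=1).tolist()]

print(predict_sentiment(['Това е умно.', 'Това е тъпо.']))  # expected: ['positive', 'negative']
```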