This is a fast tokenizer for Polish.
Number of documents used to train the tokenizer:
- 25 088 398
Sample usage with transformers:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('radlab/polish-fast-tokenizer')

# Encode a sentence and decode it back to text
tokenizer.decode(tokenizer("Ala ma kota i psa").input_ids)
```