
Model: BERT
Lang: IT
Type: Uncased

Model description

This is an uncased BERT [1] model for the Italian language, obtained from the uncased multilingual BERT checkpoint (bert-base-multilingual-uncased) and specialized to Italian by modifying the embedding layer (as in [2], computing document-level token frequencies over the Wikipedia dataset).

The resulting model has 110M parameters, a vocabulary of 30,154 tokens, and a size of ~430 MB.
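The exact procedure is described in [2]; as a rough illustration of the idea only (not the actual training code), the sketch below shows how document-level token frequencies could be computed and used to shrink the embedding layer of mBERT. The corpus, the target vocabulary size, and the example documents are placeholder assumptions.

# Illustrative sketch of the vocabulary-reduction idea from [2]: keep only the
# mBERT tokens that occur in the target-language documents, then select the
# corresponding rows of the original embedding matrix.
from collections import Counter

import torch
from transformers import BertTokenizerFast, BertModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-uncased")
model = BertModel.from_pretrained("bert-base-multilingual-uncased")

# Document-level frequency: for each token id, count the number of documents in
# which it appears at least once. `documents` is a placeholder for an iterable
# of Wikipedia texts.
documents = ["Roma è la capitale d'Italia.", "La pizza è nata a Napoli."]  # placeholder
doc_freq = Counter()
for doc in documents:
    doc_freq.update(set(tokenizer(doc)["input_ids"]))

# Keep the special tokens plus the most frequent token ids (30000 is an assumed
# target size), and build a reduced embedding matrix from the kept rows.
keep_ids = sorted(set(tokenizer.all_special_ids) | {i for i, _ in doc_freq.most_common(30000)})
old_embeddings = model.get_input_embeddings().weight.data
new_embeddings = torch.nn.Embedding(len(keep_ids), old_embeddings.size(1))
new_embeddings.weight.data = old_embeddings[keep_ids].clone()
model.set_input_embeddings(new_embeddings)
# Note: in a real pipeline the tokenizer vocabulary would also have to be
# remapped so that token ids match the rows of the reduced embedding matrix.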

Quick usage

from transformers import BertTokenizerFast, BertModel

# Load the tokenizer and the model from the Hugging Face Hub
tokenizer = BertTokenizerFast.from_pretrained("osiria/bert-base-italian-uncased")
model = BertModel.from_pretrained("osiria/bert-base-italian-uncased")
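
As a quick check that the model loads and runs, one possible follow-up (the sentence and the mean-pooling choice are just examples, not a recommended recipe) is:

import torch

# Encode an Italian sentence and average the last hidden states
# as a simple sentence representation.
inputs = tokenizer("Buongiorno, come stai?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
sentence_embedding = outputs.last_hidden_state.mean(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768])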

References

[1] https://arxiv.org/abs/1810.04805

[2] https://arxiv.org/abs/2010.05609

License

The model is released under the Apache-2.0 license.
