BERT base model for Korean

  • A 70 GB Korean text dataset and a vocabulary of 42,000 lower-cased subwords are used
  • Check the model's performance and other Korean language models on GitHub
from transformers import BertTokenizerFast, BertModel

# Load the pretrained Korean BERT tokenizer and model from the Hugging Face Hub
tokenizer_bert = BertTokenizerFast.from_pretrained("kykim/bert-kor-base")
model_bert = BertModel.from_pretrained("kykim/bert-kor-base")
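
A minimal usage sketch, assuming the tokenizer and model loaded above and an arbitrary example sentence: it encodes a Korean sentence with the lower-cased subword tokenizer and runs a forward pass to obtain contextual token embeddings.

import torch

# Example sentence (hypothetical): "Sharing a Korean model."
text = "한국어 모델을 공유합니다."

# Tokenize into subwords from the ~42,000-token vocabulary and build model inputs
inputs = tokenizer_bert(text, return_tensors="pt")

# Forward pass without gradient tracking (inference only)
with torch.no_grad():
    outputs = model_bert(**inputs)

# Contextual token embeddings: shape (batch_size, sequence_length, 768)
print(outputs.last_hidden_state.shape)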