---
language:
- "ja"
tags:
- "japanese"
- "masked-lm"
- "modernbert"
datasets:
- "globis-university/aozorabunko-clean"
- "wikimedia/wikipedia"
license: "apache-2.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "日本に着いたら[MASK]を訪ねなさい。"
---
# modernbert-base-japanese-wikipedia
## Model Description
This is a ModernBERT model pre-trained on Japanese Wikipedia and Aozora Bunko (青空文庫) texts. Training took 56 hours 49 minutes on eight NVIDIA A100-SXM4-40GB GPUs. You can fine-tune `modernbert-base-japanese-wikipedia` for downstream tasks such as [POS-tagging](https://huggingface.co/KoichiYasuoka/modernbert-base-japanese-wikipedia-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/modernbert-base-japanese-wikipedia-ud-square), and so on.
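Because the checkpoint exposes a standard ModernBERT encoder, attaching a fresh task head follows the usual `transformers` workflow. Below is a minimal sketch for a POS-tagging (token classification) setup; the `num_labels=17` value (the size of the UPOS tag set) is an illustrative assumption, not a detail taken from the linked fine-tuned models:
```py
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/modernbert-base-japanese-wikipedia")
# Attach a randomly initialized token-classification head on top of the encoder;
# 17 labels (the UPOS tag set) is an assumption for illustration
model = AutoModelForTokenClassification.from_pretrained(
    "KoichiYasuoka/modernbert-base-japanese-wikipedia",
    num_labels=17,
    trust_remote_code=True,
)
# ...then fine-tune on labeled data, e.g. with transformers.Trainer
```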
## How to Use
```py
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/modernbert-base-japanese-wikipedia")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/modernbert-base-japanese-wikipedia", trust_remote_code=True)
```
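Since the card declares `pipeline_tag: fill-mask` with mask token `[MASK]`, the loaded model and tokenizer can be wrapped in a standard fill-mask pipeline. A minimal sketch reusing the widget sentence from the metadata ("When you arrive in Japan, visit [MASK]."); the pipeline wrapper is not part of the original snippet:
```py
from transformers import pipeline

# Wrap the model and tokenizer loaded above in a fill-mask pipeline
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
# Widget sentence from the metadata: "When you arrive in Japan, visit [MASK]."
for candidate in fill_mask("日本に着いたら[MASK]を訪ねなさい。"):
    print(candidate["token_str"], candidate["score"])
```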