---
language:
- ja
tags:
- japanese
- wikipedia
- cc100
- pos
- dependency-parsing
base_model: nlp-waseda/roberta-large-japanese
datasets:
- universal_dependencies
license: cc-by-sa-4.0
pipeline_tag: token-classification
---
# roberta-large-japanese-juman-ud-goeswith

## Model Description
This is a RoBERTa model pretrained on Japanese Wikipedia and CC-100 texts, fine-tuned for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [nlp-waseda/roberta-large-japanese](https://huggingface.co/nlp-waseda/roberta-large-japanese).
## How to Use

```py
from transformers import pipeline
nlp = pipeline("universal-dependencies", "KoichiYasuoka/roberta-large-japanese-juman-ud-goeswith",
               trust_remote_code=True, aggregation_strategy="simple")
print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
`fugashi` is required.
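If the pipeline's output follows the standard CoNLL-U format (one token per line, ten tab-separated fields), a small helper can pull out the columns of interest. This is a sketch under that assumption — the `sample` string below is illustrative, not actual model output:

```python
def read_conllu(text):
    """Extract (ID, FORM, UPOS, HEAD, DEPREL) tuples from a CoNLL-U string."""
    rows = []
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue  # skip comment lines and sentence separators
        cols = line.split("\t")
        if len(cols) == 10:  # a well-formed CoNLL-U token line has 10 fields
            rows.append((cols[0], cols[1], cols[3], cols[6], cols[7]))
    return rows

# Illustrative CoNLL-U fragment (hypothetical, not real model output)
sample = (
    "# text = 挿し絵が用いられている\n"
    "1\t挿し絵\t挿し絵\tNOUN\t_\t_\t2\tnsubj\t_\t_\n"
    "2\t用い\t用いる\tVERB\t_\t_\t0\troot\t_\t_\n"
)
for tid, form, upos, head, deprel in read_conllu(sample):
    print(tid, form, upos, head, deprel)
```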