# llm-jp-1.3b-ud-causal

## Model Description
This is a GPT-2 model for POS-tagging and dependency-parsing, derived from llm-jp-1.3b-upos (itself based on llm-jp/llm-jp-1.3b-v1.0) and fine-tuned on the UD_Japanese-GSDLUW treebank.
## How to Use
```python
from transformers import pipeline

nlp = pipeline("universal-dependencies", "KoichiYasuoka/llm-jp-1.3b-ud-causal", trust_remote_code=True)
# Parse a Japanese sentence: "Illustrations are used in elementary-school Japanese-language textbooks across all grades"
print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
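The sketch below shows one way the output could be post-processed, assuming the custom pipeline returns the analysis as CoNLL-U formatted text (the exact return type depends on the remote pipeline code loaded via `trust_remote_code=True`); the file name `parsed.conllu` is purely illustrative.

```python
from transformers import pipeline

nlp = pipeline("universal-dependencies", "KoichiYasuoka/llm-jp-1.3b-ud-causal", trust_remote_code=True)

# Assumption: the pipeline returns a CoNLL-U formatted string for the input sentence
result = nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている")

# Save the analysis so that standard CoNLL-U tools can read it
with open("parsed.conllu", "w", encoding="utf-8") as f:
    f.write(result)

# Print FORM, UPOS, HEAD, and DEPREL columns, skipping comment and blank lines
for line in result.splitlines():
    if line and not line.startswith("#"):
        cols = line.split("\t")
        print(cols[1], cols[3], cols[6], cols[7])
```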