KoichiYasuoka committed on
Commit 7f7524f · 1 Parent(s): a23b6da

dependency-parsing

Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -5,6 +5,7 @@ tags:
 - "japanese"
 - "token-classification"
 - "pos"
+- "dependency-parsing"
 base_model: KoichiYasuoka/modernbert-base-japanese-wikipedia
 datasets:
 - "universal_dependencies"
@@ -18,7 +19,7 @@ widget:
 
 ## Model Description
 
-This is a ModernBERT model pre-trained on Japanese Wikipedia and 青空文庫 texts for POS-tagging, derived from [modernbert-base-japanese-wikipedia](https://huggingface.co/KoichiYasuoka/modernbert-base-japanese-wikipedia). Every short-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
+This is a ModernBERT model pre-trained on Japanese Wikipedia and 青空文庫 texts for POS-tagging and dependency-parsing, derived from [modernbert-base-japanese-wikipedia](https://huggingface.co/KoichiYasuoka/modernbert-base-japanese-wikipedia). Every short-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
 
 ## How to Use
 
@@ -28,3 +29,15 @@ nlp=pipeline("upos","KoichiYasuoka/modernbert-base-japanese-wikipedia-upos",trus
 print(nlp("国境の長いトンネルを抜けると雪国であった。"))
 ```
 
+or
+
+```py
+import esupar
+nlp=esupar.load("KoichiYasuoka/modernbert-base-japanese-wikipedia-upos")
+print(nlp("国境の長いトンネルを抜けると雪国であった。"))
+```
+
+## See Also
+
+[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa/GPT models
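For review context: the existing "How to Use" snippet of the README is visible here only through the truncated last hunk header (`...trus`). A minimal runnable sketch of that transformers-pipeline usage is given below; the `trust_remote_code=True` argument is an assumption inferred from the truncation, not something confirmed by this diff.

```py
# Sketch of the pipeline call that the last hunk header truncates ("...trus").
# trust_remote_code=True is assumed; it is the usual way to load a custom
# "upos" token-classification pipeline shipped inside a model repository.
from transformers import pipeline

nlp = pipeline("upos", "KoichiYasuoka/modernbert-base-japanese-wikipedia-upos", trust_remote_code=True)
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```

The esupar snippet added by this commit runs the same sentence through a full Universal Dependencies parser, which is what the new `dependency-parsing` tag advertises, whereas the pipeline call above returns only token-level UPOS/FEATS tags.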