schnell committed
Commit 899f64f · 1 parent: ab0d661

Update README.md

Files changed (1)
1. README.md +8 -2
README.md CHANGED
@@ -22,6 +22,12 @@ Note that the texts should be segmented into words using Juman++ in advance.
 
 ### How to use
 
+requirement
+
+```shell
+pip install sentencepiece
+```
+
 You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
 
 ```python
@@ -38,8 +44,8 @@ generator("早稲田 大学 で 自然 言語 処理 を", max_length=30, do_sam
 ```
 
 ```python
-from transformers import ReformerTokenizer, GPT2Model
-tokenizer = ReformerTokenizer.from_pretrained('nlp-waseda/gpt2-small-japanese')
+from transformers import AutoTokenizer, GPT2Model
+tokenizer = AutoTokenizer.from_pretrained('nlp-waseda/gpt2-small-japanese')
 model = GPT2Model.from_pretrained('nlp-waseda/gpt2-small-japanese')
 text = "早稲田 大学 で 自然 言語 処理 を"
 encoded_input = tokenizer(text, return_tensors='pt')
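
For reference, a minimal sketch of the usage the updated README describes, assuming the `nlp-waseda/gpt2-small-japanese` checkpoint shown in the diff, the standard `transformers` helpers (`pipeline`, `set_seed`), and input text pre-segmented with Juman++. The seed value and the truncated `do_sam…` argument (taken here as `do_sample=True`) are assumptions, not taken from the diff:

```python
# Sketch of the README's two usage paths after this commit (assumptions noted above).
from transformers import pipeline, set_seed, AutoTokenizer, GPT2Model

# 1) Text generation via a pipeline; the seed pins down the sampling randomness.
generator = pipeline('text-generation', model='nlp-waseda/gpt2-small-japanese')
set_seed(42)  # assumed seed value, not shown in the diff
print(generator("早稲田 大学 で 自然 言語 処理 を",
                max_length=30, do_sample=True))  # do_sample assumed from the truncated hunk header

# 2) Feature extraction with the pair changed in this commit: AutoTokenizer
#    (instead of ReformerTokenizer) resolves the SentencePiece-based tokenizer
#    for this checkpoint, which is presumably why the commit also adds the
#    `pip install sentencepiece` requirement.
tokenizer = AutoTokenizer.from_pretrained('nlp-waseda/gpt2-small-japanese')
model = GPT2Model.from_pretrained('nlp-waseda/gpt2-small-japanese')
encoded_input = tokenizer("早稲田 大学 で 自然 言語 処理 を", return_tensors='pt')
output = model(**encoded_input)  # e.g. output.last_hidden_state
```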