Update README.md

This is the set of Chinese T5 models pre-trained by [UER-py](https://arxiv.org/abs/1909.05658).

The Text-to-Text Transfer Transformer (T5) leverages a unified text-to-text format and attains state-of-the-art results on a wide variety of English-language NLP tasks. Following their work, we released a series of Chinese T5 models.

|              |              Link              |
| ------------ | :----------------------------: |
| **T5-Small** | [**L=6/H=512 (Small)**][small] |
| **T5-Base**  | [**L=12/H=768 (Base)**][base]  |

In T5, spans of the input sequence are masked by so-called sentinel tokens. Each sentinel token represents a unique mask token for the input sequence and takes the form `<extra_id_0>`, `<extra_id_1>`, …, up to `<extra_id_99>`. However, `<extra_id_xxx>` is split into multiple pieces by Huggingface's Hosted inference API, so we replace `<extra_id_xxx>` with `extraxxx` in the vocabulary, and BertTokenizer then treats `extraxxx` as a single sentinel token.
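For example, the following minimal sketch (the example sentence here is our own, not part of the original card) shows that a renamed sentinel such as `extra0` comes back from the tokenizer as a single token:

```python
>>> from transformers import BertTokenizer

>>> tokenizer = BertTokenizer.from_pretrained("uer/t5-small-chinese-cluecorpussmall")
>>> # "extra0" marks a masked span; since it is a single vocabulary entry, it should
>>> # be kept whole rather than split into word pieces.
>>> tokenizer.tokenize("中国的首都是extra0京")
```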
## How to use

You can use this model directly with a pipeline for text2text generation (taking T5-Small as an example):

```python
>>> from transformers import BertTokenizer, T5ForConditionalGeneration, Text2TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/t5-small-chinese-cluecorpussmall")
>>> model = T5ForConditionalGeneration.from_pretrained("uer/t5-small-chinese-cluecorpussmall")
>>> text2text_generator = Text2TextGenerationPipeline(model, tokenizer)
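>>> # A typical call; the input sentence and the generation arguments below are
>>> # illustrative assumptions rather than part of the original example.
>>> text2text_generator("中国的首都是extra0京", max_length=50, do_sample=False)
```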
## Training procedure

The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters for the different model sizes.
Taking T5-Small as an example:
Stage1: