- Input: `context` (e.g. news article)
- Output: `question <sep> answer`

This model generates **abstractive** answers in the style of the RACE dataset. If you would like **extractive** questions/answers, you can use our model trained on SQuAD: https://huggingface.co/potsawee/t5-large-generation-squad-QuestionAnswer.
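
The question and answer come back as a single string, so a caller recovers the pair by splitting on the `<sep>` token. A minimal sketch (the output text here is illustrative, not a real model sample):

```python
# Illustrative raw output in the "question <sep> answer" format
raw = "What is the passage mainly about? <sep> The history of tea."
question, answer = (part.strip() for part in raw.split("<sep>"))
assert question == "What is the passage mainly about?"
assert answer == "The history of tea."
```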

## Model Details

The t5-large model is fine-tuned on the RACE dataset, where the input is the context/passage and the output is the question followed by the answer. This is the first component in the question generation pipeline (i.e. `g1`) in our [MQAG paper](https://arxiv.org/abs/2301.12307); please also refer to the GitHub repo of this project: https://github.com/potsawee/

## How to Use the Model

Use the code below to get started with the model. You can also set `do_sample=True` in `generate()` to obtain different question-answer pairs.

```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> # Repo id assumed: inferred as the RACE counterpart of the SQuAD model linked above
>>> tokenizer = AutoTokenizer.from_pretrained("potsawee/t5-large-generation-race-QuestionAnswer")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("potsawee/t5-large-generation-race-QuestionAnswer")
```
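
Generation then follows the standard seq2seq pattern. A minimal end-to-end sketch under the assumptions above: the passage and generation settings (`max_new_tokens`, `do_sample=True`) are illustrative, and since it is not shown here whether `<sep>` is registered as a special token, the sketch decodes with `skip_special_tokens=False` and strips the pad/eos markers manually before splitting, as in the snippet above.

```python
>>> context = ("Tea is an aromatic beverage prepared by pouring hot water over cured tea leaves. "
...            "It is the second most consumed drink in the world, after water.")
>>> inputs = tokenizer(context, return_tensors="pt")
>>> # do_sample=True draws a different question-answer pair on each call
>>> outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True)
>>> # Decode without skipping special tokens so "<sep>" survives, then drop pad/eos
>>> text = tokenizer.decode(outputs[0], skip_special_tokens=False)
>>> text = text.replace(tokenizer.pad_token, "").replace(tokenizer.eos_token, "").strip()
>>> question, answer = (part.strip() for part in text.split("<sep>"))
>>> print("Q:", question)
>>> print("A:", answer)
```

With `do_sample=True`, repeated calls on the same passage yield different question-answer pairs; the default greedy decoding returns the same pair every time.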