---
license: apache-2.0
language:
  - en
tags:
  - text-generation
  - text2text-generation
  - summarization
pipeline_tag: text2text-generation
widget:
  - text: >-
      Summarize: You may want to stick it to your boss and leave your job, but
      don't do it if these are your reasons.
    example_title: Example1
  - text: >-
      Summarize: Jorge Alfaro drove in two runs, Aaron Nola pitched seven
      innings of two-hit ball and the Philadelphia Phillies beat the Los Angeles
      Dodgers 2-1 Thursday, spoiling Clayton Kershaw's first start in almost a
      month. Hitting out of the No. 8 spot in the ...
    example_title: Example2
model-index:
  - name: RUCAIBox/mtl-summarization
    results:
      - task:
          type: summarization
          name: Summarization
        dataset:
          name: samsum
          type: samsum
          config: samsum
          split: test
        metrics:
          - name: ROUGE-1
            type: rouge
            value: 27.3604
            verified: true
          - name: ROUGE-2
            type: rouge
            value: 7.2277
            verified: true
          - name: ROUGE-L
            type: rouge
            value: 22.4688
            verified: true
          - name: ROUGE-LSUM
            type: rouge
            value: 24.6476
            verified: true
          - name: loss
            type: loss
            value: 2.2192304134368896
            verified: true
          - name: gen_len
            type: gen_len
            value: 17.2418
            verified: true
---

# MTL-summarization

The MTL-summarization model was proposed in [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao, and Ji-Rong Wen.

Detailed information and instructions can be found at https://github.com/RUCAIBox/MVP.

## Model Description

MTL-summarization was supervised pre-trained on a mixture of labeled summarization datasets. It is a variant (Single) of the main MVP model and follows a standard Transformer encoder-decoder architecture.

MTL-summarization is specially designed for summarization tasks, such as news summarization (CNN/DailyMail, XSum) and dialogue summarization (SAMSum).
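The ROUGE scores reported in the metadata above measure n-gram overlap between a generated summary and a reference summary. As a rough illustration only (a plain-Python sketch, not the official scorer used for the verified results), ROUGE-1 F1 can be computed from unigram counts like this:

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """Unigram-overlap F1, the idea behind the ROUGE-1 metric.

    Real ROUGE implementations add tokenization rules and optional
    stemming; this sketch just splits on whitespace.
    """
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    # Clipped overlap: each unigram counts at most as often as it
    # appears in the reference.
    overlap = sum((ref & cand).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the cat sat on the mat", "the cat is on the mat"))
```

The table's values are percentages, so a reported ROUGE-1 of 27.36 corresponds to an average F1 of about 0.2736 over the SAMSum test split.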

## Example

```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration

>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-summarization")

>>> inputs = tokenizer(
...     "Summarize: You may want to stick it to your boss and leave your job, but don't do it if these are your reasons.",
...     return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
["Don't do it if these are your reasons"]
```

## Related Models

MVP: https://huggingface.co/RUCAIBox/mvp.

Prompt-based models:

Multi-task models:

## Citation

```bibtex
@article{tang2022mvp,
  title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
  author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
  journal={arXiv preprint arXiv:2206.12131},
  year={2022},
  url={https://arxiv.org/abs/2206.12131},
}
```