naimur900 committed
Commit fa3e6d9
1 Parent(s): 8b878fb

Update README.md

Files changed (1): README.md (+12 -3)

@@ -5,7 +5,7 @@ tags:
 datasets:
 - xlsum
 model-index:
-- name: my_awesome_pegsasus_model
+- name: pegsasus_xlsum
 results: []
 language:
 - en
@@ -22,11 +22,20 @@ This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://hug
 
 ## Model description
 
-More information needed
+Our model, pegasus_xlsum, is fine-tuned on the English subset of the csebuetnlp/xlsum dataset. XL-Sum is one of the most comprehensive and diverse summarization corpora available, comprising 1.35 million professional article-summary pairs collected from the BBC across 45 languages. Although the dataset is multilingual, we deliberately fine-tuned on the English subset of roughly 330,000 records.
+
+The goal was to adapt the model for text summarization, and the fine-tuned pegasus_xlsum outperformed the established csebuetnlp/mT5_multilingual_XLSum model on ROUGE scores, demonstrating stronger summary generation. Built on the PEGASUS architecture, pegasus_xlsum handles English text summarization efficiently and effectively.
+
 
 ## Intended uses & limitations
 
-More information needed
+pegasus_xlsum is intended as a reliable, high-performance solution for English text summarization, drawing on the rich, professional, and diverse dataset it was trained on. We hope it proves as useful in your applications as it did in our experiments.
+
+
+
+
+
+
 
 ## Training and evaluation data
 
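
For context, here is a minimal usage sketch (not part of the commit) showing how the fine-tuned checkpoint could be run with the transformers summarization pipeline on the English XL-Sum split named in the README. The Hub repo id naimur900/pegasus_xlsum is an assumption inferred from the committer and model name in this diff; substitute the actual repo id if it differs.

```python
# Minimal usage sketch; "naimur900/pegasus_xlsum" is an assumed repo id.
from datasets import load_dataset
from transformers import pipeline

# The English subset of XL-Sum that the README says the model was
# fine-tuned on (roughly 330,000 article-summary pairs in total).
xlsum_en = load_dataset("csebuetnlp/xlsum", "english", split="test")

# Summarization pipeline around the fine-tuned PEGASUS checkpoint.
summarizer = pipeline("summarization", model="naimur900/pegasus_xlsum")

article = xlsum_en[0]["text"]
print(summarizer(article, max_length=64, truncation=True)[0]["summary_text"])
```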
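The description also claims higher ROUGE scores than csebuetnlp/mT5_multilingual_XLSum, though the commit itself reports no numbers. A hedged sketch of how such a comparison could be reproduced with the evaluate library, again assuming the naimur900/pegasus_xlsum repo id:

```python
# Sketch of the ROUGE comparison described in the README; model repo ids
# other than csebuetnlp/mT5_multilingual_XLSum are assumptions.
import evaluate
from datasets import load_dataset
from transformers import pipeline

xlsum_en = load_dataset("csebuetnlp/xlsum", "english", split="test")
rouge = evaluate.load("rouge")

def rouge_on_xlsum(model_id, n=100):
    """Summarize the first n test articles and score them against the references."""
    summarize = pipeline("summarization", model=model_id)
    rows = xlsum_en.select(range(n))
    preds = [summarize(r["text"], max_length=64, truncation=True)[0]["summary_text"]
             for r in rows]
    return rouge.compute(predictions=preds, references=[r["summary"] for r in rows])

print(rouge_on_xlsum("naimur900/pegasus_xlsum"))           # assumed repo id
print(rouge_on_xlsum("csebuetnlp/mT5_multilingual_XLSum"))
```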