adedamola26 committed
Commit e7e3d08 · verified · 1 Parent(s): 95f730e

Update README.md

Files changed (1): README.md (+51 −44)

README.md CHANGED
@@ -1,29 +1,42 @@
 ---
 pipeline_tag: summarization
 datasets:
- - samsum
 language:
- - en
 metrics:
- - rouge
 library_name: transformers
 widget:
- - text: |
- John: Hey! I've been thinking about getting a PlayStation 5. Do you think it is worth it?
- Dan: Idk man. R u sure ur going to have enough free time to play it?
- John: Yeah, that's why I'm not sure if I should buy one or not. I've been working so much lately idk if I'm gonna be able to play it as much as I'd like.
- - text: |
- Sarah: Do you think it's a good idea to invest in Bitcoin?
- Emily: I'm skeptical. The market is very volatile, and you could lose money.
- Sarah: True. But there's also a high upside, right?
- - text: |
- Madison: Hello Lawrence are you through with the article?
- Lawrence: Not yet sir.
- Lawrence: But i will be in a few.
- Madison: Okay. But make it quick.
- Madison: The piece is needed by today
- Lawrence: Sure thing
- Lawrence: I will get back to you once i am through."

 model-index:
 - name: bart-finetuned-samsum
@@ -37,52 +50,47 @@ model-index:
 metrics:
 - name: Validation ROUGE-1
   type: rouge-1
- value: 53.8804
 - name: Validation ROUGE-2
   type: rouge-2
- value: 29.2329
 - name: Validation ROUGE-L
   type: rougeL
- value: 44.774
 - name: Validation ROUGE-L Sum
   type: rougeLsum
- value: 49.8255
- - name: Test ROUGE-1
-   type: rouge-1
- value: 52.8156
- - name: Test ROUGE-2
-   type: rouge-2
- value: 28.1259
- - name: Test ROUGE-L
-   type: rougeL
- value: 43.7147
- - name: Test ROUGE-L Sum
-   type: rougeLsum
- value: 48.5712
 ---

 # Description

- This model is a specialized adaptation of the <b>facebook/bart-large-xsum</b>, fine-tuned for enhanced performance on dialogue summarization using the <b>SamSum</b> dataset.

 ## Development
- - Kaggle Notebook: [Text Summarization with Large Language Models](https://www.kaggle.com/code/lusfernandotorres/text-summarization-with-large-language-models)

 ## Usage

 ```python
 from transformers import pipeline

- model = pipeline("summarization", model="luisotorres/bart-finetuned-samsum")

- conversation = '''Sarah: Do you think it's a good idea to invest in Bitcoin?
- Emily: I'm skeptical. The market is very volatile, and you could lose money.
- Sarah: True. But there's also a high upside, right?
 '''
 model(conversation)
 ```

 ## Training Parameters

 ```python
 evaluation_strategy = "epoch",
 save_strategy = 'epoch',
@@ -101,7 +109,6 @@ fp16=True,
 report_to="none"
 ```

- ## Reference
- This model is based on the original <b>BART</b> architecture, as detailed in:

- Lewis et al. (2019). BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. [arXiv:1910.13461](https://arxiv.org/abs/1910.13461)
 ---
 pipeline_tag: summarization
 datasets:
+ - samsum
 language:
+ - en
 metrics:
+ - rouge
 library_name: transformers
 widget:
+ - text: |
+ Rita: I'm so bloody tired. Falling asleep at work. :-(
+ Tina: I know what you mean.
+ Tina: I keep on nodding off at my keyboard hoping that the boss doesn't notice..
+ Rita: The time just keeps on dragging on and on and on....
+ Rita: I keep on looking at the clock and there's still 4 hours of this drudgery to go.
+ Tina: Times like these I really hate my work.
+ Rita: I'm really not cut out for this level of boredom.
+ Tina: Neither am I.
+ - text: |
+ Beatrice: I am in town, shopping. They have nice scarfs in the shop next to the church. Do you want one?
+ Leo: No, thanks
+ Beatrice: But you don't have a scarf.
+ Leo: Because I don't need it.
+ Beatrice: Last winter you had a cold all the time. A scarf could help.
+ Leo: I don't like them.
+ Beatrice: Actually, I don't care. You will get a scarf.
+ Leo: How understanding of you!
+ Beatrice: You were complaining the whole winter that you're going to die. I've had enough.
+ Leo: Eh.
+ - text: |
+ Jack: Cocktails later?
+ May: YES!!!
+ May: You read my mind...
+ Jack: Possibly a little tightly strung today?
+ May: Sigh... without question.
+ Jack: Thought so.
+ May: A little drink will help!
+ Jack: Maybe two!

 model-index:
 - name: bart-finetuned-samsum

 metrics:
 - name: Validation ROUGE-1
   type: rouge-1
+ value: 53.6163
 - name: Validation ROUGE-2
   type: rouge-2
+ value: 28.914
 - name: Validation ROUGE-L
   type: rougeL
+ value: 44.1443
 - name: Validation ROUGE-L Sum
   type: rougeLsum
+ value: 49.2995
 ---

 # Description

+ This model was trained by fine-tuning the [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) model using [these parameters](#training-parameters) and the [samsum dataset](https://huggingface.co/datasets/samsum).

 ## Development
+
+ - Jupyter Notebook: [Text Summarization With BART](https://github.com/adedamola26/text-summarization/blob/main/Text_Summarization_with_BART.ipynb)

 ## Usage

 ```python
 from transformers import pipeline

+ model = pipeline("summarization", model="adedamolade26/bart-finetuned-samsum")

+ conversation = '''Jack: Cocktails later?
+ May: YES!!!
+ May: You read my mind...
+ Jack: Possibly a little tightly strung today?
+ May: Sigh... without question.
+ Jack: Thought so.
+ May: A little drink will help!
+ Jack: Maybe two!
 '''
 model(conversation)
 ```
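Editor's note: the `model(conversation)` call above returns a list of dicts rather than a plain string. A minimal sketch of consuming that return value; the `outputs` value here is a stub standing in for a real pipeline call, so the summary text is illustrative, not actual model output:

```python
# The "summarization" pipeline returns a list with one dict per input,
# each carrying a "summary_text" key. Using a stub lets this run
# without downloading the checkpoint.

def extract_summary(outputs):
    """Pull the summary string out of a pipeline result."""
    return outputs[0]["summary_text"]

# Stub standing in for: outputs = model(conversation)
outputs = [{"summary_text": "Jack and May agree to go for cocktails."}]
print(extract_summary(outputs))  # prints the stubbed summary string
```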

 ## Training Parameters
+
 ```python
 evaluation_strategy = "epoch",
 save_strategy = 'epoch',

 report_to="none"
 ```
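Editor's note: the fragment above is a list of keyword arguments; in the transformers API such parameters are typically collected into a `Seq2SeqTrainingArguments` object. A hedged sketch under that assumption; values not shown in the README, such as `output_dir`, are placeholders rather than the author's settings:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: evaluation_strategy, save_strategy, fp16, and report_to
# come from the fragment above; output_dir is a placeholder.
# fp16=True assumes a CUDA GPU is available at training time.
training_args = Seq2SeqTrainingArguments(
    output_dir="bart-finetuned-samsum",  # placeholder, not from the README
    evaluation_strategy="epoch",
    save_strategy="epoch",
    fp16=True,
    report_to="none",
)
```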

+ ## References
 

+ The model training process was adapted from Luis Fernando Torres's Kaggle Notebook, [📝 Text Summarization with Large Language Models](https://www.kaggle.com/code/lusfernandotorres/text-summarization-with-large-language-models).
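Editor's note: as a rough intuition for the ROUGE values reported in the model-index metadata, ROUGE-1 measures unigram overlap between a generated summary and a reference. A minimal, illustrative F1-style ROUGE-1; this is not the implementation behind the reported scores, which typically come from the `rouge_score`/`evaluate` packages:

```python
from collections import Counter

def rouge1_f1(prediction: str, reference: str) -> float:
    """Unigram-overlap F1 between a predicted and a reference summary."""
    pred = Counter(prediction.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((pred & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Toy strings (illustrative, not model output): 6 shared unigrams,
# 7 predicted tokens, 8 reference tokens -> F1 = 0.8
score = rouge1_f1(
    "jack and may will get cocktails later",
    "jack and may agree to get cocktails later",
)
print(round(score, 4))  # → 0.8
```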