GlycerinLOL committed on
Commit
caca060
1 Parent(s): eb1480d

Model save

README.md ADDED
@@ -0,0 +1,76 @@
+ ---
+ license: apache-2.0
+ base_model: facebook/bart-large
+ tags:
+ - generated_from_trainer
+ metrics:
+ - rouge
+ - precision
+ - recall
+ - f1
+ model-index:
+ - name: LLM_Teached_Bart_From_Scratch
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # LLM_Teached_Bart_From_Scratch
+
+ This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.4999
+ - Rouge1: 0.4331
+ - Rouge2: 0.2164
+ - Rougel: 0.3724
+ - Rougelsum: 0.3725
+ - Gen Len: 19.9255
+ - Precision: 0.9125
+ - Recall: 0.8885
+ - F1: 0.9002
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 32
+ - eval_batch_size: 16
+ - seed: 42
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 128
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 4
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Precision | Recall | F1     |
+ |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|:---------:|:------:|:------:|
+ | No log        | 1.0   | 390  | 1.5709          | 0.4119 | 0.2002 | 0.3529 | 0.3527    | 19.9709 | 0.9093    | 0.8846 | 0.8966 |
+ | 1.8155        | 2.0   | 781  | 1.5361          | 0.4331 | 0.2157 | 0.3717 | 0.3717    | 19.9185 | 0.9123    | 0.8889 | 0.9003 |
+ | 1.5875        | 3.0   | 1172 | 1.5030          | 0.4263 | 0.2129 | 0.3671 | 0.3673    | 19.9545 | 0.9117    | 0.8871 | 0.8990 |
+ | 1.4978        | 3.99  | 1560 | 1.4999          | 0.4331 | 0.2164 | 0.3724 | 0.3725    | 19.9255 | 0.9125    | 0.8885 | 0.9002 |
+
+
+ ### Framework versions
+
+ - Transformers 4.36.0
+ - Pytorch 2.0.1+cu117
+ - Datasets 2.14.5
+ - Tokenizers 0.15.0
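The hyperparameters above imply an effective batch of 128 sequences per optimizer step (32 per device × 4 gradient-accumulation steps), and the linear scheduler decays the learning rate toward zero over the 1560 optimizer steps shown in the results table. A minimal sketch of both relationships; the warmup-step count is assumed to be 0, which the card does not state:

```python
# Effective batch size with gradient accumulation (values from the card).
train_batch_size = 32
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 128

# Linear decay from 2e-05 to 0 over 1560 optimizer steps (no warmup assumed).
def linear_lr(step, base_lr=2e-05, total_steps=1560):
    return base_lr * max(0.0, 1 - step / total_steps)

print(linear_lr(0))    # 2e-05, the configured learning_rate
print(linear_lr(780))  # 1e-05, halfway through training
```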
generation_config.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "bos_token_id": 0,
+   "decoder_start_token_id": 2,
+   "early_stopping": true,
+   "eos_token_id": 2,
+   "forced_bos_token_id": 0,
+   "forced_eos_token_id": 2,
+   "no_repeat_ngram_size": 3,
+   "num_beams": 4,
+   "pad_token_id": 1,
+   "transformers_version": "4.36.0"
+ }
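The generation config enables 4-beam search with `no_repeat_ngram_size: 3`, which bans any token that would complete a trigram already present in the output. A self-contained sketch of that banning rule (an illustration of the idea, not the actual Transformers implementation):

```python
def banned_next_tokens(tokens, n=3):
    """Return token ids that would complete an n-gram already
    present in `tokens` (the rule behind no_repeat_ngram_size=n)."""
    if len(tokens) < n - 1:
        return set()
    prefix = tuple(tokens[-(n - 1):])  # last n-1 generated tokens
    banned = set()
    for i in range(len(tokens) - n + 1):
        # If this position starts the same (n-1)-token prefix,
        # its following token must not be generated again.
        if tuple(tokens[i:i + n - 1]) == prefix:
            banned.add(tokens[i + n - 1])
    return banned

print(banned_next_tokens([5, 7, 9, 5, 7], n=3))  # {9}: emitting 9 would repeat (5, 7, 9)
```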
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7947661f5f344c0a93c7df5475ecd6e537a5d48bb7343b1597ff87db9b296a93
+ oid sha256:7a48c0969c18fe75caf334533acb10619e57668b490a08aa414ef206d029df1e
  size 1625426996
runs/Mar03_21-26-30_oi5vv8ctr1709312124223-tkfr5/events.out.tfevents.1709472393.oi5vv8ctr1709312124223-tkfr5.1103.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3322fa71b47aebe104eda99fb8eedeaa86444ec1881c6d19db144784a2114b86
- size 8057
+ oid sha256:20f6dddbc717944747b2dee808d7d151ad777819c501c863cdfdf192f8524be6
+ size 9085
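The two changed entries above are Git LFS pointer files, not the binaries themselves: each is a tiny text file recording the version of the pointer spec, the SHA-256 of the real object, and its byte size. A minimal parser sketch for that key/value format, fed the new safetensors pointer from this commit:

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")  # each line is "<key> <value>"
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:7a48c0969c18fe75caf334533acb10619e57668b490a08aa414ef206d029df1e
size 1625426996"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 1625426996 (about 1.6 GB of weights behind a 3-line pointer)
```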