ukzash1 committed
Commit 0241fa9
1 Parent(s): bbb1dfe

End of training

README.md ADDED
@@ -0,0 +1,109 @@
+ ---
+ license: apache-2.0
+ base_model: distilbert/distilgpt2
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: result_llm
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # result_llm
+
+ This model is a fine-tuned version of [distilbert/distilgpt2](https://huggingface.co/distilbert/distilgpt2) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: nan
+
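+ Since the validation loss is `nan` at every logged step (see the table below), the checkpoint deserves a sanity check before use. A minimal sketch, assuming the model is published as `ukzash1/result_llm` under the committer's namespace (the card itself only names the run `result_llm`, and this commit ships no tokenizer files, so the base model's tokenizer is used):
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ repo_id = "ukzash1/result_llm"  # assumed repo path, not stated in the card
+ tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2")  # no tokenizer in this commit
+ model = AutoModelForCausalLM.from_pretrained(repo_id)
+
+ # A finite loss here would rule out broken (NaN/Inf) weights.
+ inputs = tokenizer("Hello, world!", return_tensors="pt")
+ with torch.no_grad():
+     out = model(**inputs, labels=inputs["input_ids"])
+ print(out.loss)
+ ```
+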
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 4
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 3
+
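+ For reproducibility, a minimal sketch of the equivalent `TrainingArguments`, reconstructed from the list above (`output_dir` is a guess, since the card does not state it):
+
+ ```python
+ from transformers import TrainingArguments
+
+ training_args = TrainingArguments(
+     output_dir="result_llm",  # assumed; only the run name appears in the card
+     learning_rate=5e-05,
+     per_device_train_batch_size=4,
+     per_device_eval_batch_size=8,
+     seed=42,
+     adam_beta1=0.9,
+     adam_beta2=0.999,
+     adam_epsilon=1e-08,
+     lr_scheduler_type="linear",
+     num_train_epochs=3,
+ )
+ ```
+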
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:------:|:-----:|:---------------:|
+ | 8.289 | 0.0554 | 500 | nan |
+ | 6.8357 | 0.1109 | 1000 | nan |
+ | 6.7413 | 0.1663 | 1500 | nan |
+ | 6.6101 | 0.2218 | 2000 | nan |
+ | 6.6348 | 0.2772 | 2500 | nan |
+ | 6.6871 | 0.3326 | 3000 | nan |
+ | 6.602 | 0.3881 | 3500 | nan |
+ | 6.6078 | 0.4435 | 4000 | nan |
+ | 6.5465 | 0.4989 | 4500 | nan |
+ | 6.5643 | 0.5544 | 5000 | nan |
+ | 6.5696 | 0.6098 | 5500 | nan |
+ | 6.5294 | 0.6653 | 6000 | nan |
+ | 6.5638 | 0.7207 | 6500 | nan |
+ | 6.4361 | 0.7761 | 7000 | nan |
+ | 6.4547 | 0.8316 | 7500 | nan |
+ | 6.5327 | 0.8870 | 8000 | nan |
+ | 6.3524 | 0.9425 | 8500 | nan |
+ | 6.4341 | 0.9979 | 9000 | nan |
+ | 6.3677 | 1.0533 | 9500 | nan |
+ | 6.199 | 1.1088 | 10000 | nan |
+ | 6.3033 | 1.1642 | 10500 | nan |
+ | 6.2976 | 1.2196 | 11000 | nan |
+ | 6.2322 | 1.2751 | 11500 | nan |
+ | 6.2222 | 1.3305 | 12000 | nan |
+ | 6.2119 | 1.3860 | 12500 | nan |
+ | 6.2336 | 1.4414 | 13000 | nan |
+ | 6.349 | 1.4968 | 13500 | nan |
+ | 6.311 | 1.5523 | 14000 | nan |
+ | 6.2247 | 1.6077 | 14500 | nan |
+ | 6.2851 | 1.6632 | 15000 | nan |
+ | 6.35 | 1.7186 | 15500 | nan |
+ | 6.2996 | 1.7740 | 16000 | nan |
+ | 6.3229 | 1.8295 | 16500 | nan |
+ | 6.3609 | 1.8849 | 17000 | nan |
+ | 6.3063 | 1.9403 | 17500 | nan |
+ | 6.2759 | 1.9958 | 18000 | nan |
+ | 6.2499 | 2.0512 | 18500 | nan |
+ | 6.1473 | 2.1067 | 19000 | nan |
+ | 6.2088 | 2.1621 | 19500 | nan |
+ | 6.2482 | 2.2175 | 20000 | nan |
+ | 6.2123 | 2.2730 | 20500 | nan |
+ | 6.2298 | 2.3284 | 21000 | nan |
+ | 6.2666 | 2.3839 | 21500 | nan |
+ | 6.21 | 2.4393 | 22000 | nan |
+ | 6.2396 | 2.4947 | 22500 | nan |
+ | 6.2626 | 2.5502 | 23000 | nan |
+ | 6.1824 | 2.6056 | 23500 | nan |
+ | 6.3142 | 2.6610 | 24000 | nan |
+ | 6.2816 | 2.7165 | 24500 | nan |
+ | 6.2371 | 2.7719 | 25000 | nan |
+ | 6.3075 | 2.8274 | 25500 | nan |
+ | 6.2306 | 2.8828 | 26000 | nan |
+ | 6.2919 | 2.9382 | 26500 | nan |
+ | 6.2668 | 2.9937 | 27000 | nan |
+
+ ### Framework versions
+
+ - Transformers 4.42.4
+ - PyTorch 2.4.0+cu121
+ - Tokenizers 0.19.1
config.json ADDED
@@ -0,0 +1,46 @@
+ {
+   "_name_or_path": "distilbert/distilgpt2",
+   "_num_labels": 1,
+   "activation_function": "gelu_new",
+   "architectures": [
+     "GPT2LMHeadModel"
+   ],
+   "attn_pdrop": 0.1,
+   "bos_token_id": 50256,
+   "embd_pdrop": 0.1,
+   "eos_token_id": 50256,
+   "id2label": {
+     "0": "LABEL_0"
+   },
+   "initializer_range": 0.02,
+   "label2id": {
+     "LABEL_0": 0
+   },
+   "layer_norm_epsilon": 1e-05,
+   "model_type": "gpt2",
+   "n_ctx": 1024,
+   "n_embd": 768,
+   "n_head": 12,
+   "n_inner": null,
+   "n_layer": 6,
+   "n_positions": 1024,
+   "reorder_and_upcast_attn": false,
+   "resid_pdrop": 0.1,
+   "scale_attn_by_inverse_layer_idx": false,
+   "scale_attn_weights": true,
+   "summary_activation": null,
+   "summary_first_dropout": 0.1,
+   "summary_proj_to_labels": true,
+   "summary_type": "cls_index",
+   "summary_use_proj": true,
+   "task_specific_params": {
+     "text-generation": {
+       "do_sample": true,
+       "max_length": 50
+     }
+   },
+   "torch_dtype": "float32",
+   "transformers_version": "4.42.4",
+   "use_cache": true,
+   "vocab_size": 50258
+ }
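One detail worth flagging in this config: `vocab_size` is 50258, one more than stock distilgpt2's 50257, which usually means a special token (for example a padding token) was added before fine-tuning. A minimal inspection sketch, under the same assumed repo id as above:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("ukzash1/result_llm")  # assumed repo path

print(config.vocab_size)                             # 50258: one extra (likely pad) token
print(config.n_layer, config.n_head, config.n_embd)  # 6, 12, 768 (distilgpt2-sized)
```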
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 50256,
+   "eos_token_id": 50256,
+   "transformers_version": "4.42.4"
+ }
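This generation config pins only the BOS/EOS token ids, so sampling behavior must come from explicit `generate()` arguments or from the legacy `task_specific_params` entry in `config.json` (`do_sample: true`, `max_length: 50`). A minimal generation sketch under the same assumptions as above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ukzash1/result_llm"  # assumed repo path
tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2")  # no tokenizer in this commit
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Once upon a time", return_tensors="pt")
# do_sample/max_length mirror the legacy task_specific_params in config.json.
outputs = model.generate(**inputs, do_sample=True, max_length=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```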
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0f5c311cc316ebc3746aa5c53d548224f344fa95fa5db1f8b06bd4bc1c72f996
+ size 327661000
runs/Aug29_04-13-08_78287ca2ad8f/events.out.tfevents.1724904798.78287ca2ad8f.635.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4f7a2055fa16e115036f27362327ad341673eefb60dc6f1d2dc1d3e846c5649b
+ size 31753
runs/Aug29_04-13-08_78287ca2ad8f/events.out.tfevents.1724906267.78287ca2ad8f.635.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:367f3c2f0544c7f964be311eb90b6a8db7bea6544df85fa17e064ce97d79eac3
+ size 364
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e7be070f0f7f634428e69f0e6634ac4c06657511a251c32998f3eaf92977a17e
+ size 5112
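As an aside, `training_args.bin` is the pickled `TrainingArguments` object that `Trainer` saves next to the model, so the full training configuration can be recovered from it. A minimal sketch, assuming the file has been downloaded locally:

```python
import torch

# Pickled TrainingArguments; weights_only=False is needed because it is not a tensor file.
args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate, args.num_train_epochs, args.lr_scheduler_type)
```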