sheepy928 committed
Commit f7f1a62
Parent: 218695e

Model save

README.md ADDED
@@ -0,0 +1,102 @@
+ ---
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: src_prober_codellama-13b-last1unfreeze
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # src_prober_codellama-13b-last1unfreeze
+
+ This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.6267
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 4
+ - eval_batch_size: 8
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 4
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 64
+ - total_eval_batch_size: 32
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 1000
+ - num_epochs: 5
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:-----:|:---------------:|
+ | 0.7443 | 0.12 | 500 | 0.7429 |
+ | 0.6851 | 0.24 | 1000 | 0.7170 |
+ | 0.6723 | 0.36 | 1500 | 0.6912 |
+ | 0.6605 | 0.48 | 2000 | 0.6730 |
+ | 0.6475 | 0.6 | 2500 | 0.6643 |
+ | 0.6419 | 0.72 | 3000 | 0.6584 |
+ | 0.6307 | 0.85 | 3500 | 0.6532 |
+ | 0.6167 | 0.97 | 4000 | 0.6495 |
+ | 0.6272 | 1.09 | 4500 | 0.6477 |
+ | 0.6002 | 1.21 | 5000 | 0.6445 |
+ | 0.6303 | 1.33 | 5500 | 0.6429 |
+ | 0.6405 | 1.45 | 6000 | 0.6421 |
+ | 0.6041 | 1.57 | 6500 | 0.6387 |
+ | 0.5912 | 1.69 | 7000 | 0.6370 |
+ | 0.6121 | 1.81 | 7500 | 0.6360 |
+ | 0.613 | 1.93 | 8000 | 0.6344 |
+ | 0.6126 | 2.05 | 8500 | 0.6338 |
+ | 0.5932 | 2.17 | 9000 | 0.6344 |
+ | 0.5927 | 2.3 | 9500 | 0.6332 |
+ | 0.5883 | 2.42 | 10000 | 0.6317 |
+ | 0.6023 | 2.54 | 10500 | 0.6308 |
+ | 0.5898 | 2.66 | 11000 | 0.6311 |
+ | 0.576 | 2.78 | 11500 | 0.6291 |
+ | 0.5699 | 2.9 | 12000 | 0.6291 |
+ | 0.6093 | 3.02 | 12500 | 0.6290 |
+ | 0.5754 | 3.14 | 13000 | 0.6292 |
+ | 0.6294 | 3.26 | 13500 | 0.6282 |
+ | 0.591 | 3.38 | 14000 | 0.6283 |
+ | 0.599 | 3.5 | 14500 | 0.6273 |
+ | 0.5933 | 3.62 | 15000 | 0.6281 |
+ | 0.565 | 3.75 | 15500 | 0.6268 |
+ | 0.5884 | 3.87 | 16000 | 0.6267 |
+ | 0.5809 | 3.99 | 16500 | 0.6266 |
+ | 0.5618 | 4.11 | 17000 | 0.6271 |
+ | 0.5749 | 4.23 | 17500 | 0.6274 |
+ | 0.577 | 4.35 | 18000 | 0.6268 |
+ | 0.5947 | 4.47 | 18500 | 0.6267 |
+ | 0.5902 | 4.59 | 19000 | 0.6268 |
+ | 0.5869 | 4.71 | 19500 | 0.6268 |
+ | 0.5829 | 4.83 | 20000 | 0.6268 |
+ | 0.5587 | 4.95 | 20500 | 0.6267 |
+
+
+ ### Framework versions
+
+ - Transformers 4.39.3
+ - Pytorch 2.2.0
+ - Datasets 2.17.0
+ - Tokenizers 0.15.2
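
For orientation, the hyperparameters listed in the model card above map onto `transformers.TrainingArguments` roughly as sketched below. This is a minimal sketch, not the original training script: the output directory and everything the card does not list (dataset, model, evaluation cadence) are assumptions, while a per-device batch size of 4 across 4 GPUs with gradient accumulation of 4 reproduces the stated total train batch size of 64.

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; output_dir and anything not listed in
# the model card (dataset, model, evaluation strategy) are assumptions.
training_args = TrainingArguments(
    output_dir="src_prober_codellama-13b-last1unfreeze",
    learning_rate=5e-5,
    per_device_train_batch_size=4,   # 4 per device x 4 GPUs x 4 accumulation steps = 64 total
    per_device_eval_batch_size=8,    # 8 per device x 4 GPUs = 32 total
    gradient_accumulation_steps=4,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=5,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                       # "Native AMP" mixed precision
)
```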
generation_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "_from_model_config": true,
+   "transformers_version": "4.39.3"
+ }
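
The new generation_config.json only records that the generation settings were derived from the model config and pins the transformers version. It can be inspected with `GenerationConfig`; the repo id below is assumed from the committer and model name:

```python
from transformers import GenerationConfig

# Repo id assumed from the committer and model name in this commit.
gen_cfg = GenerationConfig.from_pretrained("sheepy928/src_prober_codellama-13b-last1unfreeze")
print(gen_cfg)  # defaults inherited from the model config, plus transformers_version
```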
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d881866cb21742071f3eea08b5e3fdb077d3a1fbd594fe8d34a9be02eb338aac
+ oid sha256:1adc2dc02080f837d0cddb5483e336432dfe8d54dfebbb1d4ea1de200ab88efc
  size 620848256
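
The weights themselves live in Git LFS, so this commit only swaps the pointer: same 620,848,256-byte file size, new sha256 oid. One way to check that a downloaded copy matches the new pointer is to hash it locally; the file path below is an assumption:

```python
import hashlib
from pathlib import Path

EXPECTED_OID = "1adc2dc02080f837d0cddb5483e336432dfe8d54dfebbb1d4ea1de200ab88efc"

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so the ~620 MB of weights never sits in memory at once."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Path is an assumption: wherever model.safetensors was downloaded to.
print(sha256_of(Path("model.safetensors")) == EXPECTED_OID)
```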
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,60 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "4": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "30522": {
+       "content": "<INST>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "tokenizer_class": "LongelmTokenizer",
+   "unk_token": "[UNK]"
+ }
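
Note that tokenizer_class is LongelmTokenizer, a custom class rather than the stock CodeLlama tokenizer, with BERT-style special tokens ([CLS], [SEP], [PAD], [MASK], [UNK]) and an added <INST> token at id 30522. A minimal loading sketch, assuming the repo id from the committer and model name and that the custom class is resolvable from the repo or an installed package:

```python
from transformers import AutoTokenizer

# Repo id assumed from the committer and model name; LongelmTokenizer is custom,
# so trust_remote_code=True (or the package defining it) may be required.
tok = AutoTokenizer.from_pretrained(
    "sheepy928/src_prober_codellama-13b-last1unfreeze",
    trust_remote_code=True,
)

print(tok.cls_token, tok.sep_token, tok.pad_token, tok.mask_token, tok.unk_token)
print(tok.convert_tokens_to_ids("<INST>"))  # 30522 per added_tokens_decoder
print(tok.model_max_length)                 # 512
```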