yakazimir committed (verified) · Commit 74ea74b · 1 parent: a9730f6

Model save
README.md ADDED

---
library_name: transformers
license: other
base_model: trl-lib/qwen1.5-0.5b-sft
tags:
- trl
- simpo
- generated_from_trainer
model-index:
- name: qwen_fUNL_entropy_0_01
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# qwen_fUNL_entropy_0_01

This model is a fine-tuned version of [trl-lib/qwen1.5-0.5b-sft](https://huggingface.co/trl-lib/qwen1.5-0.5b-sft) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0504
- Sft Loss: 4.0281
- Rewards/chosen: -4.4231
- Rewards/rejected: -5.1418
- Rewards/accuracies: 0.6862
- Rewards/margins: 0.7187
- Logps/rejected: -5.1418
- Logps/chosen: -4.4231
- Logits/rejected: -0.2955
- Logits/chosen: -0.3687
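
Since the card does not yet include usage instructions, here is a minimal inference sketch. The repo id `yakazimir/qwen_fUNL_entropy_0_01` is an assumption inferred from the committer and model name above, not something stated in the card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id, inferred from the committer and model name.
model_id = "yakazimir/qwen_fUNL_entropy_0_01"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The saved generation_config.json (see below) stops on token id 151643
# and caps generation at max_new_tokens=2048 unless overridden here.
inputs = tokenizer("What is the capital of France?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```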

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
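
The card does not record the exact trainer invocation. Given the `trl` and `simpo` tags, a plausible reconstruction maps these hyperparameters onto TRL's `CPOConfig` (recent TRL releases expose SimPO through `CPOTrainer` with `loss_type="simpo"`). This is a sketch under that assumption; the model name suggests a custom loss variant, and `beta`/`simpo_gamma` and the dataset are unknown.

```python
from trl import CPOConfig

# Illustrative mapping of the hyperparameters above; the actual trainer
# behind this run (note the logged "Sft Loss" column) is not recorded.
config = CPOConfig(
    output_dir="qwen_fUNL_entropy_0_01",
    learning_rate=1e-6,
    per_device_train_batch_size=2,    # train_batch_size
    per_device_eval_batch_size=4,     # eval_batch_size
    gradient_accumulation_steps=16,   # 2 x 16 = total train batch size of 32
    num_train_epochs=3.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,                 # lr_scheduler_warmup_ratio
    seed=42,
    loss_type="simpo",                # assumption, based on the "simpo" tag
)
```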

### Training results

| Training Loss | Epoch | Step | Validation Loss | Sft Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.0548 | 0.2141 | 400 | 0.0557 | 4.8295 | -5.3467 | -5.4723 | 0.5326 | 0.1256 | -5.4723 | -5.3467 | 0.1095 | -0.0277 |
| 0.0537 | 0.4282 | 800 | 0.0529 | 4.1330 | -4.6614 | -4.9903 | 0.6024 | 0.3289 | -4.9903 | -4.6614 | 0.2188 | 0.0763 |
| 0.0545 | 0.6422 | 1200 | 0.0523 | 4.2856 | -4.6580 | -5.0486 | 0.6350 | 0.3906 | -5.0486 | -4.6580 | 0.0914 | -0.0257 |
| 0.0518 | 0.8563 | 1600 | 0.0519 | 4.0636 | -4.5007 | -4.9176 | 0.6313 | 0.4169 | -4.9176 | -4.5007 | 0.0782 | -0.0290 |
| 0.0537 | 1.0704 | 2000 | 0.0517 | 3.9662 | -4.4270 | -4.8924 | 0.6469 | 0.4654 | -4.8924 | -4.4270 | -0.1550 | -0.2400 |
| 0.0533 | 1.2845 | 2400 | 0.0514 | 4.4069 | -4.8229 | -5.4257 | 0.6632 | 0.6028 | -5.4257 | -4.8229 | -0.1556 | -0.2460 |
| 0.0522 | 1.4986 | 2800 | 0.0511 | 4.2244 | -4.5446 | -5.1374 | 0.6803 | 0.5928 | -5.1374 | -4.5446 | -0.2984 | -0.3849 |
| 0.053 | 1.7127 | 3200 | 0.0508 | 4.1193 | -4.4960 | -5.1073 | 0.6691 | 0.6113 | -5.1073 | -4.4960 | -0.2032 | -0.2947 |
| 0.0538 | 1.9267 | 3600 | 0.0505 | 4.0434 | -4.4193 | -5.0638 | 0.6847 | 0.6445 | -5.0638 | -4.4193 | -0.2476 | -0.3292 |
| 0.0504 | 2.1408 | 4000 | 0.0505 | 4.0585 | -4.4646 | -5.1658 | 0.6840 | 0.7011 | -5.1658 | -4.4646 | -0.2103 | -0.2919 |
| 0.053 | 2.3549 | 4400 | 0.0505 | 4.0905 | -4.4767 | -5.1722 | 0.6840 | 0.6956 | -5.1722 | -4.4767 | -0.2850 | -0.3632 |
| 0.0525 | 2.5690 | 4800 | 0.0504 | 4.0700 | -4.4483 | -5.1426 | 0.6832 | 0.6943 | -5.1426 | -4.4483 | -0.1890 | -0.2741 |
| 0.0509 | 2.7831 | 5200 | 0.0504 | 4.0135 | -4.3932 | -5.0993 | 0.6855 | 0.7061 | -5.0993 | -4.3932 | -0.1516 | -0.2376 |
| 0.0504 | 2.9972 | 5600 | 0.0504 | 4.0281 | -4.4231 | -5.1418 | 0.6862 | 0.7187 | -5.1418 | -4.4231 | -0.2955 | -0.3687 |
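
As a consistency check on the table: Rewards/margins is Rewards/chosen minus Rewards/rejected (for the final row, -4.4231 - (-5.1418) = 0.7187, matching the reported margin). The Logps columns also coincide with the Rewards columns in every row, which suggests the logged reward here is the policy log-probability itself, as in SimPO-style length-normalized rewards.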

### Framework versions

- Transformers 4.44.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1

all_results.json ADDED
{
  "epoch": 2.999297541394882,
  "total_flos": 0.0,
  "train_loss": 0.0619825580417259,
  "train_runtime": 31817.9407,
  "train_samples": 59790,
  "train_samples_per_second": 5.637,
  "train_steps_per_second": 0.176
}
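
The throughput figures are internally consistent: 59790 samples x 3 epochs / 31817.9 s ≈ 5.64 samples/s, and 5.637 / 32 (the total train batch size) ≈ 0.176 steps/s. The `total_flos` of 0.0 typically just means FLOP accounting was not enabled for this run.
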
generation_config.json ADDED
{
  "bos_token_id": 151643,
  "eos_token_id": 151643,
  "max_new_tokens": 2048,
  "transformers_version": "4.44.2"
}
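
These defaults are picked up automatically by `generate()`, and can be inspected directly; a small sketch (the repo id is again a hypothetical inferred from the committer and model name):

```python
from transformers import GenerationConfig

# Load the saved generation defaults from the (assumed) repo id.
gen_config = GenerationConfig.from_pretrained("yakazimir/qwen_fUNL_entropy_0_01")

# BOS and EOS share id 151643 (Qwen's <|endoftext|> token), so generation
# stops on that token or after 2048 new tokens unless overridden per call.
print(gen_config.bos_token_id, gen_config.eos_token_id, gen_config.max_new_tokens)
```
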
train_results.json ADDED
{
  "epoch": 2.999297541394882,
  "total_flos": 0.0,
  "train_loss": 0.0619825580417259,
  "train_runtime": 31817.9407,
  "train_samples": 59790,
  "train_samples_per_second": 5.637,
  "train_steps_per_second": 0.176
}

trainer_state.json ADDED
The diff for this file is too large to render; contents omitted.