shouray committed to shouray/Condition-Model-3
Commit 8aef58c (verified) · 1 parent: 8b2f8a9
README.md CHANGED
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->

  This model is a fine-tuned version of [TheBloke/Llama-2-13B-chat-GPTQ](https://huggingface.co/TheBloke/Llama-2-13B-chat-GPTQ) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.0570
+ - Loss: 0.1030

  ## Model description

@@ -44,33 +44,21 @@ The following hyperparameters were used during training:
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 2
- - training_steps: 30
+ - training_steps: 100
  - mixed_precision_training: Native AMP

  ### Training results

- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-----:|:----:|:---------------:|
- | 0.8 | 1.0 | 1 | 1.0124 |
- | 0.3868 | 2.0 | 3 | 0.8254 |
- | 0.3112 | 3.0 | 5 | 0.5926 |
- | 0.4737 | 4.0 | 6 | 0.5064 |
- | 0.4067 | 5.0 | 7 | 0.4373 |
- | 0.163 | 6.0 | 9 | 0.3445 |
- | 0.1325 | 7.0 | 11 | 0.2647 |
- | 0.2128 | 8.0 | 12 | 0.2263 |
- | 0.1826 | 9.0 | 13 | 0.1899 |
- | 0.0706 | 10.0 | 15 | 0.1438 |
- | 0.0574 | 11.0 | 17 | 0.1187 |
- | 0.0971 | 12.0 | 18 | 0.1078 |
- | 0.0864 | 13.0 | 19 | 0.0992 |
- | 0.0372 | 14.0 | 21 | 0.0841 |
- | 0.0318 | 15.0 | 23 | 0.0729 |
- | 0.0572 | 16.0 | 24 | 0.0688 |
- | 0.0539 | 17.0 | 25 | 0.0657 |
- | 0.0249 | 18.0 | 27 | 0.0609 |
- | 0.0231 | 19.0 | 29 | 0.0577 |
- | 0.044 | 20.0 | 30 | 0.0570 |
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:------:|:----:|:---------------:|
+ | 1.0135 | 0.9412 | 12 | 0.4043 |
+ | 0.145 | 1.9608 | 25 | 0.1342 |
+ | 0.0703 | 2.9804 | 38 | 0.1080 |
+ | 0.0531 | 4.0 | 51 | 0.1023 |
+ | 0.0532 | 4.9412 | 63 | 0.1040 |
+ | 0.0476 | 5.9608 | 76 | 0.1037 |
+ | 0.0459 | 6.9804 | 89 | 0.1028 |
+ | 0.0449 | 7.8431 | 100 | 0.1030 |


  ### Framework versions
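
For readers mapping the updated card back onto code, the hyperparameter hunk above corresponds roughly to a `transformers.TrainingArguments` setup like the sketch below. This is an illustration, not the training script from this commit: the learning rate, batch size, and output directory do not appear in this diff and are placeholders.

```python
# Sketch only: reconstructs the TrainingArguments implied by the updated card.
# learning_rate, per_device_train_batch_size, and output_dir are placeholders
# (not part of this diff); the remaining values come from the hunk above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Condition-Model-3",     # placeholder output path
    learning_rate=2e-4,                 # placeholder: not shown in this diff
    per_device_train_batch_size=1,      # placeholder: not shown in this diff
    adam_beta1=0.9,                     # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                  # ... and epsilon=1e-08
    lr_scheduler_type="linear",         # lr_scheduler_type: linear
    warmup_steps=2,                     # lr_scheduler_warmup_steps: 2
    max_steps=100,                      # training_steps: 100 (raised from 30 in this commit)
    fp16=True,                          # mixed_precision_training: Native AMP
    logging_steps=1,
)
```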
adapter_config.json CHANGED
@@ -20,10 +20,10 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
-     "o_proj",
      "k_proj",
-     "v_proj",
-     "q_proj"
+     "o_proj",
+     "q_proj",
+     "v_proj"
   ],
   "task_type": "CAUSAL_LM",
   "use_dora": false,
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a790be2ccc118bf5f53205683a05aaab2038957693175722da253a20b2dc31d0
+ oid sha256:a9ae492e16ddbddbaa034401c54f7a806e99118b05b17b2a1b2326058b708c77
  size 52471504
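
The pointer above follows the git-lfs spec: `oid sha256:` is the SHA-256 of the actual file contents and `size` is its byte length, so a downloaded copy of the new adapter weights can be checked against this commit as in the sketch below (the local path is an assumption).

```python
# Sketch: verify a downloaded adapter_model.safetensors against the LFS pointer
# recorded in this commit.
import hashlib
from pathlib import Path

EXPECTED_OID = "a9ae492e16ddbddbaa034401c54f7a806e99118b05b17b2a1b2326058b708c77"
EXPECTED_SIZE = 52471504

data = Path("adapter_model.safetensors").read_bytes()
assert len(data) == EXPECTED_SIZE, "size does not match the LFS pointer"
assert hashlib.sha256(data).hexdigest() == EXPECTED_OID, "sha256 does not match the LFS pointer"
print("adapter_model.safetensors matches this commit's LFS pointer")
```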
runs/Jun26_13-25-12_13d3a7536ccc/events.out.tfevents.1719408313.13d3a7536ccc.162.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:265ba577d736b5dc902d131937d124e3361cf399368842883b7035dd41b99ad0
+ size 9696
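
The newly added file is a TensorBoard event log for this run. If the runs/ directory has been pulled locally, its scalars can be read back with the TensorBoard event reader as sketched below; the available tag names are whatever the Trainer logged (typically train and eval loss), not something this commit guarantees.

```python
# Sketch: dump the scalar series from the newly added TensorBoard event file.
# Assumes the runs/ directory from this repo has been downloaded locally.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("runs/Jun26_13-25-12_13d3a7536ccc")
acc.Reload()
for tag in acc.Tags()["scalars"]:
    events = acc.Scalars(tag)
    print(tag, "->", [(e.step, round(e.value, 4)) for e in events])
```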
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5cec7146a65f7d55f647021da9410ca0431bcca38582b3cd4a842a4c6032570b
+ oid sha256:1b381e15e59d6da5795cafb71e3b8c1b0aec8cb6221559305809553e459ae785
  size 5112
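
training_args.bin is the pickled `TrainingArguments` object that the Trainer saves alongside the weights, so the updated values can be inspected directly. `weights_only=False` is needed on recent torch versions because this is a full pickle rather than a tensor file; a compatible transformers install is assumed so the object can be deserialized.

```python
# Sketch: inspect the updated training_args.bin (a pickled TrainingArguments object).
import torch

args = torch.load("training_args.bin", weights_only=False)
print(args.max_steps, args.lr_scheduler_type, args.warmup_steps)  # expect: 100 linear 2
```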