JiAYu1997 committed
Commit: d5640f4
Parent(s): 8c6774e

End of training

Files changed (2)
  1. README.md +4 -4
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -6,14 +6,14 @@ tags:
 - sft
 - generated_from_trainer
 model-index:
-- name: HRJD_FinetuneV2_2
+- name: HRJD_FinetuneV2_3
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# HRJD_FinetuneV2_2
+# HRJD_FinetuneV2_3
 
 This model is a fine-tuned version of [taide/Llama3-TAIDE-LX-8B-Chat-Alpha1](https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1) on the None dataset.
 
@@ -34,7 +34,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 2e-05
+- learning_rate: 5e-05
 - train_batch_size: 1
 - eval_batch_size: 8
 - seed: 42
@@ -43,7 +43,7 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: constant
 - lr_scheduler_warmup_ratio: 0.03
-- training_steps: 3000
+- training_steps: 5000
 
 ### Training results
 
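In substance, this commit bumps the training configuration for the renamed HRJD_FinetuneV2_3 run: the learning rate goes from 2e-05 to 5e-05 and training runs for 5000 steps instead of 3000, while batch sizes, seed, optimizer settings, and the constant LR schedule are unchanged. As a rough, hypothetical sketch only (the training script is not part of this commit; the generated_from_trainer tag implies the Hugging Face Trainer, and every name below that is not on the card is an assumption), the card's values might map to a TrainingArguments configuration like this:

```python
# Hypothetical reconstruction of the updated hyperparameters as a
# transformers.TrainingArguments config. The real training script is not in
# this commit; output_dir and any argument not listed on the card are guesses.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="HRJD_FinetuneV2_3",     # run name taken from the model card (assumed)
    learning_rate=5e-05,                # raised from 2e-05 in HRJD_FinetuneV2_2
    per_device_train_batch_size=1,      # train_batch_size: 1
    per_device_eval_batch_size=8,       # eval_batch_size: 8
    seed=42,
    adam_beta1=0.9,                     # Adam betas/epsilon as logged on the card
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,                  # lr_scheduler_warmup_ratio: 0.03
    max_steps=5000,                     # training_steps, raised from 3000
)
```

The Adam betas and epsilon simply restate what the auto-generated card logs (they are also the transformers defaults), so as far as this diff shows, only the learning rate and step count actually change between the two runs.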
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e678ed4592bf3956fd97ece138bcbe59beff702749f7ccee8ea684e5f85307ad
+oid sha256:c8dca660e6288dbf9cef19e650de7c03c034255c536fa8fbfb19074934dd5db9
 size 13677706
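The second changed file, adapter_model.bin, is tracked with Git LFS, so only the pointer file appears in the diff: the sha256 oid now references the newly trained weights, while the recorded size stays at 13,677,706 bytes (roughly 13 MB), consistent with a small adapter rather than full weights for an 8B-parameter model. As a hypothetical usage sketch (the repo id below is inferred from the username and model name in this commit and is not confirmed by it), such an adapter would typically be applied to the base model with PEFT:

```python
# Hypothetical sketch: loading the committed adapter on top of its base model.
# "JiAYu1997/HRJD_FinetuneV2_3" is an assumed repo id, not taken from the diff.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "taide/Llama3-TAIDE-LX-8B-Chat-Alpha1"   # base model named in the card
adapter_id = "JiAYu1997/HRJD_FinetuneV2_3"         # assumed adapter repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # loads adapter_model.bin
```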