Aivesa committed
Commit 454f7a5 · verified · 1 Parent(s): afb6bed

End of training

Files changed (1)
  1. README.md +17 -17
README.md CHANGED
@@ -1,14 +1,14 @@
 ---
 library_name: peft
 license: apache-2.0
-base_model: 01-ai/Yi-1.5-9B-Chat-16K
+base_model: unsloth/mistral-7b-instruct-v0.2
 tags:
 - axolotl
 - generated_from_trainer
 datasets:
-- Aivesa/dataset_042b5f5f-32d6-47b9-b2e4-5373647ef8f2
+- Aivesa/dataset_cb3acbb3-babd-4857-af0b-0acdfd981c3f
 model-index:
-- name: 32546f09-c053-4243-8a22-3c00ead461eb
+- name: 6272701b-9847-425c-8ff7-08f7e2fe06c3
   results: []
 ---
 
@@ -21,18 +21,18 @@ should probably proofread and complete it, then remove this comment. -->
 axolotl version: `0.6.0`
 ```yaml
 adapter: lora
-base_model: 01-ai/Yi-1.5-9B-Chat-16K
+base_model: unsloth/mistral-7b-instruct-v0.2
 bf16: auto
 chat_template: llama3
 dataset_prepared_path: /workspace/axolotl/data/prepared
 datasets:
 - ds_type: json
   format: custom
-  path: Aivesa/dataset_042b5f5f-32d6-47b9-b2e4-5373647ef8f2
+  path: Aivesa/dataset_cb3acbb3-babd-4857-af0b-0acdfd981c3f
   type:
-    field_input: my_solu
-    field_instruction: prompt
-    field_output: solution
+    field_input: yl
+    field_instruction: x
+    field_output: yw
 system_format: '{system}'
 system_prompt: ''
 debug: null
@@ -48,7 +48,7 @@ fsdp_config: null
 gradient_accumulation_steps: 4
 gradient_checkpointing: false
 group_by_length: false
-hub_model_id: Aivesa/32546f09-c053-4243-8a22-3c00ead461eb
+hub_model_id: Aivesa/6272701b-9847-425c-8ff7-08f7e2fe06c3
 hub_private_repo: true
 hub_repo: null
 hub_strategy: checkpoint
@@ -88,10 +88,10 @@ use_accelerate: true
 val_set_size: 0.05
 wandb_entity: null
 wandb_mode: online
-wandb_name: 042b5f5f-32d6-47b9-b2e4-5373647ef8f2
+wandb_name: cb3acbb3-babd-4857-af0b-0acdfd981c3f
 wandb_project: Gradients-On-Demand
 wandb_run: your_name
-wandb_runid: 042b5f5f-32d6-47b9-b2e4-5373647ef8f2
+wandb_runid: cb3acbb3-babd-4857-af0b-0acdfd981c3f
 warmup_steps: 10
 weight_decay: 0.0
 xformers_attention: null
@@ -100,11 +100,11 @@ xformers_attention: null
 
 </details><br>
 
-# 32546f09-c053-4243-8a22-3c00ead461eb
+# 6272701b-9847-425c-8ff7-08f7e2fe06c3
 
-This model is a fine-tuned version of [01-ai/Yi-1.5-9B-Chat-16K](https://huggingface.co/01-ai/Yi-1.5-9B-Chat-16K) on the Aivesa/dataset_042b5f5f-32d6-47b9-b2e4-5373647ef8f2 dataset.
+This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2) on the Aivesa/dataset_cb3acbb3-babd-4857-af0b-0acdfd981c3f dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.8782
+- Loss: 1.0659
 
 ## Model description
 
@@ -138,9 +138,9 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 1.0157 | 0.0017 | 3 | 1.0931 |
-| 1.2851 | 0.0034 | 6 | 1.0198 |
-| 0.9428 | 0.0051 | 9 | 0.8782 |
+| 5.3218 | 0.0001 | 3 | 1.3255 |
+| 5.0832 | 0.0002 | 6 | 1.1658 |
+| 3.9279 | 0.0003 | 9 | 1.0659 |
 
 
 ### Framework versions
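
As a usage note beyond the diff itself: a minimal sketch of how the LoRA adapter produced by this run could be loaded with `peft` and `transformers`. The repo ids are taken from the diff above; since the config sets `hub_private_repo: true`, loading the adapter may require Hub authentication, and the prompt and generation settings below are illustrative assumptions, not part of the commit.

```python
# Sketch: load the new base model and apply this run's LoRA adapter.
# Repo ids come from the updated README; everything else is illustrative.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/mistral-7b-instruct-v0.2"
adapter_id = "Aivesa/6272701b-9847-425c-8ff7-08f7e2fe06c3"  # private repo; may need auth

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, adapter_id)  # applies the LoRA weights

# Illustrative generation; the training run used its own chat template.
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```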