Paladiso committed
Commit d161b5b · verified · 1 Parent(s): 28dd978

End of training

Files changed (1)
  1. README.md +19 -16
README.md CHANGED
@@ -1,13 +1,13 @@
 ---
 library_name: peft
-base_model: katuni4ka/tiny-random-qwen1.5-moe
+base_model: Korabbit/llama-2-ko-7b
 tags:
 - axolotl
 - generated_from_trainer
 datasets:
-- Paladiso/dataset_5e3967ba-d17c-4d93-91ec-a23620abb5dc
+- Paladiso/dataset_6f876187-6d0a-4410-b9c7-16623cec1fad
 model-index:
-- name: 52d5acd4-4d39-43e2-8519-fe17241200b6
+- name: bb558fe8-9262-4aef-9da8-8c8ad8a19526
   results: []
 ---
 
@@ -20,17 +20,18 @@ should probably proofread and complete it, then remove this comment. -->
 axolotl version: `0.6.0`
 ```yaml
 adapter: lora
-base_model: katuni4ka/tiny-random-qwen1.5-moe
+base_model: Korabbit/llama-2-ko-7b
 bf16: auto
 chat_template: llama3
 dataset_prepared_path: /workspace/axolotl/data/prepared
 datasets:
 - ds_type: json
   format: custom
-  path: Paladiso/dataset_5e3967ba-d17c-4d93-91ec-a23620abb5dc
+  path: Paladiso/dataset_6f876187-6d0a-4410-b9c7-16623cec1fad
   type:
-    field_instruction: instruction
-    field_output: output
+    field_input: Description
+    field_instruction: Prompt
+    field_output: GT
 system_format: '{system}'
 system_prompt: ''
 debug: null
@@ -46,7 +47,7 @@ fsdp_config: null
 gradient_accumulation_steps: 4
 gradient_checkpointing: false
 group_by_length: false
-hub_model_id: Paladiso/52d5acd4-4d39-43e2-8519-fe17241200b6
+hub_model_id: Paladiso/bb558fe8-9262-4aef-9da8-8c8ad8a19526
 hub_private_repo: true
 hub_repo: null
 hub_strategy: checkpoint
@@ -77,6 +78,8 @@ sample_packing: false
 save_safetensors: true
 saves_per_epoch: 4
 sequence_len: 512
+special_tokens:
+  pad_token: </s>
 strict: false
 tf32: false
 tokenizer_type: AutoTokenizer
@@ -86,10 +89,10 @@ use_accelerate: true
 val_set_size: 0.05
 wandb_entity: null
 wandb_mode: online
-wandb_name: 5e3967ba-d17c-4d93-91ec-a23620abb5dc
+wandb_name: 6f876187-6d0a-4410-b9c7-16623cec1fad
 wandb_project: Gradients-On-Demand
 wandb_run: your_name
-wandb_runid: 5e3967ba-d17c-4d93-91ec-a23620abb5dc
+wandb_runid: 6f876187-6d0a-4410-b9c7-16623cec1fad
 warmup_steps: 10
 weight_decay: 0.0
 xformers_attention: null
@@ -98,11 +101,11 @@ xformers_attention: null
 
 </details><br>
 
-# 52d5acd4-4d39-43e2-8519-fe17241200b6
+# bb558fe8-9262-4aef-9da8-8c8ad8a19526
 
-This model is a fine-tuned version of [katuni4ka/tiny-random-qwen1.5-moe](https://huggingface.co/katuni4ka/tiny-random-qwen1.5-moe) on the Paladiso/dataset_5e3967ba-d17c-4d93-91ec-a23620abb5dc dataset.
+This model is a fine-tuned version of [Korabbit/llama-2-ko-7b](https://huggingface.co/Korabbit/llama-2-ko-7b) on the Paladiso/dataset_6f876187-6d0a-4410-b9c7-16623cec1fad dataset.
 It achieves the following results on the evaluation set:
-- Loss: 11.9366
+- Loss: 0.7600
 
 ## Model description
 
@@ -136,9 +139,9 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 11.9333 | 0.0002 | 3 | 11.9376 |
-| 11.9358 | 0.0004 | 6 | 11.9372 |
-| 11.9335 | 0.0007 | 9 | 11.9366 |
+| 6.9483 | 0.0110 | 3 | 6.5820 |
+| 4.8242 | 0.0220 | 6 | 3.0552 |
+| 1.0649 | 0.0329 | 9 | 0.7600 |
 
 
 ### Framework versions
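
The updated card trains a LoRA adapter (`library_name: peft`, `adapter: lora`) on `Korabbit/llama-2-ko-7b` and pushes it to `hub_model_id: Paladiso/bb558fe8-9262-4aef-9da8-8c8ad8a19526`; the JSON dataset is read with a custom mapping, so each record presumably carries `Prompt`, `Description`, and `GT` keys used as instruction, input, and output. Below is a minimal sketch of loading that adapter with the standard `transformers`/`peft` API; the repository ids come from the config above, while the dtype, device, and prompt are illustrative assumptions, not something the card specifies.

```python
# Minimal sketch: attach the LoRA adapter from this run to its base model.
# Repo ids are taken from the config above; dtype/device/prompt are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Korabbit/llama-2-ko-7b"                            # base_model in the config
adapter_id = "Paladiso/bb558fe8-9262-4aef-9da8-8c8ad8a19526"  # hub_model_id (private per the config, so a HF token may be required)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # the run used bf16: auto; adjust for your hardware
    device_map="auto",           # requires accelerate
)
model = PeftModel.from_pretrained(base, adapter_id)  # load the LoRA weights on top of the base model
model.eval()

prompt = "Describe the image in one sentence."  # hypothetical prompt in the dataset's style
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```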