05/24/2024 18:37:53 - INFO - transformers.tokenization_utils_base - loading file tokenizer.json from cache at C:\Users\itos\.cache\huggingface\hub\models--meta-llama--Meta-Llama-3-8B\snapshots\62bd457b6fe961a42a631306577e622c83876cb6\tokenizer.json
05/24/2024 18:37:53 - INFO - transformers.tokenization_utils_base - loading file added_tokens.json from cache at None
05/24/2024 18:37:53 - INFO - transformers.tokenization_utils_base - loading file special_tokens_map.json from cache at C:\Users\itos\.cache\huggingface\hub\models--meta-llama--Meta-Llama-3-8B\snapshots\62bd457b6fe961a42a631306577e622c83876cb6\special_tokens_map.json
05/24/2024 18:37:53 - INFO - transformers.tokenization_utils_base - loading file tokenizer_config.json from cache at C:\Users\itos\.cache\huggingface\hub\models--meta-llama--Meta-Llama-3-8B\snapshots\62bd457b6fe961a42a631306577e622c83876cb6\tokenizer_config.json
05/24/2024 18:37:53 - WARNING - transformers.tokenization_utils_base - Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
05/24/2024 18:37:53 - INFO - llamafactory.data.template - Add pad token: <|end_of_text|>
05/24/2024 18:37:53 - INFO - llamafactory.data.loader - Loading dataset itos_data_ko.json...
05/24/2024 18:37:54 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at C:\Users\itos\.cache\huggingface\hub\models--meta-llama--Meta-Llama-3-8B\snapshots\62bd457b6fe961a42a631306577e622c83876cb6\config.json
05/24/2024 18:37:54 - INFO - transformers.configuration_utils - Model config LlamaConfig {
  "_name_or_path": "meta-llama/Meta-Llama-3-8B",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 128000,
  "eos_token_id": 128001,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 8192,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 500000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.40.2",
  "use_cache": true,
  "vocab_size": 128256
}
05/24/2024 18:37:54 - INFO - transformers.modeling_utils - loading weights file model.safetensors from cache at C:\Users\itos\.cache\huggingface\hub\models--meta-llama--Meta-Llama-3-8B\snapshots\62bd457b6fe961a42a631306577e622c83876cb6\model.safetensors.index.json
05/24/2024 18:37:54 - INFO - transformers.modeling_utils - Instantiating LlamaForCausalLM model under default dtype torch.float16.
05/24/2024 18:37:54 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig {
  "bos_token_id": 128000,
  "eos_token_id": 128001
}
05/24/2024 18:38:27 - INFO - transformers.modeling_utils - All model checkpoint weights were used when initializing LlamaForCausalLM.
05/24/2024 18:38:27 - INFO - transformers.modeling_utils - All the weights of LlamaForCausalLM were initialized from the model checkpoint at meta-llama/Meta-Llama-3-8B.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlamaForCausalLM for predictions without further training.
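For orientation, the following is a minimal sketch of the setup the log above reports, written with plain transformers rather than the LLaMA-Factory internals. The model name, the pad token <|end_of_text|>, and the float16 default dtype are taken from the log; the variable names and the explicit pad-token assignment are illustrative assumptions.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3-8B"

# Tokenizer files are fetched from the Hub cache, as in the log.
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    # The log shows "<|end_of_text|>" being registered as the pad token.
    tokenizer.pad_token = "<|end_of_text|>"

# The log reports the model being instantiated under default dtype torch.float16.
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)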
05/24/2024 18:38:27 - INFO - transformers.generation.configuration_utils - loading configuration file generation_config.json from cache at C:\Users\itos\.cache\huggingface\hub\models--meta-llama--Meta-Llama-3-8B\snapshots\62bd457b6fe961a42a631306577e622c83876cb6\generation_config.json
05/24/2024 18:38:27 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig {
  "bos_token_id": 128000,
  "do_sample": true,
  "eos_token_id": 128001,
  "max_length": 4096,
  "temperature": 0.6,
  "top_p": 0.9
}
05/24/2024 18:38:27 - INFO - llamafactory.model.utils.checkpointing - Gradient checkpointing enabled.
05/24/2024 18:38:27 - INFO - llamafactory.model.utils.attention - Using torch SDPA for faster training and inference.
05/24/2024 18:38:27 - INFO - llamafactory.model.adapter - Upcasting trainable params to float32.
05/24/2024 18:38:27 - INFO - llamafactory.model.adapter - Fine-tuning method: LoRA
05/24/2024 18:38:27 - INFO - llamafactory.model.loader - trainable params: 3407872 || all params: 8033669120 || trainable%: 0.0424
05/24/2024 18:38:27 - INFO - transformers.trainer - Using auto half precision backend
05/24/2024 18:38:28 - INFO - transformers.trainer - ***** Running training *****
05/24/2024 18:38:28 - INFO - transformers.trainer - Num examples = 338
05/24/2024 18:38:28 - INFO - transformers.trainer - Num Epochs = 3
05/24/2024 18:38:28 - INFO - transformers.trainer - Instantaneous batch size per device = 2
05/24/2024 18:38:28 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 16
05/24/2024 18:38:28 - INFO - transformers.trainer - Gradient Accumulation steps = 8
05/24/2024 18:38:28 - INFO - transformers.trainer - Total optimization steps = 63
05/24/2024 18:38:28 - INFO - transformers.trainer - Number of trainable parameters = 3,407,872
05/24/2024 18:42:07 - INFO - llamafactory.extras.callbacks - {'loss': 2.9767, 'learning_rate': 4.9227e-05, 'epoch': 0.24}
05/24/2024 18:46:13 - INFO - llamafactory.extras.callbacks - {'loss': 2.9754, 'learning_rate': 4.6956e-05, 'epoch': 0.47}
05/24/2024 18:50:53 - INFO - llamafactory.extras.callbacks - {'loss': 2.9723, 'learning_rate': 4.3326e-05, 'epoch': 0.71}
05/24/2024 18:55:14 - INFO - llamafactory.extras.callbacks - {'loss': 2.7976, 'learning_rate': 3.8564e-05, 'epoch': 0.95}
05/24/2024 18:59:43 - INFO - llamafactory.extras.callbacks - {'loss': 2.7409, 'learning_rate': 3.2962e-05, 'epoch': 1.18}
05/24/2024 19:03:53 - INFO - llamafactory.extras.callbacks - {'loss': 2.4753, 'learning_rate': 2.6868e-05, 'epoch': 1.42}
05/24/2024 19:07:57 - INFO - llamafactory.extras.callbacks - {'loss': 2.5400, 'learning_rate': 2.0659e-05, 'epoch': 1.66}
05/24/2024 19:11:55 - INFO - llamafactory.extras.callbacks - {'loss': 2.4686, 'learning_rate': 1.4718e-05, 'epoch': 1.89}
05/24/2024 19:15:51 - INFO - llamafactory.extras.callbacks - {'loss': 2.4011, 'learning_rate': 9.4128e-06, 'epoch': 2.13}
05/24/2024 19:19:53 - INFO - llamafactory.extras.callbacks - {'loss': 2.4184, 'learning_rate': 5.0717e-06, 'epoch': 2.37}
05/24/2024 19:23:54 - INFO - llamafactory.extras.callbacks - {'loss': 2.3458, 'learning_rate': 1.9631e-06, 'epoch': 2.60}
05/24/2024 19:27:55 - INFO - llamafactory.extras.callbacks - {'loss': 2.3665, 'learning_rate': 2.7923e-07, 'epoch': 2.84}
05/24/2024 19:30:27 - INFO - transformers.trainer - Training completed. Do not forget to share your model on huggingface.co/models =)
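As a quick sanity check on the run summary above, the arithmetic below reproduces the reported numbers from the logged hyperparameters. The inputs are copied from the log; the step formula is only an approximation of the Trainer's internal calculation (single device assumed).

num_examples = 338
per_device_batch = 2
grad_accum = 8
epochs = 3

effective_batch = per_device_batch * grad_accum    # 16, matches "Total train batch size"
steps_per_epoch = num_examples // effective_batch  # 21 optimizer updates per epoch
total_steps = steps_per_epoch * epochs             # 63, matches "Total optimization steps"

trainable_params = 3_407_872
all_params = 8_033_669_120
trainable_pct = round(100 * trainable_params / all_params, 4)  # 0.0424, matches "trainable%"

print(total_steps, trainable_pct)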
05/24/2024 19:30:27 - INFO - transformers.trainer - Saving model checkpoint to saves\LLaMA3-8B\lora\train_2024-05-24-18-35-12
05/24/2024 19:30:27 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at C:\Users\itos\.cache\huggingface\hub\models--meta-llama--Meta-Llama-3-8B\snapshots\62bd457b6fe961a42a631306577e622c83876cb6\config.json
05/24/2024 19:30:27 - INFO - transformers.configuration_utils - Model config LlamaConfig {
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 128000,
  "eos_token_id": 128001,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 8192,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 500000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.40.2",
  "use_cache": true,
  "vocab_size": 128256
}
05/24/2024 19:30:27 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves\LLaMA3-8B\lora\train_2024-05-24-18-35-12\tokenizer_config.json
05/24/2024 19:30:27 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves\LLaMA3-8B\lora\train_2024-05-24-18-35-12\special_tokens_map.json
05/24/2024 19:30:27 - INFO - transformers.modelcard - Dropping the following result as it does not have all the necessary fields: {'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}}
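For reference, here is a hedged sketch of how the LoRA adapter saved above could be loaded back for inference with PEFT. The adapter path comes from the "Saving model checkpoint to" line and the sampling settings mirror the generation_config reported earlier; the prompt, the decoding call, and loading the tokenizer from the adapter directory are illustrative assumptions rather than the exact LLaMA-Factory export flow.

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "meta-llama/Meta-Llama-3-8B"
adapter_dir = r"saves\LLaMA3-8B\lora\train_2024-05-24-18-35-12"

# The tokenizer config and special tokens map were saved alongside the adapter.
tokenizer = AutoTokenizer.from_pretrained(adapter_dir)

# Load the frozen base weights, then attach the trained LoRA adapter.
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_dir)

# Sampling parameters follow the generation_config.json logged above.
inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.6, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))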