07/30/2024 03:45:41 - INFO - llamafactory.hparams.parser - Process rank: 1, device: cuda:1, n_gpu: 1, distributed training: True, compute dtype: None
07/30/2024 03:45:41 - INFO - llamafactory.hparams.parser - Process rank: 2, device: cuda:2, n_gpu: 1, distributed training: True, compute dtype: None
07/30/2024 03:45:41 - INFO - llamafactory.hparams.parser - Process rank: 3, device: cuda:3, n_gpu: 1, distributed training: True, compute dtype: None
07/30/2024 03:45:41 - INFO - llamafactory.hparams.parser - Process rank: 4, device: cuda:4, n_gpu: 1, distributed training: True, compute dtype: None
07/30/2024 03:45:41 - INFO - llamafactory.hparams.parser - Process rank: 6, device: cuda:6, n_gpu: 1, distributed training: True, compute dtype: None
[INFO|tokenization_utils_base.py:2287] 2024-07-30 03:45:41,182 >> loading file special_tokens_map.json
[INFO|tokenization_utils_base.py:2287] 2024-07-30 03:45:41,182 >> loading file tokenizer_config.json
[INFO|tokenization_utils_base.py:2533] 2024-07-30 03:45:41,444 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
[INFO|template.py:270] 2024-07-30 03:45:41,444 >> Replace eos token: <|eot_id|>
[INFO|loader.py:52] 2024-07-30 03:45:41,445 >> Loading dataset 0716_truthfulqa_benchmark_test_2.json...
07/30/2024 03:45:41 - INFO - llamafactory.hparams.parser - Process rank: 7, device: cuda:7, n_gpu: 1, distributed training: True, compute dtype: None
07/30/2024 03:45:41 - INFO - llamafactory.hparams.parser - Process rank: 5, device: cuda:5, n_gpu: 1, distributed training: True, compute dtype: None
07/30/2024 03:45:41 - INFO - llamafactory.data.template - Replace eos token: <|eot_id|>
07/30/2024 03:45:41 - INFO - llamafactory.data.template - Replace eos token: <|eot_id|>
07/30/2024 03:45:41 - INFO - llamafactory.data.template - Replace eos token: <|eot_id|>
07/30/2024 03:45:41 - INFO - llamafactory.data.template - Replace eos token: <|eot_id|>
07/30/2024 03:45:41 - INFO - llamafactory.data.template - Replace eos token: <|eot_id|>
07/30/2024 03:45:41 - INFO - llamafactory.data.template - Replace eos token: <|eot_id|>
07/30/2024 03:45:41 - INFO - llamafactory.data.template - Replace eos token: <|eot_id|>
07/30/2024 03:45:43 - INFO - llamafactory.data.loader - Loading dataset 0716_truthfulqa_benchmark_test_2.json...
07/30/2024 03:45:43 - INFO - llamafactory.data.loader - Loading dataset 0716_truthfulqa_benchmark_test_2.json...
07/30/2024 03:45:43 - INFO - llamafactory.data.loader - Loading dataset 0716_truthfulqa_benchmark_test_2.json...
07/30/2024 03:45:43 - INFO - llamafactory.data.loader - Loading dataset 0716_truthfulqa_benchmark_test_2.json...
07/30/2024 03:45:43 - INFO - llamafactory.data.loader - Loading dataset 0716_truthfulqa_benchmark_test_2.json...
07/30/2024 03:45:43 - INFO - llamafactory.data.loader - Loading dataset 0716_truthfulqa_benchmark_test_2.json...
07/30/2024 03:45:43 - INFO - llamafactory.data.loader - Loading dataset 0716_truthfulqa_benchmark_test_2.json...
[INFO|configuration_utils.py:731] 2024-07-30 03:45:46,553 >> loading configuration file saves/LLaMA3.1-8B-Chat/full/train_2024-07-30-02-47-53_llama3.1_truthqa_bench2/config.json
[INFO|configuration_utils.py:800] 2024-07-30 03:45:46,554 >> Model config LlamaConfig {
  "_name_or_path": "saves/LLaMA3.1-8B-Chat/full/train_2024-07-30-02-47-53_llama3.1_truthqa_bench2",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 128000,
  "eos_token_id": [
    128001,
    128008,
    128009
  ],
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 131072,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": {
    "factor": 8.0,
    "high_freq_factor": 4.0,
    "low_freq_factor": 1.0,
    "original_max_position_embeddings": 8192,
    "rope_type": "llama3"
  },
  "rope_theta": 500000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.43.3",
  "use_cache": false,
  "vocab_size": 128256
}

[INFO|patcher.py:81] 2024-07-30 03:45:46,555 >> Using KV cache for faster generation.
[INFO|modeling_utils.py:3631] 2024-07-30 03:45:46,580 >> loading weights file saves/LLaMA3.1-8B-Chat/full/train_2024-07-30-02-47-53_llama3.1_truthqa_bench2/model.safetensors.index.json
[INFO|modeling_utils.py:1572] 2024-07-30 03:45:46,580 >> Instantiating LlamaForCausalLM model under default dtype torch.bfloat16.
[INFO|configuration_utils.py:1038] 2024-07-30 03:45:46,582 >> Generate config GenerationConfig {
  "bos_token_id": 128000,
  "eos_token_id": [
    128001,
    128008,
    128009
  ]
}

07/30/2024 03:45:46 - INFO - llamafactory.model.patcher - Using KV cache for faster generation.
07/30/2024 03:45:46 - INFO - llamafactory.model.patcher - Using KV cache for faster generation.
07/30/2024 03:45:46 - INFO - llamafactory.model.patcher - Using KV cache for faster generation.
07/30/2024 03:45:46 - INFO - llamafactory.model.patcher - Using KV cache for faster generation.
07/30/2024 03:45:46 - INFO - llamafactory.model.patcher - Using KV cache for faster generation.
07/30/2024 03:45:46 - INFO - llamafactory.model.patcher - Using KV cache for faster generation.
07/30/2024 03:45:46 - INFO - llamafactory.model.patcher - Using KV cache for faster generation.
[INFO|modeling_utils.py:4463] 2024-07-30 03:45:50,818 >> All model checkpoint weights were used when initializing LlamaForCausalLM.
[INFO|modeling_utils.py:4471] 2024-07-30 03:45:50,818 >> All the weights of LlamaForCausalLM were initialized from the model checkpoint at saves/LLaMA3.1-8B-Chat/full/train_2024-07-30-02-47-53_llama3.1_truthqa_bench2.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlamaForCausalLM for predictions without further training.
[INFO|configuration_utils.py:991] 2024-07-30 03:45:50,822 >> loading configuration file saves/LLaMA3.1-8B-Chat/full/train_2024-07-30-02-47-53_llama3.1_truthqa_bench2/generation_config.json
[INFO|configuration_utils.py:1038] 2024-07-30 03:45:50,822 >> Generate config GenerationConfig {
  "bos_token_id": 128000,
  "do_sample": true,
  "eos_token_id": [
    128001,
    128008,
    128009
  ],
  "temperature": 0.6,
  "top_p": 0.9
}

[INFO|attention.py:84] 2024-07-30 03:45:50,828 >> Using torch SDPA for faster training and inference.
[INFO|loader.py:196] 2024-07-30 03:45:50,833 >> all params: 8,030,261,248
[INFO|trainer.py:3819] 2024-07-30 03:45:50,942 >> ***** Running Prediction *****
[INFO|trainer.py:3821] 2024-07-30 03:45:50,942 >> Num examples = 1253
[INFO|trainer.py:3824] 2024-07-30 03:45:50,942 >> Batch size = 2
[WARNING|logging.py:328] 2024-07-30 03:45:51,611 >> We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43.
Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
07/30/2024 03:45:52 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/30/2024 03:45:52 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/30/2024 03:45:52 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/30/2024 03:45:52 - INFO - llamafactory.model.loader - all params: 8,030,261,248
07/30/2024 03:45:52 - INFO - llamafactory.model.loader - all params: 8,030,261,248
07/30/2024 03:45:52 - INFO - llamafactory.model.loader - all params: 8,030,261,248
07/30/2024 03:45:52 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/30/2024 03:45:52 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/30/2024 03:45:52 - INFO - llamafactory.model.loader - all params: 8,030,261,248
07/30/2024 03:45:52 - INFO - llamafactory.model.loader - all params: 8,030,261,248
07/30/2024 03:45:52 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/30/2024 03:45:52 - INFO - llamafactory.model.loader - all params: 8,030,261,248
07/30/2024 03:45:52 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/30/2024 03:45:52 - INFO - llamafactory.model.loader - all params: 8,030,261,248
07/30/2024 03:45:52 - WARNING - transformers.models.llama.modeling_llama - We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43.
Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
07/30/2024 03:45:52 - WARNING - transformers.models.llama.modeling_llama - We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43.
Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
07/30/2024 03:45:52 - WARNING - transformers.models.llama.modeling_llama - We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43.
Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
07/30/2024 03:45:52 - WARNING - transformers.models.llama.modeling_llama - We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43.
Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
07/30/2024 03:45:52 - WARNING - transformers.models.llama.modeling_llama - We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43.
Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
07/30/2024 03:45:52 - WARNING - transformers.models.llama.modeling_llama - We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43.
Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
07/30/2024 03:45:52 - WARNING - transformers.models.llama.modeling_llama - We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43.
Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
[INFO|trainer.py:127] 2024-07-30 03:45:59,801 >> Saving prediction results to saves/LLaMA3.1-8B-Chat/full/eval_2024-07-30-02-47-53_llama3.1_truthqa_bench2/generated_predictions.jsonl