/var/spool/slurmd/job336337/slurm_script: line 12: activate: No such file or directory
W0217 20:17:08.481000 3089289 .local/lib/python3.10/site-packages/torch/distributed/run.py:793]
W0217 20:17:08.481000 3089289 .local/lib/python3.10/site-packages/torch/distributed/run.py:793] *****************************************
W0217 20:17:08.481000 3089289 .local/lib/python3.10/site-packages/torch/distributed/run.py:793] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0217 20:17:08.481000 3089289 .local/lib/python3.10/site-packages/torch/distributed/run.py:793] *****************************************
[... the same OMP_NUM_THREADS warning is emitted by each of the other 15 torchrun launcher processes (one per node); those duplicates are omitted ...]
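The warning above suggests tuning OMP_NUM_THREADS per process. A minimal sketch of one way to do that from inside the training script; note that torchrun has already exported OMP_NUM_THREADS=1 into each worker's environment, so the override must happen before torch is imported, and the value 8 below is an assumption (cores_per_node / ranks_per_node is a common starting point):

```python
import os

# torchrun already set OMP_NUM_THREADS=1 in this process, so assign
# (not setdefault) before torch reads the variable at import time.
os.environ["OMP_NUM_THREADS"] = "8"  # assumed value; tune per workload

import torch  # noqa: E402  (imported after the env override on purpose)

print(torch.get_num_threads())  # reflects the override
```

Exporting the variable in the SLURM script before invoking torchrun achieves the same effect and also silences the warning.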
PyTorch: setting up devices
[... "PyTorch: setting up devices" is printed 128 times in total, once per rank; the remaining 127 duplicates are omitted ...]
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
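The "model of type qwen2_5_vl to instantiate a model of type llava_qwen" warning above arises because the checkpoint's config.json declares model_type "qwen2_5_vl" while it is being loaded through the training repo's custom LlavaQwenConfig / LlavaQwenForCausalLM classes. A hedged illustration of the mismatch; the custom class comes from the training repo and is only referenced in a comment here:

```python
from transformers import AutoConfig

# The checkpoint's own config declares model_type "qwen2_5_vl" ...
cfg = AutoConfig.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
print(cfg.model_type)  # -> "qwen2_5_vl"

# ... so loading it through a class registered under a different
# model_type ("llava_qwen") triggers the warning but still proceeds,
# reusing whatever weights match:
# model = LlavaQwenForCausalLM.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
```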
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
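The Flash Attention 2.0 warning above is benign as long as the model reaches the GPU before the first forward pass, since FA2 kernels only run on CUDA. A minimal sketch of the standard transformers pattern, shown with the stock Qwen2_5_VLForConditionalGeneration class rather than the repo's custom LlavaQwenForCausalLM:

```python
import torch
from transformers import Qwen2_5_VLForConditionalGeneration

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",
    torch_dtype=torch.bfloat16,               # matches the log's default dtype
    attn_implementation="flash_attention_2",  # what triggers the CPU warning
)
model.to("cuda")  # move weights to the GPU before any forward pass
```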
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 2.29it/s]loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
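This Flash Attention 2.0 warning is benign at load time: FA2 kernels only run on CUDA, so a model materialized on CPU simply has to be moved before its first forward pass. A hedged sketch using the stock transformers class (this run actually goes through the custom LlavaQwenForCausalLM):

    import torch
    from transformers import Qwen2_5_VLForConditionalGeneration

    model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
        "Qwen/Qwen2.5-VL-7B-Instruct",
        torch_dtype=torch.bfloat16,
        attn_implementation="flash_attention_2",
    )
    model.to("cuda")  # what the warning asks for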
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
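The generate config carries only the special-token ids; every other decoding option falls back to defaults unless overridden at generate() time. Equivalent construction by hand:

    from transformers import GenerationConfig

    gen_cfg = GenerationConfig(bos_token_id=151643, eos_token_id=151645)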
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
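For scale, the vision tower's spatial_patch_size of 14 (from the vision_config above) means a 448x448 input yields (448 // 14) ** 2 = 1024 patches before any token merging the Qwen2.5-VL ViT applies downstream (the merge ratio is not part of this dump):

    h = w = 448
    patches = (h // 14) * (w // 14)
    print(patches)  # 1024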
Loading checkpoint shards:   0%|          | 0/5 [00:00<?, ?it/s]
Loading checkpoint shards:  20%|██        | 1/5 [00:00<00:01,  2.37it/s]
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 2.45it/s]loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 2.54it/s]loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 2.38it/s]Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.68it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.19it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:01, 2.55it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.15it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.18it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.08it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.51it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.07it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.96it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.25it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 2.30it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.03it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:01, 2.28it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.74it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.73it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.72it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.50it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.69it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.55it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.58it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.63it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.57it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.96it/s]
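
These bars track transformers reading the 5-file safetensors checkpoint named by the model.safetensors.index.json above, one shard per tick, with one bar per worker process. A rough sketch of the underlying mechanism, with the cache path elided as in the log and `snapshot_dir` a placeholder:

```python
# Rough sketch of what each "Loading checkpoint shards" tick corresponds to:
# the index JSON maps every tensor name to one of 5 shard files, and the
# shards are read sequentially. Assumes `safetensors` is installed.
import json, os
from safetensors.torch import load_file

snapshot_dir = "..."  # the cached snapshot directory shown in the log
with open(os.path.join(snapshot_dir, "model.safetensors.index.json")) as f:
    weight_map = json.load(f)["weight_map"]

state_dict = {}
for shard in sorted(set(weight_map.values())):  # 5 shards -> 5 progress ticks
    state_dict.update(load_file(os.path.join(snapshot_dir, shard)))
```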
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:01, 2.62it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.71it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 2.99it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.03it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.24it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.24it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.03it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.07it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.8Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.15it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.05it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.33it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.35it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.47it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.06it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.47it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.16it/s]
Loadi
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 2.39it/s]loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 2.20it/s]You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 2.26it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 2.87it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 2.73it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 2.22it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 2.87it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.36it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 2.86it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.16it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 2.44it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.59it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.70it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.73it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.77it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 4.26it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.71it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.76it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.91it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.93it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.92it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 4.57it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 4.01it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.98it/s]
Loading checkpoint shards
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.20it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.32it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.21it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.35it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.14it/s]
[interleaved tqdm output from multiple ranks collapsed; each rank loads 5 checkpoint shards at roughly 3-5 it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.93it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
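A minimal sketch (not from this log's codebase) of how the decode settings dumped above map onto the transformers API. Note that with top_k=1 the sampling effectively collapses to greedy decoding even though do_sample is true; the token ids are copied from the dump.

from transformers import GenerationConfig

# Reconstruction of the GenerationConfig printed above (sketch only).
gen_cfg = GenerationConfig(
    do_sample=True,
    temperature=0.1,
    top_k=1,                      # top_k=1 makes sampling effectively greedy
    top_p=0.001,
    repetition_penalty=1.05,
    bos_token_id=151643,
    eos_token_id=[151645, 151643],
    pad_token_id=151643,
)

# Usage, assuming `model` and `inputs` already exist:
# outputs = model.generate(**inputs, generation_config=gen_cfg)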
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
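The warning above is the standard transformers deprecation notice; opting into the fast processor explicitly silences it. A minimal sketch, assuming the processor is loaded through AutoImageProcessor:

from transformers import AutoImageProcessor

# Explicit use_fast=True avoids the v4.48 behavior-change warning; as the log
# message notes, outputs may differ slightly from the slow processor.
image_processor = AutoImageProcessor.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",
    use_fast=True,
)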
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
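The min_pixels/max_pixels bounds in the dump above translate directly into a visual-token budget: each 14x14 pixel patch is one vision cell, and merge_size=2 folds a 2x2 block of cells into one token. A small arithmetic sketch with the values from the dump:

# Visual-token budget implied by the image processor config above (sketch).
patch_size = 14          # pixels per patch edge
merge_size = 2           # 2x2 patches merged into one visual token
min_pixels = 3136        # = 56 * 56
max_pixels = 12845056    # = 3584 * 3584

def visual_tokens(pixels: int) -> int:
    patches = pixels // (patch_size * patch_size)
    return patches // (merge_size * merge_size)

print(visual_tokens(min_pixels))   # 4     tokens for the smallest image
print(visual_tokens(max_pixels))   # 16384 tokens for the largest image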
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
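The files above are the standard Qwen2 BPE tokenizer set; added_tokens.json, special_tokens_map.json, and chat_template.jinja resolve to None because those standalone files are absent for this checkpoint (the special tokens and chat template live in tokenizer_config.json). A minimal loading sketch:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
ids = tokenizer("hello world").input_ids  # plain BPE encode, no chat template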
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
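This warning appears because a checkpoint saved as model_type qwen2_5_vl is being loaded into the project's llava_qwen wrapper class. A hypothetical sketch of that cross-type instantiation; the import path is assumed (LLaVA-style layout), not confirmed by this log:

import torch
# Assumed import path for the custom wrapper class named in this log.
from llava.model.language_model.llava_qwen import LlavaQwenForCausalLM

model = LlavaQwenForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",        # checkpoint of type qwen2_5_vl
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)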
Model config LlavaQwenConfig {
  "architectures": [
    "Qwen2_5_VLForConditionalGeneration"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 151643,
  "eos_token_id": 151645,
  "hidden_act": "silu",
  "hidden_size": 3584,
  "image_token_id": 151655,
  "initializer_range": 0.02,
  "intermediate_size": 18944,
  "max_position_embeddings": 128000,
  "max_window_layers": 28,
  "model_type": "llava_qwen",
  "num_attention_heads": 28,
  "num_hidden_layers": 28,
  "num_key_value_heads": 4,
  "rms_norm_eps": 1e-06,
  "rope_scaling": {
    "mrope_section": [
      16,
      24,
      24
    ],
    "rope_type": "default",
    "type": "default"
  },
  "rope_theta": 1000000.0,
  "sliding_window": 32768,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.49.0.dev0",
  "use_cache": true,
  "use_sliding_window": false,
  "video_token_id": 151656,
  "vision_config": {
    "hidden_size": 1280,
    "in_chans": 3,
    "model_type": "qwen2_5_vl",
    "spatial_patch_size": 14,
    "tokens_per_second": 2
  },
  "vision_end_token_id": 151653,
  "vision_start_token_id": 151652,
  "vision_token_id": 151654,
  "vocab_size": 152064
}
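A few derived quantities worth reading off the config above (pure arithmetic, values copied from the dump):

hidden_size = 3584
num_attention_heads = 28
num_key_value_heads = 4

head_dim = hidden_size // num_attention_heads          # 128 per attention head
gqa_group = num_attention_heads // num_key_value_heads # 7 query heads share each KV head (GQA)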
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
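This is the standard transformers notice that Flash Attention 2 kernels only run on CUDA. A minimal sketch of the GPU placement it asks for, using the stock Qwen2.5-VL class as a stand-in (the log's actual class is the custom LlavaQwenForCausalLM):

import torch
from transformers import Qwen2_5_VLForConditionalGeneration

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)
model.to("cuda")  # FA2 is CUDA-only; moving the model to GPU resolves the warning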
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.85it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.57it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.71it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.86it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.74it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.37it/s]loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Loading checkpoint shards:  20%|██        | 1/5 [00:00<00:01, 2.99it/s]
Loading checkpoint shards:  40%|████      | 2/5 [00:00<00:00, 4.08it/s]
Loading checkpoint shards:  60%|██████    | 3/5 [00:00<00:00, 4.42it/s]
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
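added_tokens.json, special_tokens_map.json, and chat_template.jinja resolve to None here, typically because this checkpoint carries that information inside tokenizer.json / tokenizer_config.json rather than as separate files; the tokenizer still loads normally. A sketch of the equivalent explicit load (the cache directory is inferred from the paths above):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",
    cache_dir="/fsx_0/user/zhaojiang/models/hub",  # hub cache root used in this run
)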
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.29it/s]loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
: 20%|██ | 1/5 [00:00<00:01, 3.12it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.25it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 4.30it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 4.24it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 3.15it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.77it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.69it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.84it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.98it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.69it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 4.61it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 4.54it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00,loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.80it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.54it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
point shards: 60%|██████ | 3/5 [00:00<00:00, 3.92it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.44it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 4.17it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.98it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.25it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.97it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.89it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.93it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.71it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 4.00it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.54it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.20it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.39it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.00it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.94it/s]loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.59it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.35it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.37it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.01it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.20it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.99it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.91it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 4.01it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.93it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.70it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.10it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.91it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.73it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.33it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.53it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.69it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.27it/s]loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.69it/s]loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 4.12it/s]loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.00it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.82it/s]
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.80it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.37it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.92it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.60it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
: 20%|██ | 1/5 [00:00<00:01, 2.98it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.26it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.97it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.84it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.83it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.73it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.63it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.71it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.61it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.45it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 4.31it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 4.30it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.11it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.76it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 4.03it/s]loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
: 20%|██ | 1/5 [00:00<00:01, 3.30it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.52it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 4.30it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 4.23it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.66it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.76it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.64it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.68it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 4.04it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.84it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 4.63it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 4.42it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.92it/s]Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.04it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.86it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.23it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.15it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.89it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.59it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.08it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.86it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.11it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.92it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.16it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.98it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.93it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.56it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.21it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.96it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
<00:00, 4.28it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.18it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.69it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.48it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.78it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.99it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.58it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 4.01it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 4.52it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 4.25it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.89it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.70it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.35it/s]
Lloading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.81it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.52it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.28it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.09it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.26it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.06it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
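This warning only concerns which image-processor implementation gets instantiated; it can be silenced by passing use_fast explicitly. A minimal sketch (assumption: the standard AutoImageProcessor API rather than this repo's wrapper, and that a fast variant exists for this processor class in the installed transformers version):

from transformers import AutoImageProcessor

# Pin the slow (PIL-based) processor explicitly to keep today's behavior...
slow_proc = AutoImageProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct", use_fast=False)
# ...or opt in to the fast processor ahead of the v4.48 default, if available.
fast_proc = AutoImageProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct", use_fast=True)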
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
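The pixel limits above are tied to the patch geometry: with patch_size 14 and merge_size 2, each 28x28 pixel block maps to one vision token after spatial merging, so min_pixels 3136 and max_pixels 12845056 act as a 4-token floor and a 16384-token ceiling per image. A quick arithmetic check (illustrative only, not part of the job):

patch_size, merge_size = 14, 2
block = patch_size * merge_size      # 28 px per merged-token side
px_per_token = block * block         # 784 px per vision token
print(3136 // px_per_token)          # 4     -> floor implied by min_pixels
print(12845056 // px_per_token)      # 16384 -> ceiling implied by max_pixels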
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
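This warning means tokens beyond the pretrained vocabulary were registered, so their embedding rows start untrained. A common follow-up, sketched here under the assumption that a placeholder token such as "<image>" was added (the actual tokens are not shown in this log) and reusing the tokenizer from the earlier sketch with model as the loaded LlavaQwenForCausalLM:

# Hypothetical example; the real added tokens are not visible in this log.
num_added = tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<image>"]}  # assumed placeholder token
)
if num_added > 0:
    # New embedding rows are randomly initialized and must be trained.
    model.resize_token_embeddings(len(tokenizer))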
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
00:00<00:00, 4.24it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 4.18it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.85it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.91it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.84it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.73it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.43it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.43it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.09it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.95it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.50it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.60it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.78it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.77it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.73it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.50it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.82it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.53it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 4.05it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.63it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.29it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
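added_tokens.json, special_tokens_map.json, and chat_template.jinja resolving to None is not an error: for this checkpoint those pieces most likely live inside tokenizer.json and tokenizer_config.json rather than as standalone files. A minimal sketch of the equivalent explicit load (repo id from the cache paths above):

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
    print(len(tokenizer))                 # vocabulary size, added tokens included
    print(tokenizer.special_tokens_map)   # special tokens folded into the configs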
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.74it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.32it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.81it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.45it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
4.40it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 4.12it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.36it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.99it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.82it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.86it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.67it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.60it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.60it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.63it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.32it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.84it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.54it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.58it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.55it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.50it/s]loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
4.29it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 4.33it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.95it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.91it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.83it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.35it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.78it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.75it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.56it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.56it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 4.07it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 4.05it/s]
Loading checkpoint shards: 80%|█████�Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.23it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.95it/s]
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
�██ | 4/5 [00:01<00:00, 3.96it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.63it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.31it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 4.07it/s]loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.11it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.29it/s]loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.97it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.74it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 4.08it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.78it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.83it/s]loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.98it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.71it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.08it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.86it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.97it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.69it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.09it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.91it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.00it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.87it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
oading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 4.42it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.30it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.81it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.69it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.51it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.64it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.98it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.61it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.32it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.94it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.63it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.37it/s]loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.97it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.68it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
  "do_convert_rgb": true,
  "do_normalize": true,
  "do_rescale": true,
  "do_resize": true,
  "image_mean": [
    0.48145466,
    0.4578275,
    0.40821073
  ],
  "image_processor_type": "Qwen2VLImageProcessor",
  "image_std": [
    0.26862954,
    0.26130258,
    0.27577711
  ],
  "max_pixels": 12845056,
  "merge_size": 2,
  "min_pixels": 3136,
  "patch_size": 14,
  "processor_class": "Qwen2_5_VLProcessor",
  "resample": 3,
  "rescale_factor": 0.00392156862745098,
  "size": {
    "longest_edge": 12845056,
    "shortest_edge": 3136
  },
  "temporal_patch_size": 2
}
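The pixel bounds in this dump are multiples of one merged patch: (patch_size × merge_size)² = (14 × 2)² = 784 pixels, so min_pixels = 4 × 784 = 3,136 and max_pixels = 16,384 × 784 = 12,845,056. The deprecation warning above can also be silenced by opting into the fast processor explicitly. A minimal sketch, assuming the model id matching the snapshot paths in the log:

from transformers import AutoProcessor

# Assumed model id, matching the cache paths printed above.
model_id = "Qwen/Qwen2.5-VL-7B-Instruct"

# use_fast=True opts into the fast image processor (minor numerical
# differences vs. the slow one are expected, as the warning notes).
# The pixel budget is expressed in multiples of one 28x28 merged patch:
#   min_pixels = 4     * 28 * 28 = 3,136
#   max_pixels = 16384 * 28 * 28 = 12,845,056
processor = AutoProcessor.from_pretrained(
    model_id,
    use_fast=True,
    min_pixels=4 * 28 * 28,
    max_pixels=16384 * 28 * 28,
)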
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.04it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.85it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.07it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.95it/s]
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.60it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.64it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.34it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 4.14it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.09it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.94it/s]loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.48it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 4.09it/s]loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.98it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.94it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.87it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.58it/s]
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.81it/s]loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.56it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.28it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.78it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.51it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.81it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.44it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.79it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.49it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.87it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.64it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.98it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.79it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.04it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.84it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.80it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.48it/s]
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.13it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.94it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.85it/s]loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 4.01it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.97it/s]loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.42it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.33it/s]
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.86it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.63it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.22it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.02it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.00it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.80it/s]
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.40it/s]loading file chat_template.jinja from cache at None
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.06it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.17it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.97it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.84it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.46it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.91it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.76it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.39it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.78it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.46it/s]
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.78it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.41it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.37it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.09it/s]
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.59it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.20it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.39it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.11it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.75it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.35it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.85it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.54it/s]
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
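[annotation] The pixel budget in this dump is easy to unpack: `rescale_factor` is 1/255, `min_pixels` = 3136 = 56², and the effective grid unit is `patch_size * merge_size` = 28, so an image contributes roughly `H*W / 28²` merged visual tokens once resized into the [`min_pixels`, `max_pixels`] range (at most 12845056 / 784 = 16384 tokens, at least 4). A sketch of that arithmetic, as a simplified version of Qwen2-VL's smart resize rather than the exact implementation:

```python
import math

def visual_token_count(h: int, w: int,
                       patch_size: int = 14, merge_size: int = 2,
                       min_pixels: int = 3136, max_pixels: int = 12845056) -> int:
    """Approximate number of merged visual tokens for an h x w image.

    Simplified from Qwen2-VL's smart resize: snap both sides to multiples of
    patch_size * merge_size (= 28), then scale into the pixel budget.
    """
    unit = patch_size * merge_size
    # Snap each side to the 28-pixel grid.
    h_bar = max(unit, round(h / unit) * unit)
    w_bar = max(unit, round(w / unit) * unit)
    # Scale down (or up) so the area respects the pixel budget.
    if h_bar * w_bar > max_pixels:
        scale = math.sqrt((h * w) / max_pixels)
        h_bar = math.floor(h / scale / unit) * unit
        w_bar = math.floor(w / scale / unit) * unit
    elif h_bar * w_bar < min_pixels:
        scale = math.sqrt(min_pixels / (h * w))
        h_bar = math.ceil(h * scale / unit) * unit
        w_bar = math.ceil(w * scale / unit) * unit
    return (h_bar // unit) * (w_bar // unit)

# e.g. a 1080x1920 frame: 39 x 69 grid cells, about 2,700 visual tokens.
```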
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.64it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.27it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.09it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.83it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.17it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.01it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.19it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.97it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.20it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.08it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.88it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.53it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.25it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.01it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.26it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.99it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.28it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.05it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.28it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.18it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.31it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.06it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.96it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.77it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.13it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.92it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Loading checkpoint shards:  40%|████      | 2/5 [00:00<00:00,  3.84it/s]
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
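This warning means the run extended the vocabulary, so the corresponding embedding rows start untrained. A generic sketch of the pattern it refers to; the checkpoint below is a text-only stand-in so the stock Auto classes apply (the log's actual model is the custom LlavaQwenForCausalLM, which exposes the same hook):

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

# Hypothetical extra token, purely for illustration.
num_added = tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<|my_new_tag|>"]}
)
if num_added:
    # New embedding rows are randomly initialized and must be fine-tuned.
    model.resize_token_embeddings(len(tokenizer))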
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
  "attn_implementation": "flash_attention_2",
  "bos_token_id": 151643,
  "do_sample": true,
  "eos_token_id": [
    151645,
    151643
  ],
  "pad_token_id": 151643,
  "repetition_penalty": 1.05,
  "temperature": 0.1,
  "top_k": 1,
  "top_p": 0.001
}
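Note that this is effectively greedy decoding despite do_sample=true: top_k=1 leaves a single candidate token. A sketch reconstructing the same config programmatically, with the token ids copied from the dump above:

from transformers import GenerationConfig

gen_cfg = GenerationConfig(
    do_sample=True,
    temperature=0.1,
    top_k=1,            # a single candidate survives -> effectively greedy
    top_p=0.001,
    repetition_penalty=1.05,
    bos_token_id=151643,
    eos_token_id=[151645, 151643],
    pad_token_id=151643,
)
# outputs = model.generate(**inputs, generation_config=gen_cfg)  # usage sketch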
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.09it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.94it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
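A sketch of the load these shard and initialization messages correspond to. The import path assumes a LLaVA-NeXT-style source tree where LlavaQwenForCausalLM is defined; adjust it to this project's actual layout:

import torch
# Assumed location of the custom class; not a stock transformers import.
from llava.model.language_model.llava_qwen import LlavaQwenForCausalLM

model = LlavaQwenForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",
    torch_dtype=torch.bfloat16,               # dtype is an assumption, not in the log
    attn_implementation="flash_attention_2",  # matches the generate config above
)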
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.11it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.96it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.12it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.92it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.01it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.87it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.10it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.89it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
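The size bounds above are pixel-count budgets, not edge lengths: each image is resized so its area lies in [min_pixels, max_pixels], both multiples of (patch_size * merge_size)^2 = 28^2, so 3136 = 4 * 28 * 28 and 12845056 = 16384 * 28 * 28, with each 28x28 block becoming one merged vision token. Tightening max_pixels is the usual way to cap vision tokens per image; a sketch with hypothetical budgets:

from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",
    min_pixels=256 * 28 * 28,   # >= 256 vision tokens per image (hypothetical budget)
    max_pixels=1280 * 28 * 28,  # <= 1280 vision tokens per image (hypothetical budget)
)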
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
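The vision special tokens listed above are not typed by users; the chat template inserts them around image slots. A minimal usage sketch, with a hypothetical local image path:

from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "file:///path/to/example.jpg"},  # hypothetical path
        {"type": "text", "text": "Describe this image."},
    ],
}]
# Renders "<|im_start|>user ... <|vision_start|><|image_pad|><|vision_end|> ...":
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)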
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
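The jump from 151665 rows (ids up to 151664 above) to 151668 suggests this run adds three tokens of its own; padding the resized matrix to a multiple of 64, as the warning suggests, keeps the dimension Tensor Core friendly. A minimal sketch, assuming `model` and `tokenizer` are the objects this run is setting up:

# 151668 -> 151680; the padding rows are never emitted by the tokenizer.
model.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=64)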
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file chat_template.jinja from cache at None
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
loading file added_tokens.json from cache at None
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
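The dump above is the Qwen2_5_VLProcessor that transformers assembles from the cached snapshot: a Qwen2VLImageProcessor paired with a Qwen2TokenizerFast. A minimal sketch of the call that produces exactly this printout:

```python
from transformers import AutoProcessor

# Resolves preprocessor_config.json, tokenizer files, etc. from the HF cache
# (here under /fsx_0/user/zhaojiang/models/hub) or downloads them if missing.
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
print(type(processor).__name__)  # Qwen2_5_VLProcessor
```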
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
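The resize warning is advisory: 151668 is not a multiple of 8, so the enlarged embedding matmul falls off the Tensor Core fast path. A hedged sketch of the remedy the message points to; the multiple of 64 and the placeholder token are our assumptions, not values from this job's launch script:

```python
from transformers import AutoTokenizer, Qwen2_5_VLForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
tokenizer.add_special_tokens({"additional_special_tokens": ["<extra_0>"]})  # hypothetical
model = Qwen2_5_VLForConditionalGeneration.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
# pad_to_multiple_of rounds the vocab axis up (151668 would become 151680),
# keeping matmul shapes Tensor-Core friendly and silencing the warning.
model.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=64)
```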
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
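The deprecation note can be settled either way by passing `use_fast` explicitly. A minimal sketch; per the warning itself, the two paths may differ slightly in output:

```python
from transformers import AutoImageProcessor

# Opt in to the fast image processor ahead of the v4.48 default flip...
fast = AutoImageProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct", use_fast=True)
# ...or pin today's slow behavior explicitly.
slow = AutoImageProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct", use_fast=False)
```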
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.96it/s]
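The bar tracks the five safetensors shards of the 7B checkpoint being read back into a single state dict. A minimal sketch of a load that emits this bar; the dtype is an assumption for illustration, not read from this job:

```python
import torch
from transformers import Qwen2_5_VLForConditionalGeneration

# Reads all 5 shards from the HF cache and prints the
# "Loading checkpoint shards: x/5" progress bar seen above.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",
    torch_dtype=torch.bfloat16,  # assumed dtype, not confirmed by the log
)
```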
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
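The warning above fires inside PreTrainedModel.resize_token_embeddings when no pad_to_multiple_of is given: 151668 is not a multiple of 8 or 64, so the embedding and lm_head GEMM dimensions fall off the Tensor Core fast path. A minimal sketch of the fix, assuming the training script resizes roughly like this (the loading calls below are an assumption; the script itself is not part of this log):

# Sketch, not this run's actual code. Padding the resized vocab to a
# multiple of 64 keeps the embedding/lm_head matmul dimensions
# Tensor-Core friendly: 151668 rounds up to 151680 (64 * 2370).
from transformers import AutoTokenizer, Qwen2_5_VLForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct"
)

# len(tokenizer) counts the base vocab plus added tokens (151668 here).
model.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=64)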
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
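For reference, the image-processor fields above all derive from the 28-pixel visual token (patch_size 14 times merge_size 2): min_pixels 3136 is 4 * 28^2 and max_pixels 12845056 is 16384 * 28^2. Both bounds can be overridden at load time if the default pixel budget is too heavy for the hardware; a sketch, with illustrative budgets rather than this run's values:

from transformers import AutoProcessor

# Each visual token covers a 28x28 region, so pixel budgets are
# conventionally expressed as multiples of 28 * 28.
processor = AutoProcessor.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",
    min_pixels=256 * 28 * 28,   # illustrative lower bound per image
    max_pixels=1280 * 28 * 28,  # illustrative cap to bound per-image memory
)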
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
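The added_tokens_decoder above ends at ID 151664 (151665 entries counting the base vocab), while the resize target is 151668, which suggests the script registered three tokens of its own. The reminder means those new embedding rows start as random vectors. A hedged sketch of the usual add-then-resize pattern, reusing the tokenizer and model from the sketch above (the token strings are placeholders, not taken from this log):

# Sketch of the standard pattern; the "<placeholder_i>" strings are
# hypothetical stand-ins for whatever three tokens this run added.
num_added = tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<placeholder_1>", "<placeholder_2>", "<placeholder_3>"]}
)
model.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=64)

# The new rows are randomly initialized, so the input embeddings (and a
# tied lm_head) must stay trainable for these tokens to become meaningful.
assert model.get_input_embeddings().weight.requires_grad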
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
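The processor dump above can be reproduced directly; a sketch assuming the same checkpoint, with min_pixels/max_pixels copied from the logged values:

from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",
    min_pixels=3136,        # 4 * 28 * 28, the smallest allowed image area
    max_pixels=12845056,    # 16384 * 28 * 28, the largest allowed image area
)
print(processor)  # prints a Qwen2_5_VLProcessor summary like the one above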
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
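A hypothetical sketch of the pattern that triggers this notice together with the resize warning above; the token strings are placeholders, not the tokens this run actually added (the log only shows the resulting size, 151668):

from transformers import AutoTokenizer, Qwen2_5_VLForConditionalGeneration

model_id = "Qwen/Qwen2.5-VL-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id)

# Placeholder tokens; whatever is added here ends up with randomly
# initialized embedding rows after the resize.
tokenizer.add_special_tokens({"additional_special_tokens": ["<obs_start>", "<obs_end>"]})
model.resize_token_embeddings(len(tokenizer))

# The notice above is a reminder that those new rows carry no signal yet:
# keep the input embeddings trainable so the new tokens are learned.
assert model.get_input_embeddings().weight.requires_grad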
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 4.04it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.33it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.11it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
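Note: a hedged sketch of the load step reflected in the shard messages above. LlavaQwenForCausalLM is a custom class from the training codebase and its import path is assumed here; with stock transformers one would load Qwen2_5_VLForConditionalGeneration instead.

import torch
from llava.model.language_model.llava_qwen import LlavaQwenForCausalLM  # assumed path

model = LlavaQwenForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # matches the generation config dumped below
)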
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
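Note: a hedged sketch showing how the generation config dumped above can be reconstructed or overridden explicitly; values are copied from the dump and amount to near-greedy decoding (top_k=1, top_p=0.001, temperature=0.1) with a mild repetition penalty.

from transformers import GenerationConfig

gen_config = GenerationConfig(
    do_sample=True,
    temperature=0.1,
    top_k=1,
    top_p=0.001,
    repetition_penalty=1.05,
    bos_token_id=151643,
    eos_token_id=[151645, 151643],
    pad_token_id=151643,
)
# outputs = model.generate(**inputs, generation_config=gen_config)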
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
    151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151657: AddedToken("<tool_call>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
    151658: AddedToken("</tool_call>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
    151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
    151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
    151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
    151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
    151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
    151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
  "processor_class": "Qwen2_5_VLProcessor"
}
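For reference, a hedged sketch of constructing the processor dumped above; it is assumed usage, not taken from the job's code. The min_pixels/max_pixels keyword arguments mirror the logged values and are the documented way to bound the number of visual tokens per image:

    from transformers import AutoProcessor

    processor = AutoProcessor.from_pretrained(
        "Qwen/Qwen2.5-VL-7B-Instruct",
        min_pixels=3136,       # matches "shortest_edge" in the dump above
        max_pixels=12845056,   # matches "longest_edge" in the dump above
    )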
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
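Note: this warning fires because the script resized the input embeddings to 151668 rows, three more than the 151665 ids the stock tokenizer defines, without padding the new size to a Tensor-Core-friendly multiple. A hedged sketch of the usual remedy, with hypothetical token names standing in for whatever the script actually added:

# Sketch of the usual fix (the actual training script is not shown here).
from transformers import AutoTokenizer, Qwen2_5_VLForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct"  # 7B weights; illustration only
)

# Hypothetical extra tokens, matching the 151665 -> 151668 jump in this log:
tokenizer.add_tokens(["<extra_0>", "<extra_1>", "<extra_2>"])

# pad_to_multiple_of keeps the matmul dimension Tensor-Core friendly,
# as the NVIDIA guide linked in the warning recommends:
model.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=64)
# 151668 becomes 151680 (= 64 * 2370); the padding rows are never indexed.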
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
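Note: the cache path above follows the Hub layout <HF_HOME>/hub/models--<org>--<name>/snapshots/<commit>/<file>, which suggests HF_HOME=/fsx_0/user/zhaojiang/models on this cluster. A sketch of two equivalent ways to point transformers at that cache, assuming standard huggingface_hub caching and nothing custom:

import os

# Option 1: set the cache root before any transformers/huggingface_hub import.
os.environ["HF_HOME"] = "/fsx_0/user/zhaojiang/models"  # as in this log

from transformers import AutoProcessor

# Option 2: pass the hub cache directory explicitly per call.
processor = AutoProcessor.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",
    cache_dir="/fsx_0/user/zhaojiang/models/hub",
)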
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
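Note: the checkpoint was saved with the slow (PIL-based) image processor, and transformers v4.48 flips the default to the fast one, so pinning use_fast explicitly keeps preprocessing stable across versions. A sketch:

from transformers import AutoProcessor

# Keep the slow processor the checkpoint was saved with:
processor = AutoProcessor.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct", use_fast=False
)

# Or opt in to the fast one now (the warning notes minor numeric differences):
processor_fast = AutoProcessor.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct", use_fast=True
)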
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
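Note: the image-processor config above fixes the resize rule: spatial dimensions snap to multiples of patch_size * merge_size = 28, total area is clamped to [min_pixels, max_pixels] = [3136, 12845056], and each 28x28 block (a 2x2 group of 14x14 patches) becomes one <|image_pad|> position. A hedged re-derivation, mirroring the smart_resize helper published with Qwen2-VL rather than code from this run:

import math

PATCH, MERGE = 14, 2
FACTOR = PATCH * MERGE            # 28: output dims snap to this grid
MIN_PIXELS, MAX_PIXELS = 3136, 12845056

def smart_resize(h: int, w: int) -> tuple[int, int]:
    # Round each side to the 28px grid, then rescale if the area
    # falls outside [MIN_PIXELS, MAX_PIXELS].
    hb = round(h / FACTOR) * FACTOR
    wb = round(w / FACTOR) * FACTOR
    if hb * wb > MAX_PIXELS:      # too large: shrink, rounding down
        s = math.sqrt(h * w / MAX_PIXELS)
        hb = math.floor(h / s / FACTOR) * FACTOR
        wb = math.floor(w / s / FACTOR) * FACTOR
    elif hb * wb < MIN_PIXELS:    # too small: enlarge, rounding up
        s = math.sqrt(MIN_PIXELS / (h * w))
        hb = math.ceil(h * s / FACTOR) * FACTOR
        wb = math.ceil(w * s / FACTOR) * FACTOR
    return hb, wb

h, w = smart_resize(1080, 1920)
print(h, w, (h // FACTOR) * (w // FACTOR))
# 1092 1932 2691 -> a 1080p frame costs 2691 image tokens under this config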
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
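
The processor dump pins the dynamic-resolution bounds at min_pixels=3136 and max_pixels=12845056, i.e. between 4 and 16384 merged visual tokens per image (patch_size=14 with merge_size=2 gives 28x28 = 784 px per token). A minimal sketch of reproducing this configuration with AutoProcessor (the exact call used by the training script is not visible in this log):

from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",
    min_pixels=3136,       # 4 * 28*28: lower bound on total image pixels
    max_pixels=12845056,   # 16384 * 28*28: upper bound on total image pixels
)
print(processor.image_processor.min_pixels, processor.image_processor.max_pixels)
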
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
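
This warning is emitted once per rank: the script grows the vocabulary to 151668 ids and resizes the embedding matrix without an alignment hint, so the matmul dimensions miss the Tensor Core requirements described in the linked NVIDIA guide. A minimal sketch of the suggested fix, assuming the standard resize_token_embeddings call (the actual call site is not shown in this log):

from transformers import AutoTokenizer, Qwen2_5_VLForConditionalGeneration

model_id = "Qwen/Qwen2.5-VL-7B-Instruct"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
# ... script-specific special tokens are presumably added here,
# pushing len(tokenizer) from 151665 to 151668 ...

# Rounding the new embedding count up to a multiple of 64 (151668 -> 151680)
# keeps the embedding/lm_head matmuls Tensor Core friendly; the padding rows
# are never indexed by real token ids.
model.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=64)
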
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
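
These resolution lines come from the tokenizer loader: vocab.json, merges.txt, tokenizer.json, and tokenizer_config.json are served from the local hub cache, while added_tokens.json, special_tokens_map.json, and chat_template.jinja resolve to None because the snapshot does not ship them. A hedged sketch of the equivalent call, with cache_dir inferred from the paths above:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",
    cache_dir="/fsx_0/user/zhaojiang/models/hub",  # hub cache root seen in the log
)
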
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
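The min_pixels/max_pixels pair in the image-processor dump above bounds Qwen2.5-VL's dynamic-resolution resizing (3136 = 4 * 28 * 28 and 12845056 = 16384 * 28 * 28). Both can be overridden when loading the processor; a sketch with illustrative override values:

    from transformers import AutoProcessor

    processor = AutoProcessor.from_pretrained(
        "Qwen/Qwen2.5-VL-7B-Instruct",
        min_pixels=256 * 28 * 28,   # illustrative; the dumped default is 4 * 28 * 28
        max_pixels=1280 * 28 * 28,  # illustrative; the dumped default is 16384 * 28 * 28
    )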
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
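The fix the warning points at is to pass pad_to_multiple_of when resizing, so the embedding rows round up to a Tensor Core-friendly size. A sketch, assuming the model and tokenizer objects from this run (the multiple of 64 is an assumption; any multiple of 8 satisfies the alignment requirement, and the padded rows are never emitted by the tokenizer):

    # 151668 is not a multiple of 8; rounding up to 151680 keeps the
    # embedding and LM-head matmuls Tensor Core eligible.
    model.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=64)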
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
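That reminder matters when parts of the model are frozen for fine-tuning: the newly added embedding rows start out untrained and only become meaningful if the embedding matrices receive gradients. A sketch of the explicit opt-in, assuming the resized model object from above:

    # Ensure the resized input/output embeddings are trainable even if
    # other parameters are frozen.
    model.get_input_embeddings().weight.requires_grad_(True)
    model.get_output_embeddings().weight.requires_grad_(True)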
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
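[editor note] The settings above amount to near-greedy decoding (top_k=1 with a very small top_p and temperature, despite do_sample=true). They can be rebuilt or overridden explicitly; a sketch, with `model` and `inputs` as placeholders:

    from transformers import GenerationConfig

    # Sketch: reconstruct the decoding settings from the dump above.
    gen_config = GenerationConfig(
        bos_token_id=151643,
        eos_token_id=[151645, 151643],
        pad_token_id=151643,
        do_sample=True,
        repetition_penalty=1.05,
        temperature=0.1,
        top_k=1,
        top_p=0.001,
    )
    # outputs = model.generate(**inputs, generation_config=gen_config)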
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
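[editor note] Both the `use_fast` warning above and the `min_pixels`/`max_pixels` bounds in this dump are load-time knobs on the processor. A hedged sketch; the pixel values are illustrative, not the ones this run used:

    # Sketch: opt in to the fast image processor (silences the slow-processor
    # warning) and tune the dynamic-resolution window that bounds the number
    # of visual tokens per image.
    from transformers import AutoProcessor

    processor = AutoProcessor.from_pretrained(
        "Qwen/Qwen2.5-VL-7B-Instruct",
        use_fast=True,
        min_pixels=256 * 28 * 28,    # illustrative lower bound
        max_pixels=1280 * 28 * 28,   # illustrative upper bound
    )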
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
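[editor note] The warning points at eva_vit.py:622. A hedged sketch of the forward-compatible call it asks for, assuming the EVA-CLIP checkpoint holds only tensors and state dicts; `checkpoint_path` and `map_location` are as in the logged line:

    # Sketch: restrict unpickling to tensors/state dicts. If the checkpoint
    # stores other Python objects, allowlist them first via
    # torch.serialization.add_safe_globals([...]).
    checkpoint = torch.load(
        checkpoint_path, map_location=map_location, weights_only=True
    )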
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
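The FutureWarning above (emitted once per rank, deduplicated here) points at eva_vit.py:622. A minimal sketch of the fix the warning recommends, assuming the checkpoint holds only tensors and plain containers; checkpoint_path and map_location are hypothetical stand-ins mirroring the names in the source line:

    import torch

    checkpoint_path = "eva_clip_checkpoint.pt"  # hypothetical path for illustration
    map_location = "cpu"                        # hypothetical; any valid map_location works

    # weights_only=True restricts unpickling to tensors and primitive
    # containers, so a tampered checkpoint cannot run arbitrary code during
    # load. This is the default PyTorch will switch to in a future release.
    checkpoint = torch.load(checkpoint_path, map_location=map_location, weights_only=True)

    # If the checkpoint legitimately stores other objects, allowlist those
    # classes explicitly rather than reverting to weights_only=False, e.g.:
    #   import argparse
    #   torch.serialization.add_safe_globals([argparse.Namespace])

For checkpoints produced by your own training runs this is usually a one-line change; only files that pickle custom classes need the add_safe_globals allowlist.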
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
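These cache-hit messages come from Hugging Face `datasets`. A minimal sketch of the call shape that produces them on a warm cache, assuming the corpus was materialized with the `webdataset` builder under cache_dir=/fsx_0/user/zhaojiang/wb; the shard glob below is a hypothetical placeholder, not taken from the log:

    from datasets import load_dataset

    # Hypothetical shard pattern; the real data_files are not shown in this log.
    ds = load_dataset(
        "webdataset",
        data_files={"train": "/fsx_0/user/zhaojiang/shards/*.tar"},
        cache_dir="/fsx_0/user/zhaojiang/wb",
    )
    # Repeat calls reuse the prepared Arrow cache instead of rebuilding,
    # which is what "Found cached dataset webdataset (...)" reports.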
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/train/train.py:1616: FutureWarning: `tokenizer` is deprecated and will be removed in version 5.0.0 for `LLaVATrainer.__init__`. Use `processing_class` instead.
trainer = LLaVATrainer(
Using auto half precision backend
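The FutureWarning above points at train.py:1616. A minimal sketch of the fix the warning itself suggests, assuming LLaVATrainer forwards its keyword arguments to transformers.Trainer (argument names other than the swapped keyword are illustrative):

    # Before (emits the FutureWarning):
    #   trainer = LLaVATrainer(model=model, tokenizer=tokenizer, args=training_args)

    # After: pass the tokenizer through `processing_class` instead.
    trainer = LLaVATrainer(
        model=model,
        processing_class=tokenizer,
        args=training_args,
    )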
Attempting to resume from /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-8000
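A minimal sketch of how a resume like this is typically requested through the transformers Trainer API; the checkpoint path is the one shown above, but whether this job passes it explicitly or lets the trainer auto-detect is an assumption:

    trainer.train(
        resume_from_checkpoint="/fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-8000"
    )
    # Alternatively, resume_from_checkpoint=True auto-detects the newest
    # checkpoint-* directory under the configured output_dir.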
***** Running training *****
Num examples = 194,420,624
Num Epochs = 3
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 1,024
Gradient Accumulation steps = 1
Total optimization steps = 569,592
Number of trainable parameters = 1,365,239,712
Continuing training from checkpoint, will skip to saved global_step
Continuing training from epoch 0
Continuing training from global step 8000
Will skip the first 0 epochs then the first 8000 batches in the first epoch.
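A quick consistency check of the step count above, assuming a 16-node x 8-GPU topology (consistent with the run name logged below) and that the final short batch of each epoch still counts as a step:

    import math

    num_examples = 194_420_624
    total_batch = 8 * 16 * 8   # per-device batch x nodes x GPUs per node = 1,024 (assumed topology)
    epochs = 3
    steps = math.ceil(num_examples / total_batch) * epochs
    print(steps)               # 569592, matching "Total optimization steps"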
Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true"
wandb: Currently logged in as: jchen169 to https://api.wandb.ai. Use `wandb login --relogin` to force relogin
wandb: Using wandb-core as the SDK backend. Please refer to https://wandb.me/wandb-core for more information.
wandb: Tracking run with wandb version 0.19.6
wandb: Run data is saved locally in /opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/wandb/run-20250217_203915-bihwjece
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run qwen-vl-diff-clip-16-nodes_early_pool2d_4
wandb: ⭐️ View project at https://wandb.ai/jchen169/huggingface
wandb: 🚀 View run at https://wandb.ai/jchen169/huggingface/runs/bihwjece
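As the banner above notes, this logging can be switched off with an environment variable. A minimal sketch; it must take effect before the trainer constructs its W&B callback (exporting WANDB_DISABLED=true in the job script achieves the same):

    import os

    # Set before the Trainer is created so the W&B integration never starts.
    os.environ["WANDB_DISABLED"] = "true"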
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
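The remedy the warning itself recommends is to opt in to weights_only=True and, if the file legitimately contains non-tensor objects, allowlist their types via torch.serialization.add_safe_globals instead of falling back to unrestricted pickling. A sketch, with an illustrative file name:

import torch

rng_file = "checkpoint-8000/rng_state_0.pth"   # illustrative path
# If loading fails on a specific type, allowlist it explicitly, e.g.:
# torch.serialization.add_safe_globals([SomeKnownSafeClass])
checkpoint_rng_state = torch.load(rng_file, weights_only=True)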
0%|          | 0/569592 [00:00<?, ?it/s]
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
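For reference, a minimal sketch of the remediation this FutureWarning points at, assuming a trusted-but-generic checkpoint path ("rng_state.pth" here is hypothetical; the trainer's actual rng_file is whatever file triggered the warning):

    import torch

    # Safer default the warning describes: only tensors and primitive
    # containers may be unpickled.
    checkpoint_rng_state = torch.load("rng_state.pth", weights_only=True)

    # If the file legitimately holds other object types, allowlist them
    # explicitly (torch.serialization.add_safe_globals, available on recent
    # PyTorch releases) rather than staying on weights_only=False:
    # torch.serialization.add_safe_globals([SomeTrustedClass])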
1%|▏ | 8001/569592 [00:56<45:54, 203.88it/s]
1%|▏ | 8012/569592 [01:12<6:50:38, 22.79it/s]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (95835312 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
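The 89478485-pixel ceiling in this warning is PIL's default Image.MAX_IMAGE_PIXELS. A minimal sketch, assuming the oversized images are trusted, of raising the ceiling before the dataloader workers open them (the 200_000_000 budget is an arbitrary example value):

    from PIL import Image

    # Default decompression-bomb guard; prints 89478485 on stock Pillow.
    print(Image.MAX_IMAGE_PIXELS)

    # Raise the budget for known-good data; None disables the check entirely.
    Image.MAX_IMAGE_PIXELS = 200_000_000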
1%|▏ | 8013/569592 [01:13<8:04:25, 19.32it/s]
1%|▏ | 8075/569592 [03:54<430:25:52, 2.76s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (103117440 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
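Conversely, a sketch of failing fast on such images rather than letting them through with a warning, by promoting DecompressionBombWarning to an exception ("big_sample.jpg" is a hypothetical path):

    import warnings
    from PIL import Image

    # Treat the warning as an error so oversized images raise in Image.open.
    warnings.simplefilter("error", Image.DecompressionBombWarning)
    try:
        img = Image.open("big_sample.jpg")
        img.load()
    except Image.DecompressionBombWarning:
        img = None  # skip the oversized sample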
1%|▏ | 8076/569592 [03:55<345:08:28, 2.21s/it]
1%|▏ | 8080/569592 [04:08<491:32:48, 3.15s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (94638025 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
1%|▏ | 8081/569592 [04:09<386:20:59, 2.48s/it]
1%|▏ | 8082/569592 [04:12<393:18:22, 2.52s/it]
1%|▏ | 8092/569592 [04:35<324:14:46, 2.08s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (98085130 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
1%|▏ | 8093/569592 [04:37<321:18:08, 2.06s/it]
1%|▏ | 8103/569592 [05:05<392:46:54, 2.52s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (91375000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
1%|▏ | 8104/569592 [05:06<317:58:38, 2.04s/it]
1%|▏ | 8105/569592 [05:10<436:58:05, 2.80s/it]
1%|▏ | 8132/569592 [06:26<436:01:58, 2.80s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (97128300 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
1%|▏ | 8133/569592 [06:27<379:34:02, 2.43s/it]
1%|▏ | 8172/569592 [08:08<413:14:08, 2.65s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (91595708 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
1%|▏ | 8173/569592 [08:11<409:17:25, 2.62s/it]
1%|▏ | 8275/569592 [12:32<325:01:18, 2.08s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (89543531 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
1%|▏ | 8276/569592 [12:36<428:58:30, 2.75s/it]
1%|▏ | 8309/569592 [14:01<451:13:31, 2.89s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (98911692 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
1%|▏ | 8310/569592 [14:03<399:10:57, 2.56s/it]
1%|▏ | 8311/569592 [14:04<323:43:03, 2.08s/it]
1%|▏ | 8372/569592 [16:51<389:47:54, 2.50s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (98911692 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
1%|▏ | 8373/569592 [16:52<316:34:50, 2.03s/it]
1%|▏ | 8475/569592 [21:25<254:20:24, 1.63s/it]
1%|▏ | 8475/569592 [21:25<254:20:24, 1.63s/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (99387746 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
/it]
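These DecompressionBombWarning lines, which recur throughout the run below, are PIL's guard against decompression bombs: any image that decodes to more than Image.MAX_IMAGE_PIXELS pixels (89,478,485 by default, sized so a 3-byte-per-pixel decode stays near 256 MiB) triggers the warning, and anything over twice the limit raises DecompressionBombError. For a curated corpus where ~100-megapixel samples are expected, the limit is usually raised in the dataloader. A minimal sketch, assuming trusted data; the helper name and the downscale cap are illustrative, not taken from this job's code:

    from PIL import Image

    # Raise PIL's decompression-bomb ceiling for trusted training data.
    # Use None to disable the check entirely.
    Image.MAX_IMAGE_PIXELS = 200_000_000

    def load_rgb(path, max_side=4096):
        # max_side is a hypothetical cap: downscale very large samples so a
        # single image cannot dominate preprocessing time.
        img = Image.open(path).convert("RGB")
        if max(img.size) > max_side:
            img.thumbnail((max_side, max_side), Image.Resampling.LANCZOS)
        return img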
1%|▏ | 8476/569592 [21:30<387:19:05, 2.48s/it]
[... steps 8477–8486 elided ...]
1%|▏ | 8487/569592 [21:55<375:25:09, 2.41s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
1%|▏ | 8488/569592 [22:00<462:25:42, 2.97s/it]
[... steps 8489–8600 elided; the bar ticks over from 1% to 2% at step 8544 ...]
2%|▏ | 8601/569592 [27:05<324:00:20, 2.08s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (101518284 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 8602/569592 [27:11<510:56:48, 3.28s/it]
[... steps 8603–8720 elided ...]
2%|▏ | 8721/569592 [32:46<322:11:30, 2.07s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90481664 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 8722/569592 [32:50<394:15:01, 2.53s/it]
[... steps 8723–8869 elided ...]
2%|▏ | 8870/569592 [39:50<440:53:45, 2.83s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (91927968 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 8871/569592 [39:51<353:25:20, 2.27s/it]
[... steps 8872–8886 elided ...]
2%|▏ | 8887/569592 [40:32<358:53:06, 2.30s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 8888/569592 [40:33<327:27:08, 2.10s/it]
[... steps 8889–8961 elided ...]
2%|▏ | 8962/569592 [44:17<420:29:13, 2.70s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (92113584 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 8963/569592 [44:19<426:20:01, 2.74s/it]
[... steps 8964–8972 elided ...]
2%|▏ | 8973/569592 [44:43<330:51:01, 2.12s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100000000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 8974/569592 [44:47<430:07:17, 2.76s/it]
[... steps 8975–8989 elided ...]
2%|▏ | 8990/569592 [45:30<483:23:17, 3.10s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 8991/569592 [45:32<420:59:22, 2.70s/it]
[... steps 8992–8999 elided ...]
2%|▏ | 9000/569592 [45:54<345:25:41, 2.22s/it]
Saving model checkpoint to /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-9000
Configuration saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-9000/config.json
Configuration saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-9000/generation_config.json
The model is bigger than the maximum size per checkpoint (5GB) and is going to be split into 6 checkpoint shards. You can find where each parameter has been saved in the index located at /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-9000/model.safetensors.index.json.
tokenizer config file saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-9000/tokenizer_config.json
Special tokens file saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-9000/special_tokens_map.json
Deleting older checkpoint [/fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-8000] due to args.save_total_limit
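This block is the Hugging Face Trainer's periodic checkpoint: model config, weights sharded into six safetensors files plus an index (because the model exceeds the 5GB per-file limit), tokenizer files, and finally rotation of the oldest checkpoint under args.save_total_limit. A TrainingArguments sketch consistent with what the log shows; the values are inferred, not taken from the actual training script:

    from transformers import TrainingArguments

    args = TrainingArguments(
        output_dir="/fsx_0/user/zhaojiang/models/qwen-vl-gen",
        save_strategy="steps",
        save_steps=1000,        # checkpoints land at step 8000, 9000, ... as in this log
        save_total_limit=1,     # assumed; enough to explain checkpoint-8000 being deleted
        save_safetensors=True,  # weights written as sharded *.safetensors plus an index
    )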
2%|▏ | 9001/569592 [48:10<6594:56:44, 42.35s/it]
[... steps 9002–9011 elided; the smoothed rate decays from 42.35 s/it back toward ~3 s/it ...]
2%|▏ | 9012/569592 [48:26<378:51:15, 2.43s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (92370816 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
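The jump to 42.35 s/it at step 9001 is not a training slowdown: tqdm displays an exponentially weighted moving average of per-step time (smoothing factor 0.3 by default), so the roughly two-minute checkpoint write at step 9000 inflates the estimate and then decays over the following steps. A small sketch reproduces the decay; the ~1 s post-save step times are an assumption consistent with the log:

    def ema(avg, sample, smoothing=0.3):
        # tqdm-style update: the newest sample gets weight `smoothing`.
        return smoothing * sample + (1 - smoothing) * avg

    avg = 2.22                      # s/it just before the save (step 9000)
    for dt in [136.0] + [1.0] * 4:  # one stalled step, then fast steps
        avg = ema(avg, dt)
        print(f"{avg:.2f} s/it")    # prints 42.35, 29.95, 21.26, 15.18, 10.93
                                    # vs. 42.35, 29.91, 21.22, 15.17, 10.92 in the log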
[progress: steps 9013–9040/569592 (2%), elapsed 48:33–50:24, 2.74–4.81 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (95728560 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[progress: steps 9041–9069/569592 (2%), elapsed 50:28–51:42, 1.12–4.11 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (98312448 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[progress: steps 9070–9086/569592 (2%), elapsed 51:43–52:25, 1.79–3.34 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[progress: steps 9087–9092/569592 (2%), elapsed 52:30–52:42, 1.99–3.04 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (97202284 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[progress: steps 9093–9114/569592 (2%), elapsed 52:44–53:37, 2.24–2.99 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100663296 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[progress: steps 9115–9126/569592 (2%), elapsed 53:41–54:11, 2.19–3.21 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[progress: steps 9127–9200/569592 (2%), elapsed 54:13–57:54, 1.24–4.66 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[progress: steps 9201–9257/569592 (2%), elapsed 57:56–1:00:54, 2.09–5.26 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (95564208 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[progress: steps 9258–9307/569592 (2%), elapsed 1:00:58–1:03:12, 1.41–4.37 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (115022592 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[progress: steps 9308–9351/569592 (2%), elapsed 1:03:14–1:05:05, 2.04–3.67 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (99598400 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 9352/569592 [1:05:06<317:01:45, 2.04s/it]
[progress: steps 9353–9418/569592 (2%), elapsed 1:05:08–1:08:28, 1.45–4.48 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[progress: steps 9419–9473/569592 (2%), elapsed 1:08:33–1:10:53, 1.90–3.98 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90994915 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[progress: steps 9474–9544/569592 (2%), elapsed 1:10:57–1:14:29, 1.41–4.90 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (96058696 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 9545/569592 [1:14:36<525:36:18, 3.38s/it]
[progress: steps 9546–9556/569592 (2%), elapsed 1:14:37–1:15:00, 2.07–3.24 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[progress bar: steps 9557–9634/569592 (2%), elapsed 1:15:05 → 1:19:12, ~1.5–4.6 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
[progress bar: steps 9635–9650/569592 (2%), elapsed 1:19:17 → 1:19:48, ~1.5–3.0 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (96086991 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
[progress bar: steps 9651–9656/569592 (2%), elapsed 1:19:53 → 1:20:06, ~2.2–3.1 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
[progress bar: steps 9657–9706/569592 (2%), elapsed 1:20:07 → 1:22:20, ~2.0–3.9 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90157815 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (98263910 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
[progress bar: steps 9707–9894/569592 (2%), elapsed 1:22:25 → 1:31:34, ~1.5–4.6 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (94099482 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
[progress bar: steps 9895–9938/569592 (2%), elapsed 1:31:35 → 1:33:35, ~2.0–3.8 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (93322730 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
[progress bar: steps 9939–9947/569592 (2%), elapsed 1:33:35 → 1:34:02, ~2.7–3.6 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (101647872 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
[progress bar: steps 9948–9986/569592 (2%), elapsed 1:34:05 → 1:36:09, ~1.2–4.9 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (93102552 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
[progress bar: steps 9987–9999/569592 (2%), elapsed 1:36:10 → 1:36:39, ~1.7–2.9 s/it]
2%|▏ | 10000/569592 [1:36:43<449:06:32, 2.89s/it]
Saving model checkpoint to /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-10000
Configuration saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-10000/config.json
Configuration saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-10000/generation_config.json
The model is bigger than the maximum size per checkpoint (5GB) and is going to be split into 6 checkpoint shards. You can find where each parameter has been saved in the index located at /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-10000/model.safetensors.index.json.
tokenizer config file saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-10000/tokenizer_config.json
Special tokens file saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-10000/special_tokens_map.json
Deleting older checkpoint [/fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-9000] due to args.save_total_limit
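[editor's note: the save/shard/rotate messages above match the Hugging Face Trainer's checkpointing behavior. A hedged sketch of the corresponding settings; the values are assumptions inferred from the log, not the job's actual config:]

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="/fsx_0/user/zhaojiang/models/qwen-vl-gen",
    save_steps=1000,        # produces checkpoint-9000, checkpoint-10000, ...
    save_total_limit=1,     # deletes older checkpoints, as logged above
    save_safetensors=True,  # >5 GB models are split into model-0000X-of-0000N.safetensors shards
)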
[upload progress: "Upload 138 LFS files"; shards model-00001-of-00006.safetensors through model-00005-of-00006.safetensors (4.93–5.00 GB each) uploading in parallel at roughly 23–160 MB/s]
model-00004-of-00006.safetensors: 13%|█▎ | 640M/5.00G [00:09<00:59, 73.5MB/s][A[A[A[A
model-00003-of-00006.safetensors: 12%|█▏ | 608M/4.93G [00:09<01:06, 64.7MB/s][A[A[A
model-00001-of-00006.safetensors: 12%|█▏ | 576M/4.97G [00:09<00:59, 74.0MB/s][A
model-00002-of-00006.safetensors: 13%|█▎ | 624M/4.99G [00:09<01:02, 70.4MB/s][A[A
model-00005-of-00006.safetensors: 10%|▉ | 496M/4.97G [00:09<01:06, 67.7MB/s][A[A[A[A[A[A
model-00004-of-00006.safetensors: 13%|█▎ | 656M/5.00G [00:09<01:03, 68.5MB/s][A[A[A[A
model-00003-of-00006.safetensors: 13%|█▎ | 624M/4.93G [00:09<01:04, 67.2MB/s][A[A[A
model-00001-of-00006.safetensors: 12%|█▏ | 592M/4.97G [00:09<00:59, 73.7MB/s][A
model-00002-of-00006.safetensors: 13%|█▎ | 640M/4.99G [00:09<01:00, 72.2MB/s][A[A
model-00005-of-00006.safetensors: 10%|█ | 512M/4.97G [00:09<01:08, 65.4MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 12%|█▏ | 608M/4.97G [00:09<00:57, 75.7MB/s][A
model-00003-of-00006.safetensors: 13%|█▎ | 640M/4.93G [00:09<01:02, 69.0MB/s][A[A[A
model-00004-of-00006.safetensors: 13%|█▎ | 672M/5.00G [00:09<01:05, 65.9MB/s][A[A[A[A
model-00002-of-00006.safetensors: 13%|█▎ | 656M/4.99G [00:09<00:59, 72.3MB/s][A[A
model-00005-of-00006.safetensors: 11%|█ | 528M/4.97G [00:09<01:05, 67.7MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 13%|█▎ | 624M/4.97G [00:10<00:58, 74.3MB/s][A
model-00003-of-00006.safetensors: 13%|█▎ | 656M/4.93G [00:09<01:00, 70.3MB/s][A[A[A
model-00004-of-00006.safetensors: 14%|█▍ | 688M/5.00G [00:10<01:02, 69.2MB/s][A[A[A[A
model-00002-of-00006.safetensors: 13%|█▎ | 672M/4.99G [00:10<01:01, 69.8MB/s][A[A
model-00005-of-00006.safetensors: 11%|█ | 544M/4.97G [00:10<01:03, 70.2MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 13%|█▎ | 640M/4.97G [00:10<00:58, 74.2MB/s][A
model-00003-of-00006.safetensors: 14%|█▎ | 672M/4.93G [00:10<01:00, 70.6MB/s][A[A[A
model-00004-of-00006.safetensors: 14%|█▍ | 704M/5.00G [00:10<01:04, 66.5MB/s][A[A[A[A
model-00002-of-00006.safetensors: 14%|█▍ | 688M/4.99G [00:10<01:00, 71.1MB/s][A[A
model-00001-of-00006.safetensors: 13%|█▎ | 656M/4.97G [00:10<00:57, 74.7MB/s][A
model-00003-of-00006.safetensors: 14%|█▍ | 688M/4.93G [00:10<00:58, 72.6MB/s][A[A[A
model-00004-of-00006.safetensors: 14%|█▍ | 720M/5.00G [00:10<01:02, 68.3MB/s][A[A[A[A
model-00002-of-00006.safetensors: 14%|█▍ | 704M/4.99G [00:10<01:00, 71.2MB/s][A[A
model-00001-of-00006.safetensors: 14%|█▎ | 672M/4.97G [00:10<00:56, 75.5MB/s][A
model-00003-of-00006.safetensors: 14%|█▍ | 704M/4.93G [00:10<00:58, 72.5MB/s][A[A[A
model-00004-of-00006.safetensors: 15%|█▍ | 736M/5.00G [00:10<01:00, 70.7MB/s][A[A[A[A
model-00005-of-00006.safetensors: 11%|█▏ | 560M/4.97G [00:10<01:28, 49.9MB/s][A[A[A[A[A[A
model-00002-of-00006.safetensors: 14%|█▍ | 720M/4.99G [00:10<01:01, 69.2MB/s][A[A
model-00003-of-00006.safetensors: 15%|█▍ | 720M/4.93G [00:10<00:56, 74.6MB/s][A[A[A
model-00004-of-00006.safetensors: 15%|█▌ | 752M/5.00G [00:10<01:00, 70.1MB/s][A[A[A[A
model-00001-of-00006.safetensors: 14%|█▍ | 688M/4.97G [00:10<01:05, 65.5MB/s][A
model-00005-of-00006.safetensors: 12%|█▏ | 576M/4.97G [00:10<01:20, 54.4MB/s][A[A[A[A[A[A
model-00002-of-00006.safetensors: 15%|█▍ | 736M/4.99G [00:11<01:00, 70.2MB/s][A[A
model-00001-of-00006.safetensors: 14%|█▍ | 704M/4.97G [00:11<01:02, 67.9MB/s][A
model-00004-of-00006.safetensors: 15%|█▌ | 768M/5.00G [00:11<01:00, 70.4MB/s][A[A[A[A
model-00005-of-00006.safetensors: 12%|█▏ | 592M/4.97G [00:11<01:19, 55.3MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 15%|█▍ | 736M/4.93G [00:11<01:12, 58.2MB/s][A[A[A
model-00002-of-00006.safetensors: 15%|█▌ | 752M/4.99G [00:11<01:00, 70.2MB/s][A[A
model-00004-of-00006.safetensors: 16%|█▌ | 784M/5.00G [00:11<00:59, 71.1MB/s][A[A[A[A
model-00001-of-00006.safetensors: 14%|█▍ | 720M/4.97G [00:11<01:04, 66.1MB/s][A
model-00003-of-00006.safetensors: 15%|█▌ | 752M/4.93G [00:11<01:08, 61.4MB/s][A[A[A
model-00005-of-00006.safetensors: 12%|█▏ | 608M/4.97G [00:11<01:16, 56.7MB/s][A[A[A[A[A[A
model-00002-of-00006.safetensors: 15%|█▌ | 768M/4.99G [00:11<01:01, 68.2MB/s][A[A
model-00004-of-00006.safetensors: 16%|█▌ | 800M/5.00G [00:11<00:59, 70.8MB/s][A[A[A[A
model-00001-of-00006.safetensors: 15%|█▍ | 736M/4.97G [00:11<01:04, 65.1MB/s][A
model-00002-of-00006.safetensors: 16%|█▌ | 784M/4.99G [00:11<00:57, 73.3MB/s][A[A
model-00003-of-00006.safetensors: 16%|█▌ | 768M/4.93G [00:11<01:03, 65.1MB/s][A[A[A
model-00005-of-00006.safetensors: 13%|█▎ | 624M/4.97G [00:11<01:14, 58.2MB/s][A[A[A[A[A[A
model-00002-of-00006.safetensors: 16%|█▌ | 800M/4.99G [00:11<00:56, 73.7MB/s][A[A
model-00003-of-00006.safetensors: 16%|█▌ | 784M/4.93G [00:11<01:01, 66.9MB/s][A[A[A
model-00001-of-00006.safetensors: 15%|█▌ | 752M/4.97G [00:11<01:07, 62.0MB/s][A
model-00004-of-00006.safetensors: 16%|█▋ | 816M/5.00G [00:11<01:08, 61.0MB/s][A[A[A[A
model-00005-of-00006.safetensors: 13%|█▎ | 640M/4.97G [00:12<01:15, 57.4MB/s][A[A[A[A[A[A
model-00002-of-00006.safetensors: 16%|█▋ | 816M/4.99G [00:12<00:56, 73.9MB/s][A[A
model-00003-of-00006.safetensors: 16%|█▌ | 800M/4.93G [00:12<00:58, 70.1MB/s][A[A[A
model-00001-of-00006.safetensors: 15%|█▌ | 768M/4.97G [00:12<01:04, 65.3MB/s][A
model-00004-of-00006.safetensors: 17%|█▋ | 832M/5.00G [00:12<01:05, 63.4MB/s][A[A[A[A
model-00005-of-00006.safetensors: 13%|█▎ | 656M/4.97G [00:12<01:09, 61.8MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 17%|█▋ | 816M/4.93G [00:12<00:57, 71.7MB/s][A[A[A
model-00002-of-00006.safetensors: 17%|█▋ | 832M/4.99G [00:12<00:59, 69.4MB/s][A[A
model-00001-of-00006.safetensors: 16%|█▌ | 784M/4.97G [00:12<01:02, 67.2MB/s][A
model-00004-of-00006.safetensors: 17%|█▋ | 848M/5.00G [00:12<01:05, 63.1MB/s][A[A[A[A
model-00001-of-00006.safetensors: 16%|█▌ | 798M/4.97G [00:12<00:52, 78.7MB/s][A
model-00005-of-00006.safetensors: 14%|█▎ | 672M/4.97G [00:12<01:08, 62.4MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 17%|█▋ | 832M/4.93G [00:12<00:59, 68.5MB/s][A[A[A
model-00002-of-00006.safetensors: 17%|█▋ | 848M/4.99G [00:12<00:58, 70.7MB/s][A[A
model-00001-of-00006.safetensors: 16%|█▋ | 807M/4.97G [00:12<01:00, 68.2MB/s][A
model-00004-of-00006.safetensors: 17%|█▋ | 864M/5.00G [00:12<01:05, 62.8MB/s][A[A[A[A
model-00005-of-00006.safetensors: 14%|█▍ | 688M/4.97G [00:12<01:11, 59.6MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 17%|█▋ | 848M/4.93G [00:12<01:00, 67.0MB/s][A[A[A
model-00002-of-00006.safetensors: 17%|█▋ | 864M/4.99G [00:12<01:00, 68.4MB/s][A[A
model-00001-of-00006.safetensors: 16%|█▋ | 816M/4.97G [00:12<01:07, 61.3MB/s][A
model-00004-of-00006.safetensors: 18%|█▊ | 880M/5.00G [00:12<01:01, 66.5MB/s][A[A[A[A
model-00005-of-00006.safetensors: 14%|█▍ | 704M/4.97G [00:12<01:04, 66.0MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 18%|█▊ | 864M/4.93G [00:13<00:58, 69.7MB/s][A[A[A
model-00002-of-00006.safetensors: 18%|█▊ | 880M/4.99G [00:13<00:58, 70.3MB/s][A[A
model-00001-of-00006.safetensors: 17%|█▋ | 832M/4.97G [00:13<01:07, 61.1MB/s][A
model-00004-of-00006.safetensors: 18%|█▊ | 896M/5.00G [00:13<01:05, 62.6MB/s][A[A[A[A
model-00005-of-00006.safetensors: 14%|█▍ | 720M/4.97G [00:13<01:03, 66.4MB/s][A[A[A[A[A[A
model-00002-of-00006.safetensors: 18%|█▊ | 896M/4.99G [00:13<00:54, 75.6MB/s][A[A
model-00003-of-00006.safetensors: 18%|█▊ | 880M/4.93G [00:13<00:59, 68.2MB/s][A[A[A
model-00001-of-00006.safetensors: 17%|█▋ | 848M/4.97G [00:13<01:05, 63.1MB/s][A
model-00004-of-00006.safetensors: 18%|█▊ | 912M/5.00G [00:13<01:03, 64.3MB/s][A[A[A[A
model-00002-of-00006.safetensors: 18%|█▊ | 912M/4.99G [00:13<00:58, 69.6MB/s][A[A
model-00005-of-00006.safetensors: 15%|█▍ | 736M/4.97G [00:13<01:11, 59.6MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 17%|█▋ | 864M/4.97G [00:13<01:02, 65.3MB/s][A
model-00004-of-00006.safetensors: 19%|█▊ | 928M/5.00G [00:13<01:01, 66.7MB/s][A[A[A[A
model-00003-of-00006.safetensors: 18%|█▊ | 896M/4.93G [00:13<01:10, 57.6MB/s][A[A[A
model-00002-of-00006.safetensors: 19%|█▊ | 928M/4.99G [00:13<00:54, 74.4MB/s][A[A
model-00005-of-00006.safetensors: 15%|█▌ | 752M/4.97G [00:13<01:07, 62.1MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 18%|█▊ | 880M/4.97G [00:13<00:58, 69.6MB/s][A
model-00004-of-00006.safetensors: 19%|█▉ | 944M/5.00G [00:13<00:55, 72.4MB/s][A[A[A[A
model-00002-of-00006.safetensors: 19%|█▉ | 944M/4.99G [00:13<00:52, 77.6MB/s][A[A
model-00003-of-00006.safetensors: 18%|█▊ | 912M/4.93G [00:13<01:04, 62.5MB/s][A[A[A
model-00005-of-00006.safetensors: 15%|█▌ | 768M/4.97G [00:14<01:05, 64.1MB/s][A[A[A[A[A[A
model-00004-of-00006.safetensors: 19%|█▉ | 960M/5.00G [00:14<00:54, 73.7MB/s][A[A[A[A
model-00001-of-00006.safetensors: 18%|█▊ | 896M/4.97G [00:14<00:58, 69.5MB/s][A
model-00002-of-00006.safetensors: 19%|█▉ | 960M/4.99G [00:14<00:51, 77.5MB/s][A[A
model-00003-of-00006.safetensors: 19%|█▉ | 928M/4.93G [00:14<01:00, 65.7MB/s][A[A[A
model-00001-of-00006.safetensors: 18%|█▊ | 912M/4.97G [00:14<00:56, 72.0MB/s][A
model-00005-of-00006.safetensors: 16%|█▌ | 784M/4.97G [00:14<01:06, 62.7MB/s][A[A[A[A[A[A
model-00004-of-00006.safetensors: 20%|█▉ | 976M/5.00G [00:14<00:56, 70.8MB/s][A[A[A[A
model-00003-of-00006.safetensors: 19%|█▉ | 944M/4.93G [00:14<00:59, 66.7MB/s][A[A[A
model-00002-of-00006.safetensors: 20%|█▉ | 976M/4.99G [00:14<00:54, 73.1MB/s][A[A
model-00001-of-00006.safetensors: 19%|█▊ | 928M/4.97G [00:14<00:54, 73.8MB/s][A
model-00004-of-00006.safetensors: 20%|█▉ | 992M/5.00G [00:14<00:54, 73.0MB/s][A[A[A[A
model-00005-of-00006.safetensors: 16%|█▌ | 800M/4.97G [00:14<01:04, 64.9MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 19%|█▉ | 960M/4.93G [00:14<00:57, 68.6MB/s][A[A[A
model-00002-of-00006.safetensors: 20%|█▉ | 992M/4.99G [00:14<00:54, 73.4MB/s][A[A
model-00001-of-00006.safetensors: 19%|█▉ | 944M/4.97G [00:14<00:53, 75.5MB/s][A
model-00004-of-00006.safetensors: 20%|██ | 1.01G/5.00G [00:14<00:55, 71.8MB/s][A[A[A[A
model-00005-of-00006.safetensors: 16%|█▋ | 816M/4.97G [00:14<01:04, 64.4MB/s][A[A[A[A[A[A
model-00002-of-00006.safetensors: 20%|██ | 1.01G/4.99G [00:14<00:53, 74.0MB/s][A[A
model-00003-of-00006.safetensors: 20%|█▉ | 976M/4.93G [00:14<00:56, 69.6MB/s][A[A[A
model-00001-of-00006.safetensors: 19%|█▉ | 960M/4.97G [00:14<00:51, 77.2MB/s][A
model-00004-of-00006.safetensors: 20%|██ | 1.02G/5.00G [00:14<00:54, 72.6MB/s][A[A[A[A
model-00002-of-00006.safetensors: 21%|██ | 1.02G/4.99G [00:14<00:53, 73.5MB/s][A[A
model-00003-of-00006.safetensors: 20%|██ | 992M/4.93G [00:14<00:55, 71.0MB/s][A[A[A
model-00005-of-00006.safetensors: 17%|█▋ | 832M/4.97G [00:14<01:03, 65.2MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 20%|█▉ | 976M/4.97G [00:15<00:51, 77.5MB/s][A
model-00004-of-00006.safetensors: 21%|██ | 1.04G/5.00G [00:15<00:57, 69.1MB/s][A[A[A[A
model-00003-of-00006.safetensors: 20%|██ | 1.01G/4.93G [00:15<00:54, 72.1MB/s][A[A[A
model-00002-of-00006.safetensors: 21%|██ | 1.04G/4.99G [00:15<00:54, 73.1MB/s][A[A
model-00005-of-00006.safetensors: 17%|█▋ | 848M/4.97G [00:15<01:00, 68.5MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 20%|█▉ | 992M/4.97G [00:15<00:50, 79.3MB/s][A
model-00003-of-00006.safetensors: 21%|██ | 1.02G/4.93G [00:15<00:53, 72.6MB/s][A[A[A
model-00005-of-00006.safetensors: 17%|█▋ | 864M/4.97G [00:15<00:58, 70.0MB/s][A[A[A[A[A[A
model-00004-of-00006.safetensors: 21%|██ | 1.06G/5.00G [00:15<00:57, 68.4MB/s][A[A[A[A
model-00001-of-00006.safetensors: 20%|██ | 1.01G/4.97G [00:15<00:51, 76.5MB/s][A
model-00002-of-00006.safetensors: 21%|██ | 1.06G/4.99G [00:15<01:07, 58.2MB/s][A[A
model-00003-of-00006.safetensors: 21%|██ | 1.04G/4.93G [00:15<00:53, 72.4MB/s][A[A[A
model-00001-of-00006.safetensors: 21%|██ | 1.02G/4.97G [00:15<00:50, 77.3MB/s][A
model-00005-of-00006.safetensors: 18%|█▊ | 880M/4.97G [00:15<01:01, 66.8MB/s][A[A[A[A[A[A
model-00002-of-00006.safetensors: 21%|██▏ | 1.07G/4.99G [00:15<01:05, 59.5MB/s][A[A
model-00001-of-00006.safetensors: 21%|██ | 1.04G/4.97G [00:15<00:50, 78.1MB/s][A
model-00004-of-00006.safetensors: 21%|██▏ | 1.07G/5.00G [00:16<01:24, 46.7MB/s][A[A[A[A
model-00001-of-00006.safetensors: 21%|██▏ | 1.06G/4.97G [00:16<00:48, 81.3MB/s][A
model-00003-of-00006.safetensors: 21%|██▏ | 1.06G/4.93G [00:16<01:18, 49.6MB/s][A[A[A
model-00004-of-00006.safetensors: 22%|██▏ | 1.09G/5.00G [00:16<01:13, 53.1MB/s][A[A[A[A
model-00002-of-00006.safetensors: 22%|██▏ | 1.09G/4.99G [00:16<01:13, 52.8MB/s][A[A
model-00001-of-00006.safetensors: 22%|██▏ | 1.07G/4.97G [00:16<00:49, 79.0MB/s][A
model-00004-of-00006.safetensors: 22%|██▏ | 1.10G/5.00G [00:16<01:19, 49.2MB/s][A[A[A[A
model-00002-of-00006.safetensors: 22%|██▏ | 1.10G/4.99G [00:16<01:24, 45.9MB/s][A[A
model-00005-of-00006.safetensors: 18%|█▊ | 896M/4.97G [00:16<01:49, 37.3MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 22%|██▏ | 1.08G/4.97G [00:16<01:08, 56.8MB/s][A
model-00004-of-00006.safetensors: 22%|██▏ | 1.12G/5.00G [00:16<01:02, 62.1MB/s][A[A[A[A
model-00002-of-00006.safetensors: 22%|██▏ | 1.10G/4.99G [00:16<01:22, 47.4MB/s][A[A
model-00001-of-00006.safetensors: 22%|██▏ | 1.09G/4.97G [00:16<01:11, 54.3MB/s][A
model-00005-of-00006.safetensors: 18%|█▊ | 912M/4.97G [00:16<01:33, 43.6MB/s][A[A[A[A[A[A
model-00002-of-00006.safetensors: 22%|██▏ | 1.12G/4.99G [00:16<01:11, 54.0MB/s][A[A
model-00004-of-00006.safetensors: 23%|██▎ | 1.13G/5.00G [00:16<01:12, 53.4MB/s][A[A[A[A
model-00001-of-00006.safetensors: 22%|██▏ | 1.10G/4.97G [00:17<01:02, 61.5MB/s][A
model-00005-of-00006.safetensors: 19%|█▊ | 928M/4.97G [00:16<01:21, 49.6MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 22%|██▏ | 1.07G/4.93G [00:17<01:54, 33.7MB/s][A[A[A
model-00004-of-00006.safetensors: 23%|██▎ | 1.14G/5.00G [00:17<01:09, 55.9MB/s][A[A[A[A
model-00002-of-00006.safetensors: 23%|██▎ | 1.14G/4.99G [00:17<01:05, 59.0MB/s][A[A
model-00005-of-00006.safetensors: 19%|█▉ | 944M/4.97G [00:17<01:13, 55.1MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 22%|██▏ | 1.09G/4.93G [00:17<01:33, 41.1MB/s][A[A[A
model-00004-of-00006.safetensors: 23%|██▎ | 1.15G/5.00G [00:17<00:58, 65.6MB/s][A[A[A[A
model-00001-of-00006.safetensors: 23%|██▎ | 1.12G/4.97G [00:17<01:03, 60.2MB/s][A
model-00002-of-00006.safetensors: 23%|██▎ | 1.15G/4.99G [00:17<01:00, 63.6MB/s][A[A
model-00005-of-00006.safetensors: 19%|█▉ | 960M/4.97G [00:17<01:05, 61.2MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 22%|██▏ | 1.10G/4.93G [00:17<01:20, 47.8MB/s][A[A[A
model-00004-of-00006.safetensors: 23%|██▎ | 1.17G/5.00G [00:17<00:56, 68.3MB/s][A[A[A[A
model-00001-of-00006.safetensors: 23%|██▎ | 1.14G/4.97G [00:17<01:00, 63.3MB/s][A
model-00003-of-00006.safetensors: 23%|██▎ | 1.12G/4.93G [00:17<01:13, 52.0MB/s][A[A[A
model-00001-of-00006.safetensors: 23%|██▎ | 1.15G/4.97G [00:17<00:56, 67.0MB/s][A
model-00004-of-00006.safetensors: 24%|██▎ | 1.18G/5.00G [00:17<01:01, 61.6MB/s][A[A[A[A
model-00002-of-00006.safetensors: 23%|██▎ | 1.17G/4.99G [00:17<01:17, 49.3MB/s][A[A
model-00003-of-00006.safetensors: 23%|██▎ | 1.14G/4.93G [00:17<01:06, 57.4MB/s][A[A[A
model-00001-of-00006.safetensors: 24%|██▎ | 1.17G/4.97G [00:17<00:55, 68.2MB/s][A
model-00004-of-00006.safetensors: 24%|██▍ | 1.20G/5.00G [00:17<00:58, 65.4MB/s][A[A[A[A
model-00002-of-00006.safetensors: 24%|██▎ | 1.18G/4.99G [00:18<01:09, 54.6MB/s][A[A
model-00005-of-00006.safetensors: 20%|█▉ | 976M/4.97G [00:18<01:36, 41.2MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 23%|██▎ | 1.15G/4.93G [00:18<01:02, 60.6MB/s][A[A[A
model-00001-of-00006.safetensors: 24%|██▍ | 1.18G/4.97G [00:18<00:58, 65.1MB/s][A
model-00004-of-00006.safetensors: 24%|██▍ | 1.22G/5.00G [00:18<00:57, 65.8MB/s][A[A[A[A
model-00005-of-00006.safetensors: 20%|█▉ | 992M/4.97G [00:18<01:23, 47.9MB/s][A[A[A[A[A[A
model-00002-of-00006.safetensors: 24%|██▍ | 1.20G/4.99G [00:18<01:09, 54.4MB/s][A[A
model-00003-of-00006.safetensors: 24%|██▎ | 1.17G/4.93G [00:18<01:00, 62.7MB/s][A[A[A
model-00001-of-00006.safetensors: 24%|██▍ | 1.20G/4.97G [00:18<00:56, 67.2MB/s][A
model-00004-of-00006.safetensors: 25%|██▍ | 1.23G/5.00G [00:18<00:55, 67.8MB/s][A[A[A[A
model-00005-of-00006.safetensors: 20%|██ | 1.01G/4.97G [00:18<01:11, 55.3MB/s][A[A[A[A[A[A
model-00002-of-00006.safetensors: 24%|██▍ | 1.22G/4.99G [00:18<01:01, 61.1MB/s][A[A
model-00003-of-00006.safetensors: 24%|██▍ | 1.18G/4.93G [00:18<00:56, 66.9MB/s][A[A[A
model-00001-of-00006.safetensors: 24%|██▍ | 1.22G/4.97G [00:18<00:53, 69.9MB/s][A
model-00004-of-00006.safetensors: 25%|██▍ | 1.25G/5.00G [00:18<00:56, 65.9MB/s][A[A[A[A
model-00005-of-00006.safetensors: 21%|██ | 1.02G/4.97G [00:18<01:08, 57.6MB/s][A[A[A[A[A[A
model-00002-of-00006.safetensors: 25%|██▍ | 1.23G/4.99G [00:18<00:58, 64.6MB/s][A[A
model-00003-of-00006.safetensors: 24%|██▍ | 1.20G/4.93G [00:18<00:55, 67.6MB/s][A[A[A
model-00001-of-00006.safetensors: 25%|██▍ | 1.23G/4.97G [00:18<00:50, 74.0MB/s][A
model-00004-of-00006.safetensors: 25%|██▌ | 1.26G/5.00G [00:18<00:55, 67.5MB/s][A[A[A[A
model-00005-of-00006.safetensors: 21%|██ | 1.04G/4.97G [00:18<01:05, 59.6MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 25%|██▍ | 1.22G/4.93G [00:19<00:58, 63.2MB/s][A[A[A
model-00002-of-00006.safetensors: 25%|██▌ | 1.25G/4.99G [00:19<01:04, 58.3MB/s][A[A
model-00001-of-00006.safetensors: 25%|██▌ | 1.25G/4.97G [00:19<00:55, 67.6MB/s][A
model-00004-of-00006.safetensors: 26%|██▌ | 1.28G/5.00G [00:19<00:53, 69.5MB/s][A[A[A[A
model-00002-of-00006.safetensors: 25%|██▌ | 1.26G/4.99G [00:19<00:58, 63.4MB/s][A[A
model-00001-of-00006.safetensors: 25%|██▌ | 1.26G/4.97G [00:19<00:52, 70.9MB/s][A
model-00003-of-00006.safetensors: 25%|██▍ | 1.23G/4.93G [00:19<00:55, 66.3MB/s][A[A[A
model-00005-of-00006.safetensors: 21%|██ | 1.06G/4.97G [00:19<01:09, 56.2MB/s][A[A[A[A[A[A
model-00004-of-00006.safetensors: 26%|██▌ | 1.30G/5.00G [00:19<00:53, 69.0MB/s][A[A[A[A
model-00003-of-00006.safetensors: 25%|██▌ | 1.25G/4.93G [00:19<00:51, 72.1MB/s][A[A[A
model-00002-of-00006.safetensors: 26%|██▌ | 1.28G/4.99G [00:19<00:56, 65.4MB/s][A[A
model-00001-of-00006.safetensors: 26%|██▌ | 1.28G/4.97G [00:19<00:52, 70.6MB/s][A
model-00005-of-00006.safetensors: 22%|██▏ | 1.07G/4.97G [00:19<01:04, 60.2MB/s][A[A[A[A[A[A
model-00004-of-00006.safetensors: 26%|██▌ | 1.31G/5.00G [00:19<00:53, 68.8MB/s][A[A[A[A
model-00003-of-00006.safetensors: 26%|██▌ | 1.26G/4.93G [00:19<00:50, 72.5MB/s][A[A[A
model-00001-of-00006.safetensors: 26%|██▌ | 1.30G/4.97G [00:19<00:51, 71.3MB/s][A
model-00002-of-00006.safetensors: 26%|██▌ | 1.30G/4.99G [00:19<00:59, 61.8MB/s][A[A
model-00005-of-00006.safetensors: 22%|██▏ | 1.09G/4.97G [00:19<01:04, 60.1MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 26%|██▌ | 1.28G/4.93G [00:19<00:51, 71.1MB/s][A[A[A
model-00004-of-00006.safetensors: 27%|██▋ | 1.33G/5.00G [00:19<00:59, 61.2MB/s][A[A[A[A
model-00001-of-00006.safetensors: 26%|██▋ | 1.31G/4.97G [00:19<00:51, 71.3MB/s][A
model-00002-of-00006.safetensors: 26%|██▋ | 1.31G/4.99G [00:20<00:55, 66.3MB/s][A[A
model-00005-of-00006.safetensors: 22%|██▏ | 1.10G/4.97G [00:19<01:00, 64.1MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 26%|██▋ | 1.30G/4.93G [00:20<00:49, 73.1MB/s][A[A[A
model-00004-of-00006.safetensors: 27%|██▋ | 1.34G/5.00G [00:20<00:56, 64.3MB/s][A[A[A[A
model-00001-of-00006.safetensors: 27%|██▋ | 1.33G/4.97G [00:20<00:52, 68.9MB/s][A
model-00005-of-00006.safetensors: 23%|██▎ | 1.12G/4.97G [00:20<00:57, 67.6MB/s][A[A[A[A[A[A
model-00002-of-00006.safetensors: 27%|██▋ | 1.33G/4.99G [00:20<00:54, 67.4MB/s][A[A
model-00003-of-00006.safetensors: 27%|██▋ | 1.31G/4.93G [00:20<00:49, 72.5MB/s][A[A[A
model-00004-of-00006.safetensors: 27%|██▋ | 1.36G/5.00G [00:20<00:54, 66.4MB/s][A[A[A[A
model-00002-of-00006.safetensors: 27%|██▋ | 1.34G/4.99G [00:20<00:53, 68.3MB/s][A[A
model-00001-of-00006.safetensors: 27%|██▋ | 1.34G/4.97G [00:20<00:54, 66.9MB/s][A
model-00005-of-00006.safetensors: 23%|██▎ | 1.14G/4.97G [00:20<00:58, 65.9MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 27%|██▋ | 1.33G/4.93G [00:20<00:52, 68.3MB/s][A[A[A
model-00004-of-00006.safetensors: 28%|██▊ | 1.38G/5.00G [00:20<00:53, 67.1MB/s][A[A[A[A
model-00002-of-00006.safetensors: 27%|██▋ | 1.36G/4.99G [00:20<00:53, 68.1MB/s][A[A
model-00005-of-00006.safetensors: 23%|██▎ | 1.15G/4.97G [00:20<00:57, 66.4MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 27%|██▋ | 1.34G/4.93G [00:20<00:51, 69.9MB/s][A[A[A
model-00004-of-00006.safetensors: 28%|██▊ | 1.39G/5.00G [00:20<00:52, 68.9MB/s][A[A[A[A
model-00005-of-00006.safetensors: 23%|██▎ | 1.17G/4.97G [00:20<00:52, 72.5MB/s][A[A[A[A[A[A
model-00002-of-00006.safetensors: 28%|██▊ | 1.38G/4.99G [00:20<00:51, 70.6MB/s][A[A
model-00004-of-00006.safetensors: 28%|██▊ | 1.41G/5.00G [00:21<00:51, 69.3MB/s][A[A[A[A
model-00001-of-00006.safetensors: 27%|██▋ | 1.36G/4.97G [00:21<01:19, 45.6MB/s][A
model-00002-of-00006.safetensors: 28%|██▊ | 1.39G/4.99G [00:21<00:49, 72.7MB/s][A[A
model-00003-of-00006.safetensors: 28%|██▊ | 1.36G/4.93G [00:21<00:55, 64.5MB/s][A[A[A
model-00005-of-00006.safetensors: 24%|██▍ | 1.18G/4.97G [00:21<00:54, 69.1MB/s][A[A[A[A[A[A
model-00004-of-00006.safetensors: 28%|██▊ | 1.42G/5.00G [00:21<00:50, 71.0MB/s][A[A[A[A
model-00001-of-00006.safetensors: 28%|██▊ | 1.38G/4.97G [00:21<01:09, 51.5MB/s][A
model-00003-of-00006.safetensors: 28%|██▊ | 1.38G/4.93G [00:21<00:53, 67.0MB/s][A[A[A
model-00005-of-00006.safetensors: 24%|██▍ | 1.20G/4.97G [00:21<00:54, 69.0MB/s][A[A[A[A[A[A
model-00002-of-00006.safetensors: 28%|██▊ | 1.41G/4.99G [00:21<00:52, 67.9MB/s][A[A
model-00001-of-00006.safetensors: 28%|██▊ | 1.39G/4.97G [00:21<01:05, 54.7MB/s][A
model-00004-of-00006.safetensors: 29%|██▉ | 1.44G/5.00G [00:21<00:52, 67.2MB/s][A[A[A[A
model-00002-of-00006.safetensors: 29%|██▊ | 1.42G/4.99G [00:21<00:51, 69.5MB/s][A[A
model-00003-of-00006.safetensors: 28%|██▊ | 1.39G/4.93G [00:21<00:54, 65.6MB/s][A[A[A
model-00005-of-00006.safetensors: 24%|██▍ | 1.22G/4.97G [00:21<00:53, 70.1MB/s][A[A[A[A[A[A
model-00004-of-00006.safetensors: 29%|██▉ | 1.46G/5.00G [00:21<00:51, 69.3MB/s][A[A[A[A
model-00002-of-00006.safetensors: 29%|██▉ | 1.44G/4.99G [00:21<00:49, 71.6MB/s][A[A
model-00001-of-00006.safetensors: 28%|██▊ | 1.41G/4.97G [00:21<01:02, 56.5MB/s][A
model-00005-of-00006.safetensors: 25%|██▍ | 1.23G/4.97G [00:21<00:54, 68.2MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 29%|██▊ | 1.41G/4.93G [00:21<00:56, 62.9MB/s][A[A[A
model-00001-of-00006.safetensors: 29%|██▊ | 1.42G/4.97G [00:22<00:56, 62.5MB/s][A
model-00002-of-00006.safetensors: 29%|██▉ | 1.46G/4.99G [00:22<00:48, 72.4MB/s][A[A
model-00005-of-00006.safetensors: 25%|██▌ | 1.25G/4.97G [00:22<00:51, 71.7MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 29%|██▉ | 1.42G/4.93G [00:22<00:53, 66.1MB/s][A[A[A
model-00001-of-00006.safetensors: 29%|██▉ | 1.44G/4.97G [00:22<00:52, 67.1MB/s][A
model-00002-of-00006.safetensors: 29%|██▉ | 1.47G/4.99G [00:22<00:47, 74.2MB/s][A[A
model-00005-of-00006.safetensors: 25%|██▌ | 1.26G/4.97G [00:22<00:50, 73.8MB/s][A[A[A[A[A[A
model-00004-of-00006.safetensors: 29%|██▉ | 1.47G/5.00G [00:22<01:12, 48.8MB/s][A[A[A[A
model-00001-of-00006.safetensors: 29%|██▉ | 1.46G/4.97G [00:22<00:50, 70.1MB/s][A
model-00002-of-00006.safetensors: 30%|██▉ | 1.49G/4.99G [00:22<00:46, 74.7MB/s][A[A
model-00003-of-00006.safetensors: 29%|██▉ | 1.44G/4.93G [00:22<01:03, 55.2MB/s][A[A[A
model-00005-of-00006.safetensors: 26%|██▌ | 1.28G/4.97G [00:22<00:52, 70.9MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 30%|██▉ | 1.47G/4.97G [00:22<00:49, 70.5MB/s][A
model-00002-of-00006.safetensors: 30%|███ | 1.50G/4.99G [00:22<00:47, 73.5MB/s][A[A
model-00005-of-00006.safetensors: 26%|██▌ | 1.30G/4.97G [00:22<00:49, 74.3MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 30%|██▉ | 1.46G/4.93G [00:22<01:09, 49.7MB/s][A[A[A
model-00001-of-00006.safetensors: 30%|██▉ | 1.49G/4.97G [00:22<00:51, 67.3MB/s][A
model-00005-of-00006.safetensors: 26%|██▋ | 1.31G/4.97G [00:22<00:49, 74.6MB/s][A[A[A[A[A[A
model-00002-of-00006.safetensors: 30%|███ | 1.52G/4.99G [00:22<00:48, 71.6MB/s][A[A
model-00005-of-00006.safetensors: 27%|██▋ | 1.33G/4.97G [00:23<00:48, 75.6MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 30%|██▉ | 1.47G/4.93G [00:23<01:03, 54.4MB/s][A[A[A
model-00002-of-00006.safetensors: 31%|███ | 1.54G/4.99G [00:23<00:49, 69.8MB/s][A[A
model-00001-of-00006.safetensors: 30%|███ | 1.50G/4.97G [00:23<00:55, 62.3MB/s][A
model-00004-of-00006.safetensors: 30%|██▉ | 1.49G/5.00G [00:23<01:49, 32.1MB/s][A[A[A[A
model-00002-of-00006.safetensors: 31%|███ | 1.55G/4.99G [00:23<00:45, 75.5MB/s][A[A
model-00003-of-00006.safetensors: 30%|███ | 1.49G/4.93G [00:23<01:00, 57.0MB/s][A[A[A
model-00005-of-00006.safetensors: 27%|██▋ | 1.34G/4.97G [00:23<00:55, 65.1MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 31%|███ | 1.52G/4.97G [00:23<00:57, 60.1MB/s][A
model-00004-of-00006.safetensors: 30%|███ | 1.50G/5.00G [00:23<01:34, 37.1MB/s][A[A[A[A
model-00003-of-00006.safetensors: 30%|███ | 1.50G/4.93G [00:23<00:53, 64.5MB/s][A[A[A
model-00005-of-00006.safetensors: 27%|██▋ | 1.36G/4.97G [00:23<00:54, 66.5MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 31%|███ | 1.54G/4.97G [00:23<00:52, 64.7MB/s][A
model-00004-of-00006.safetensors: 30%|███ | 1.52G/5.00G [00:23<01:19, 44.0MB/s][A[A[A[A
model-00002-of-00006.safetensors: 31%|███▏ | 1.57G/4.99G [00:23<00:59, 57.5MB/s][A[A
model-00003-of-00006.safetensors: 31%|███ | 1.52G/4.93G [00:23<00:50, 67.0MB/s][A[A[A
model-00005-of-00006.safetensors: 28%|██▊ | 1.38G/4.97G [00:23<00:53, 67.3MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 31%|███▏ | 1.55G/4.97G [00:23<00:51, 65.7MB/s][A
model-00004-of-00006.safetensors: 31%|███ | 1.54G/5.00G [00:23<01:09, 49.5MB/s][A[A[A[A
model-00003-of-00006.safetensors: 31%|███ | 1.54G/4.93G [00:23<00:48, 70.6MB/s][A[A[A
model-00002-of-00006.safetensors: 32%|███▏ | 1.58G/4.99G [00:23<00:55, 61.5MB/s][A[A
model-00005-of-00006.safetensors: 28%|██▊ | 1.39G/4.97G [00:24<00:51, 69.6MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 32%|███▏ | 1.57G/4.97G [00:24<00:49, 68.5MB/s][A
model-00002-of-00006.safetensors: 32%|███▏ | 1.60G/4.99G [00:24<00:51, 66.3MB/s][A[A
model-00003-of-00006.safetensors: 31%|███▏ | 1.55G/4.93G [00:24<00:47, 71.5MB/s][A[A[A
model-00004-of-00006.safetensors: 31%|███ | 1.55G/5.00G [00:24<01:13, 47.2MB/s][A[A[A[A
model-00005-of-00006.safetensors: 28%|██▊ | 1.41G/4.97G [00:24<00:50, 70.9MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 32%|███▏ | 1.58G/4.97G [00:24<00:47, 70.6MB/s][A
model-00002-of-00006.safetensors: 32%|███▏ | 1.62G/4.99G [00:24<00:48, 70.2MB/s][A[A
model-00003-of-00006.safetensors: 32%|███▏ | 1.57G/4.93G [00:24<00:47, 70.6MB/s][A[A[A
model-00005-of-00006.safetensors: 29%|██▊ | 1.42G/4.97G [00:24<00:49, 71.3MB/s][A[A[A[A[A[A
model-00004-of-00006.safetensors: 31%|███▏ | 1.57G/5.00G [00:24<01:06, 51.9MB/s][A[A[A[A
model-00001-of-00006.safetensors: 32%|███▏ | 1.60G/4.97G [00:24<00:47, 71.2MB/s][A
model-00002-of-00006.safetensors: 33%|███▎ | 1.63G/4.99G [00:24<00:48, 69.2MB/s][A[A
model-00003-of-00006.safetensors: 32%|███▏ | 1.58G/4.93G [00:24<00:48, 69.0MB/s][A[A[A
model-00004-of-00006.safetensors: 32%|███▏ | 1.58G/5.00G [00:24<00:58, 58.3MB/s][A[A[A[A
model-00001-of-00006.safetensors: 33%|███▎ | 1.62G/4.97G [00:24<00:47, 71.1MB/s][A
model-00002-of-00006.safetensors: 33%|███▎ | 1.65G/4.99G [00:24<00:46, 71.2MB/s][A[A
model-00005-of-00006.safetensors: 29%|██▉ | 1.44G/4.97G [00:24<00:55, 63.1MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 32%|███▏ | 1.60G/4.93G [00:24<00:48, 68.6MB/s][A[A[A
model-00004-of-00006.safetensors: 32%|███▏ | 1.60G/5.00G [00:24<00:55, 60.9MB/s][A[A[A[A
model-00002-of-00006.safetensors: 33%|███▎ | 1.66G/4.99G [00:25<00:48, 69.0MB/s][A[A
model-00005-of-00006.safetensors: 29%|██▉ | 1.46G/4.97G [00:25<00:52, 66.5MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 33%|███▎ | 1.62G/4.93G [00:25<00:47, 70.6MB/s][A[A[A
model-00001-of-00006.safetensors: 33%|███▎ | 1.63G/4.97G [00:25<00:52, 63.4MB/s][A
model-00004-of-00006.safetensors: 32%|███▏ | 1.62G/5.00G [00:25<00:51, 66.1MB/s][A[A[A[A
model-00002-of-00006.safetensors: 34%|███▎ | 1.68G/4.99G [00:25<00:46, 71.3MB/s][A[A
model-00003-of-00006.safetensors: 33%|███▎ | 1.63G/4.93G [00:25<00:44, 73.9MB/s][A[A[A
model-00001-of-00006.safetensors: 33%|███▎ | 1.65G/4.97G [00:25<00:50, 65.6MB/s][A
model-00005-of-00006.safetensors: 30%|██▉ | 1.47G/4.97G [00:25<00:56, 62.3MB/s][A[A[A[A[A[A
model-00004-of-00006.safetensors: 33%|███▎ | 1.63G/5.00G [00:25<00:50, 66.9MB/s][A[A[A[A
model-00002-of-00006.safetensors: 34%|███▍ | 1.70G/4.99G [00:25<00:45, 73.1MB/s][A[A
model-00003-of-00006.safetensors: 33%|███▎ | 1.65G/4.93G [00:25<00:46, 70.9MB/s][A[A[A
model-00001-of-00006.safetensors: 34%|███▎ | 1.66G/4.97G [00:25<00:48, 67.4MB/s][A
model-00005-of-00006.safetensors: 30%|██▉ | 1.49G/4.97G [00:25<00:53, 65.0MB/s][A[A[A[A[A[A
model-00004-of-00006.safetensors: 33%|███▎ | 1.65G/5.00G [00:25<00:51, 64.9MB/s][A[A[A[A
model-00002-of-00006.safetensors: 34%|███▍ | 1.71G/4.99G [00:25<00:44, 73.5MB/s][A[A
model-00003-of-00006.safetensors: 34%|███▎ | 1.66G/4.93G [00:25<00:43, 74.7MB/s][A[A[A
model-00001-of-00006.safetensors: 34%|███▍ | 1.68G/4.97G [00:25<00:48, 67.8MB/s][A
model-00005-of-00006.safetensors: 30%|███ | 1.50G/4.97G [00:25<00:50, 69.1MB/s][A[A[A[A[A[A
model-00002-of-00006.safetensors: 35%|███▍ | 1.73G/4.99G [00:25<00:43, 75.0MB/s][A[A
model-00004-of-00006.safetensors: 33%|███▎ | 1.66G/5.00G [00:25<00:55, 60.6MB/s][A[A[A[A
model-00003-of-00006.safetensors: 34%|███▍ | 1.68G/4.93G [00:25<00:46, 70.4MB/s][A[A[A
model-00001-of-00006.safetensors: 34%|███▍ | 1.70G/4.97G [00:26<00:46, 70.1MB/s][A
model-00002-of-00006.safetensors: 35%|███▍ | 1.74G/4.99G [00:26<00:43, 73.9MB/s][A[A
model-00005-of-00006.safetensors: 31%|███ | 1.52G/4.97G [00:26<00:58, 59.4MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 34%|███▍ | 1.70G/4.93G [00:26<00:44, 72.2MB/s][A[A[A
model-00001-of-00006.safetensors: 34%|███▍ | 1.71G/4.97G [00:26<00:45, 71.6MB/s][A
model-00004-of-00006.safetensors: 34%|███▎ | 1.68G/5.00G [00:26<00:54, 61.5MB/s][A[A[A[A
model-00002-of-00006.safetensors: 35%|███▌ | 1.76G/4.99G [00:26<00:42, 75.7MB/s][A[A
model-00005-of-00006.safetensors: 31%|███ | 1.54G/4.97G [00:26<00:55, 61.9MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 35%|███▍ | 1.71G/4.93G [00:26<00:43, 73.9MB/s][A[A[A
model-00001-of-00006.safetensors: 35%|███▍ | 1.73G/4.97G [00:26<00:44, 72.4MB/s][A
model-00004-of-00006.safetensors: 34%|███▍ | 1.70G/5.00G [00:26<00:51, 63.9MB/s][A[A[A[A
model-00002-of-00006.safetensors: 36%|███▌ | 1.78G/4.99G [00:26<00:43, 73.6MB/s][A[A
model-00005-of-00006.safetensors: 31%|███ | 1.55G/4.97G [00:26<00:51, 66.4MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 35%|███▌ | 1.73G/4.93G [00:26<00:42, 75.5MB/s][A[A[A
model-00004-of-00006.safetensors: 34%|███▍ | 1.71G/5.00G [00:26<00:47, 69.3MB/s][A[A[A[A
model-00001-of-00006.safetensors: 35%|███▌ | 1.74G/4.97G [00:26<00:47, 68.5MB/s][A
model-00005-of-00006.safetensors: 32%|███▏ | 1.57G/4.97G [00:26<00:49, 69.0MB/s][A[A[A[A[A[A
model-00002-of-00006.safetensors: 36%|███▌ | 1.79G/4.99G [00:26<00:50, 63.6MB/s][A[A
model-00001-of-00006.safetensors: 35%|███▌ | 1.76G/4.97G [00:26<00:45, 70.7MB/s][A
model-00003-of-00006.safetensors: 35%|███▌ | 1.74G/4.93G [00:26<00:48, 66.0MB/s][A[A[A
model-00004-of-00006.safetensors: 35%|███▍ | 1.73G/5.00G [00:26<00:51, 63.5MB/s][A[A[A[A
model-00002-of-00006.safetensors: 36%|███▌ | 1.81G/4.99G [00:27<00:47, 67.6MB/s][A[A
model-00003-of-00006.safetensors: 36%|███▌ | 1.76G/4.93G [00:27<00:45, 70.3MB/s][A[A[A
model-00001-of-00006.safetensors: 36%|███▌ | 1.78G/4.97G [00:27<00:47, 67.7MB/s][A
model-00004-of-00006.safetensors: 35%|███▍ | 1.74G/5.00G [00:27<00:50, 65.1MB/s][A[A[A[A
model-00002-of-00006.safetensors: 37%|███▋ | 1.82G/4.99G [00:27<00:47, 67.0MB/s][A[A
model-00004-of-00006.safetensors: 35%|███▌ | 1.76G/5.00G [00:27<00:47, 67.9MB/s][A[A[A[A
model-00001-of-00006.safetensors: 36%|███▌ | 1.79G/4.97G [00:27<00:47, 67.2MB/s][A
model-00003-of-00006.safetensors: 36%|███▌ | 1.78G/4.93G [00:27<00:49, 64.0MB/s][A[A[A
model-00002-of-00006.safetensors: 37%|███▋ | 1.84G/4.99G [00:27<00:45, 68.8MB/s][A[A
model-00005-of-00006.safetensors: 32%|███▏ | 1.58G/4.97G [00:27<01:24, 40.0MB/s][A[A[A[A[A[A
model-00004-of-00006.safetensors: 36%|███▌ | 1.78G/5.00G [00:27<00:46, 70.0MB/s][A[A[A[A
model-00003-of-00006.safetensors: 36%|███▋ | 1.79G/4.93G [00:27<00:47, 66.2MB/s][A[A[A
model-00002-of-00006.safetensors: 37%|███▋ | 1.86G/4.99G [00:27<00:43, 72.7MB/s][A[A
model-00001-of-00006.safetensors: 36%|███▋ | 1.81G/4.97G [00:27<00:53, 59.5MB/s][A
model-00005-of-00006.safetensors: 32%|███▏ | 1.60G/4.97G [00:27<01:13, 45.6MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 37%|███▋ | 1.81G/4.93G [00:27<00:46, 67.8MB/s][A[A[A
model-00002-of-00006.safetensors: 38%|███▊ | 1.87G/4.99G [00:27<00:40, 77.9MB/s][A[A
model-00001-of-00006.safetensors: 37%|███▋ | 1.82G/4.97G [00:28<00:53, 59.2MB/s][A
model-00002-of-00006.safetensors: 38%|███▊ | 1.89G/4.99G [00:28<00:40, 77.0MB/s][A[A
model-00005-of-00006.safetensors: 33%|███▎ | 1.62G/4.97G [00:28<01:14, 44.9MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 37%|███▋ | 1.84G/4.97G [00:28<00:48, 63.8MB/s][A
model-00002-of-00006.safetensors: 38%|███▊ | 1.90G/4.99G [00:28<00:39, 78.6MB/s][A[A
model-00005-of-00006.safetensors: 33%|███▎ | 1.63G/4.97G [00:28<01:05, 51.3MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 37%|███▋ | 1.82G/4.93G [00:28<01:06, 47.0MB/s][A[A[A
model-00004-of-00006.safetensors: 36%|███▌ | 1.79G/5.00G [00:28<01:25, 37.6MB/s][A[A[A[A
model-00001-of-00006.safetensors: 37%|███▋ | 1.86G/4.97G [00:28<00:49, 62.8MB/s][A
model-00002-of-00006.safetensors: 38%|███▊ | 1.92G/4.99G [00:28<00:39, 77.5MB/s][A[A
model-00005-of-00006.safetensors: 33%|███▎ | 1.65G/4.97G [00:28<00:58, 57.1MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 37%|███▋ | 1.84G/4.93G [00:28<00:58, 52.7MB/s][A[A[A
model-00002-of-00006.safetensors: 39%|███▉ | 1.94G/4.99G [00:28<00:40, 75.6MB/s][A[A
model-00001-of-00006.safetensors: 38%|███▊ | 1.87G/4.97G [00:28<00:50, 61.6MB/s][A
model-00005-of-00006.safetensors: 33%|███▎ | 1.66G/4.97G [00:28<00:53, 61.7MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 38%|███▊ | 1.86G/4.93G [00:28<00:52, 58.3MB/s][A[A[A
model-00004-of-00006.safetensors: 36%|███▌ | 1.81G/5.00G [00:28<01:26, 36.9MB/s][A[A[A[A
model-00002-of-00006.safetensors: 39%|███▉ | 1.95G/4.99G [00:28<00:40, 75.7MB/s][A[A
model-00001-of-00006.safetensors: 38%|███▊ | 1.89G/4.97G [00:29<00:48, 63.1MB/s][A
model-00005-of-00006.safetensors: 34%|███▍ | 1.68G/4.97G [00:29<00:51, 64.1MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 38%|███▊ | 1.87G/4.93G [00:29<00:51, 59.7MB/s][A[A[A
model-00004-of-00006.safetensors: 36%|███▋ | 1.82G/5.00G [00:29<01:13, 43.2MB/s][A[A[A[A
model-00002-of-00006.safetensors: 39%|███▉ | 1.97G/4.99G [00:29<00:39, 76.1MB/s][A[A
model-00001-of-00006.safetensors: 38%|███▊ | 1.90G/4.97G [00:29<00:46, 65.6MB/s][A
model-00005-of-00006.safetensors: 34%|███▍ | 1.70G/4.97G [00:29<00:49, 66.7MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 38%|███▊ | 1.89G/4.93G [00:29<00:48, 62.5MB/s][A[A[A
model-00004-of-00006.safetensors: 37%|███▋ | 1.84G/5.00G [00:29<01:03, 50.0MB/s][A[A[A[A
model-00002-of-00006.safetensors: 40%|███▉ | 1.98G/4.99G [00:29<00:40, 74.5MB/s][A[A
model-00001-of-00006.safetensors: 39%|███▊ | 1.92G/4.97G [00:29<00:46, 65.2MB/s][A
model-00005-of-00006.safetensors: 34%|███▍ | 1.71G/4.97G [00:29<00:46, 69.5MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 39%|███▊ | 1.90G/4.93G [00:29<00:45, 66.3MB/s][A[A[A
model-00004-of-00006.safetensors: 37%|███▋ | 1.86G/5.00G [00:29<00:57, 54.5MB/s][A[A[A[A
model-00002-of-00006.safetensors: 40%|████ | 2.00G/4.99G [00:29<00:44, 67.7MB/s][A[A
model-00001-of-00006.safetensors: 39%|███▉ | 1.94G/4.97G [00:29<00:45, 67.0MB/s][A
model-00005-of-00006.safetensors: 35%|███▍ | 1.73G/4.97G [00:29<00:46, 70.5MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 39%|███▉ | 1.92G/4.93G [00:29<00:43, 68.9MB/s][A[A[A
model-00004-of-00006.safetensors: 37%|███▋ | 1.87G/5.00G [00:29<00:55, 56.4MB/s][A[A[A[A
model-00005-of-00006.safetensors: 35%|███▌ | 1.74G/4.97G [00:29<00:45, 71.3MB/s][A[A[A[A[A[A
model-00002-of-00006.safetensors: 40%|████ | 2.02G/4.99G [00:29<00:47, 62.8MB/s][A[A
model-00004-of-00006.safetensors: 38%|███▊ | 1.89G/5.00G [00:30<00:49, 63.2MB/s][A[A[A[A
model-00003-of-00006.safetensors: 39%|███▉ | 1.94G/4.93G [00:30<00:45, 65.7MB/s][A[A[A
model-00005-of-00006.safetensors: 35%|███▌ | 1.76G/4.97G [00:30<00:43, 74.2MB/s][A[A[A[A[A[A
model-00002-of-00006.safetensors: 41%|████ | 2.03G/4.99G [00:30<00:43, 68.3MB/s][A[A
model-00004-of-00006.safetensors: 38%|███▊ | 1.90G/5.00G [00:30<00:46, 66.5MB/s][A[A[A[A
model-00003-of-00006.safetensors: 40%|███▉ | 1.95G/4.93G [00:30<00:44, 67.3MB/s][A[A[A
model-00005-of-00006.safetensors: 36%|███▌ | 1.78G/4.97G [00:30<00:47, 67.0MB/s][A[A[A[A[A[A
model-00002-of-00006.safetensors: 41%|████ | 2.05G/4.99G [00:30<00:43, 67.4MB/s][A[A
model-00004-of-00006.safetensors: 38%|███▊ | 1.92G/5.00G [00:30<00:44, 68.6MB/s][A[A[A[A
model-00003-of-00006.safetensors: 40%|███▉ | 1.97G/4.93G [00:30<00:44, 66.7MB/s][A[A[A
model-00005-of-00006.safetensors: 36%|███▌ | 1.79G/4.97G [00:30<00:45, 69.3MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 40%|████ | 1.98G/4.93G [00:30<00:41, 70.7MB/s][A[A[A
model-00004-of-00006.safetensors: 39%|███▊ | 1.94G/5.00G [00:30<00:48, 63.5MB/s][A[A[A[A
model-00002-of-00006.safetensors: 41%|████▏ | 2.06G/4.99G [00:30<00:52, 55.4MB/s][A[A
model-00005-of-00006.safetensors: 36%|███▋ | 1.81G/4.97G [00:30<00:48, 65.2MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 41%|████ | 2.00G/4.93G [00:30<00:40, 71.8MB/s][A[A[A
model-00004-of-00006.safetensors: 39%|███▉ | 1.95G/5.00G [00:30<00:46, 65.3MB/s][A[A[A[A
model-00002-of-00006.safetensors: 42%|████▏ | 2.08G/4.99G [00:31<00:49, 58.5MB/s][A[A
model-00005-of-00006.safetensors: 37%|███▋ | 1.82G/4.97G [00:31<00:50, 62.2MB/s][A[A[A[A[A[A
model-00004-of-00006.safetensors: 39%|███▉ | 1.97G/5.00G [00:31<00:50, 60.6MB/s][A[A[A[A
model-00002-of-00006.safetensors: 42%|████▏ | 2.10G/4.99G [00:31<00:48, 59.8MB/s][A[A
model-00005-of-00006.safetensors: 37%|███▋ | 1.84G/4.97G [00:31<00:51, 61.4MB/s][A[A[A[A[A[A
model-00004-of-00006.safetensors: 40%|███▉ | 1.98G/5.00G [00:31<00:49, 61.0MB/s][A[A[A[A
model-00002-of-00006.safetensors: 42%|████▏ | 2.11G/4.99G [00:31<00:48, 59.8MB/s][A[A
model-00005-of-00006.safetensors: 37%|███▋ | 1.86G/4.97G [00:31<00:48, 64.6MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 41%|████ | 2.02G/4.93G [00:31<01:14, 39.3MB/s][A[A[A
model-00004-of-00006.safetensors: 40%|████ | 2.00G/5.00G [00:31<00:46, 64.9MB/s][A[A[A[A
model-00002-of-00006.safetensors: 43%|████▎ | 2.13G/4.99G [00:31<00:45, 63.5MB/s][A[A
model-00005-of-00006.safetensors: 38%|███▊ | 1.87G/4.97G [00:31<00:46, 67.3MB/s][A[A[A[A[A[A
model-00004-of-00006.safetensors: 40%|████ | 2.02G/5.00G [00:31<00:44, 67.8MB/s][A[A[A[A
model-00003-of-00006.safetensors: 41%|████ | 2.03G/4.93G [00:31<01:03, 45.4MB/s][A[A[A
model-00002-of-00006.safetensors: 43%|████▎ | 2.14G/4.99G [00:32<00:45, 62.9MB/s][A[A
model-00005-of-00006.safetensors: 38%|███▊ | 1.89G/4.97G [00:32<00:45, 68.2MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 42%|████▏ | 2.05G/4.93G [00:32<00:55, 51.8MB/s][A[A[A
model-00004-of-00006.safetensors: 41%|████ | 2.03G/5.00G [00:32<00:43, 68.7MB/s][A[A[A[A
model-00002-of-00006.safetensors: 43%|████▎ | 2.16G/4.99G [00:32<00:43, 64.8MB/s][A[A
model-00003-of-00006.safetensors: 42%|████▏ | 2.06G/4.93G [00:32<00:49, 57.6MB/s][A[A[A
model-00004-of-00006.safetensors: 41%|████ | 2.05G/5.00G [00:32<00:41, 70.4MB/s][A[A[A[A
model-00002-of-00006.safetensors: 44%|████▎ | 2.18G/4.99G [00:32<00:41, 67.1MB/s][A[A
model-00003-of-00006.safetensors: 42%|████▏ | 2.08G/4.93G [00:32<00:46, 61.9MB/s][A[A[A
model-00004-of-00006.safetensors: 41%|████▏ | 2.06G/5.00G [00:32<00:41, 71.0MB/s][A[A[A[A
model-00002-of-00006.safetensors: 44%|████▍ | 2.19G/4.99G [00:32<00:39, 70.1MB/s][A[A
model-00003-of-00006.safetensors: 42%|████▏ | 2.10G/4.93G [00:32<00:43, 65.0MB/s][A[A[A
model-00004-of-00006.safetensors: 42%|████▏ | 2.08G/5.00G [00:32<00:41, 70.7MB/s][A[A[A[A
model-00002-of-00006.safetensors: 44%|████▍ | 2.21G/4.99G [00:32<00:38, 72.9MB/s][A[A
model-00005-of-00006.safetensors: 38%|███▊ | 1.90G/4.97G [00:32<01:20, 38.1MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 43%|████▎ | 2.11G/4.93G [00:33<00:41, 67.9MB/s][A[A[A
model-00004-of-00006.safetensors: 42%|████▏ | 2.10G/5.00G [00:33<00:40, 72.4MB/s][A[A[A[A
model-00002-of-00006.safetensors: 45%|████▍ | 2.22G/4.99G [00:33<00:38, 72.7MB/s][A[A
model-00005-of-00006.safetensors: 39%|███▊ | 1.92G/4.97G [00:33<01:08, 44.3MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 43%|████▎ | 2.13G/4.93G [00:33<00:40, 68.5MB/s][A[A[A
model-00002-of-00006.safetensors: 45%|████▍ | 2.24G/4.99G [00:33<00:36, 75.7MB/s][A[A
model-00004-of-00006.safetensors: 42%|████▏ | 2.11G/5.00G [00:33<00:41, 69.1MB/s][A[A[A[A
model-00005-of-00006.safetensors: 39%|███▉ | 1.94G/4.97G [00:33<01:00, 50.3MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 43%|████▎ | 2.14G/4.93G [00:33<00:41, 66.6MB/s][A[A[A
model-00004-of-00006.safetensors: 43%|████▎ | 2.13G/5.00G [00:33<00:40, 71.0MB/s][A[A[A[A
model-00002-of-00006.safetensors: 45%|████▌ | 2.26G/4.99G [00:33<00:37, 73.4MB/s][A[A
model-00003-of-00006.safetensors: 44%|████▍ | 2.16G/4.93G [00:33<00:40, 68.8MB/s][A[A[A
model-00004-of-00006.safetensors: 43%|████▎ | 2.14G/5.00G [00:33<00:40, 70.1MB/s][A[A[A[A
model-00002-of-00006.safetensors: 46%|████▌ | 2.27G/4.99G [00:33<00:37, 72.9MB/s][A[A
model-00001-of-00006.safetensors: 39%|███▉ | 1.95G/4.97G [00:33<04:25, 11.4MB/s][A
model-00003-of-00006.safetensors: 44%|████▍ | 2.18G/4.93G [00:33<00:38, 72.4MB/s][A[A[A
model-00001-of-00006.safetensors: 40%|███▉ | 1.97G/4.97G [00:34<03:16, 15.3MB/s][A
model-00002-of-00006.safetensors: 46%|████▌ | 2.29G/4.99G [00:34<00:41, 65.5MB/s][A[A
model-00003-of-00006.safetensors: 44%|████▍ | 2.19G/4.93G [00:34<00:37, 73.0MB/s][A[A[A
model-00002-of-00006.safetensors: 46%|████▌ | 2.30G/4.99G [00:34<00:38, 69.9MB/s][A[A
model-00001-of-00006.safetensors: 40%|███▉ | 1.98G/4.97G [00:34<02:32, 19.6MB/s][A
model-00003-of-00006.safetensors: 45%|████▍ | 2.21G/4.93G [00:34<00:37, 71.8MB/s][A[A[A
model-00004-of-00006.safetensors: 43%|████▎ | 2.16G/5.00G [00:34<01:04, 43.8MB/s][A[A[A[A
model-00002-of-00006.safetensors: 46%|████▋ | 2.32G/4.99G [00:34<00:37, 70.8MB/s][A[A
model-00001-of-00006.safetensors: 40%|████ | 2.00G/4.97G [00:34<01:57, 25.3MB/s][A
model-00003-of-00006.safetensors: 45%|████▌ | 2.22G/4.93G [00:34<00:36, 74.1MB/s][A[A[A
model-00004-of-00006.safetensors: 44%|████▎ | 2.18G/5.00G [00:34<01:04, 44.0MB/s][A[A[A[A
model-00002-of-00006.safetensors: 47%|████▋ | 2.34G/4.99G [00:34<00:42, 62.2MB/s][A[A
model-00001-of-00006.safetensors: 41%|████ | 2.02G/4.97G [00:34<01:38, 29.9MB/s][A
model-00003-of-00006.safetensors: 45%|████▌ | 2.23G/4.93G [00:34<00:48, 55.5MB/s][A[A[A
model-00002-of-00006.safetensors: 47%|████▋ | 2.35G/4.99G [00:34<00:36, 72.4MB/s][A[A
model-00001-of-00006.safetensors: 41%|████ | 2.03G/4.97G [00:35<01:20, 36.3MB/s][A
model-00004-of-00006.safetensors: 44%|████▍ | 2.19G/5.00G [00:35<00:57, 49.1MB/s][A[A[A[A
model-00002-of-00006.safetensors: 47%|████▋ | 2.36G/4.99G [00:35<00:39, 66.8MB/s][A[A
model-00003-of-00006.safetensors: 45%|████▌ | 2.24G/4.93G [00:35<00:56, 47.7MB/s][A[A[A
model-00004-of-00006.safetensors: 44%|████▍ | 2.21G/5.00G [00:35<00:50, 55.2MB/s][A[A[A[A
model-00001-of-00006.safetensors: 41%|████ | 2.05G/4.97G [00:35<01:09, 41.7MB/s][A
model-00002-of-00006.safetensors: 47%|████▋ | 2.37G/4.99G [00:35<00:46, 56.8MB/s][A[A
model-00004-of-00006.safetensors: 44%|████▍ | 2.22G/5.00G [00:35<00:47, 58.9MB/s][A[A[A[A
model-00001-of-00006.safetensors: 42%|████▏ | 2.06G/4.97G [00:35<01:01, 47.3MB/s][A
model-00002-of-00006.safetensors: 48%|████▊ | 2.38G/4.99G [00:35<00:46, 56.3MB/s][A[A
model-00004-of-00006.safetensors: 45%|████▍ | 2.24G/5.00G [00:35<00:43, 64.0MB/s][A[A[A[A
model-00003-of-00006.safetensors: 46%|████▌ | 2.26G/4.93G [00:35<01:08, 38.8MB/s][A[A[A
model-00001-of-00006.safetensors: 42%|████▏ | 2.08G/4.97G [00:35<00:54, 52.7MB/s][A
model-00002-of-00006.safetensors: 48%|████▊ | 2.40G/4.99G [00:35<00:42, 60.5MB/s][A[A
model-00003-of-00006.safetensors: 46%|████▌ | 2.27G/4.93G [00:35<00:57, 46.1MB/s][A[A[A
model-00004-of-00006.safetensors: 45%|████▌ | 2.26G/5.00G [00:35<00:41, 65.9MB/s][A[A[A[A
model-00001-of-00006.safetensors: 42%|████▏ | 2.10G/4.97G [00:36<00:51, 56.0MB/s][A
model-00002-of-00006.safetensors: 48%|████▊ | 2.42G/4.99G [00:36<00:40, 63.8MB/s][A[A
model-00004-of-00006.safetensors: 45%|████▌ | 2.27G/5.00G [00:36<00:38, 71.7MB/s][A[A[A[A
model-00003-of-00006.safetensors: 46%|████▋ | 2.29G/4.93G [00:36<00:50, 52.2MB/s][A[A[A
model-00001-of-00006.safetensors: 43%|████▎ | 2.11G/4.97G [00:36<00:45, 62.4MB/s][A
model-00004-of-00006.safetensors: 46%|████▌ | 2.29G/5.00G [00:36<00:36, 74.8MB/s][A[A[A[A
model-00002-of-00006.safetensors: 49%|████▊ | 2.43G/4.99G [00:36<00:38, 67.3MB/s][A[A
model-00003-of-00006.safetensors: 47%|████▋ | 2.30G/4.93G [00:36<00:44, 58.5MB/s][A[A[A
model-00001-of-00006.safetensors: 43%|████▎ | 2.13G/4.97G [00:36<00:43, 65.2MB/s][A
model-00004-of-00006.safetensors: 46%|████▌ | 2.30G/5.00G [00:36<00:36, 73.3MB/s][A[A[A[A
model-00002-of-00006.safetensors: 49%|████▉ | 2.45G/4.99G [00:36<00:37, 67.6MB/s][A[A
model-00003-of-00006.safetensors: 47%|████▋ | 2.32G/4.93G [00:36<00:43, 59.6MB/s][A[A[A
model-00001-of-00006.safetensors: 43%|████▎ | 2.14G/4.97G [00:36<00:43, 65.2MB/s][A
model-00002-of-00006.safetensors: 49%|████▉ | 2.46G/4.99G [00:36<00:36, 69.2MB/s][A[A
model-00004-of-00006.safetensors: 46%|████▋ | 2.32G/5.00G [00:36<00:38, 69.8MB/s][A[A[A[A
model-00001-of-00006.safetensors: 43%|████▎ | 2.16G/4.97G [00:36<00:39, 71.9MB/s][A
model-00002-of-00006.safetensors: 50%|████▉ | 2.48G/4.99G [00:36<00:33, 74.7MB/s][A[A
model-00004-of-00006.safetensors: 47%|████▋ | 2.34G/5.00G [00:36<00:38, 68.5MB/s][A[A[A[A
model-00001-of-00006.safetensors: 44%|████▍ | 2.18G/4.97G [00:37<00:37, 74.1MB/s][A
model-00002-of-00006.safetensors: 50%|█████ | 2.50G/4.99G [00:37<00:31, 79.0MB/s][A[A
model-00001-of-00006.safetensors: 44%|████▍ | 2.19G/4.97G [00:37<00:36, 75.6MB/s][A
model-00004-of-00006.safetensors: 47%|████▋ | 2.35G/5.00G [00:37<00:38, 69.1MB/s][A[A[A[A
model-00002-of-00006.safetensors: 50%|█████ | 2.51G/4.99G [00:37<00:32, 77.2MB/s][A[A
model-00001-of-00006.safetensors: 44%|████▍ | 2.21G/4.97G [00:37<00:37, 73.5MB/s][A
model-00002-of-00006.safetensors: 51%|█████ | 2.53G/4.99G [00:37<00:33, 73.3MB/s][A[A
model-00004-of-00006.safetensors: 47%|████▋ | 2.37G/5.00G [00:37<00:43, 60.0MB/s][A[A[A[A
model-00005-of-00006.safetensors: 39%|███▉ | 1.95G/4.97G [00:37<04:40, 10.8MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 45%|████▍ | 2.22G/4.97G [00:37<00:37, 73.4MB/s][A
model-00002-of-00006.safetensors: 51%|█████ | 2.54G/4.99G [00:37<00:32, 74.7MB/s][A[A
model-00004-of-00006.safetensors: 48%|████▊ | 2.38G/5.00G [00:37<00:41, 63.6MB/s][A[A[A[A
model-00005-of-00006.safetensors: 40%|███▉ | 1.97G/4.97G [00:37<03:27, 14.5MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 45%|████▌ | 2.24G/4.97G [00:37<00:36, 75.0MB/s][A
model-00002-of-00006.safetensors: 51%|█████▏ | 2.56G/4.99G [00:37<00:32, 73.8MB/s][A[A
model-00005-of-00006.safetensors: 40%|███▉ | 1.98G/4.97G [00:37<02:35, 19.2MB/s][A[A[A[A[A[A
model-00004-of-00006.safetensors: 48%|████▊ | 2.40G/5.00G [00:38<00:41, 62.0MB/s][A[A[A[A
model-00002-of-00006.safetensors: 52%|█████▏ | 2.58G/4.99G [00:38<00:33, 72.5MB/s][A[A
model-00005-of-00006.safetensors: 40%|████ | 2.00G/4.97G [00:38<02:00, 24.7MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 45%|████▌ | 2.26G/4.97G [00:38<00:45, 59.1MB/s][A
model-00004-of-00006.safetensors: 48%|████▊ | 2.42G/5.00G [00:38<00:39, 66.2MB/s][A[A[A[A
model-00005-of-00006.safetensors: 41%|████ | 2.02G/4.97G [00:38<01:34, 31.2MB/s][A[A[A[A[A[A
model-00002-of-00006.safetensors: 52%|█████▏ | 2.59G/4.99G [00:38<00:33, 72.5MB/s][A[A
model-00004-of-00006.safetensors: 49%|████▊ | 2.43G/5.00G [00:38<00:38, 67.0MB/s][A[A[A[A
model-00001-of-00006.safetensors: 46%|████▌ | 2.27G/4.97G [00:38<00:44, 61.0MB/s][A
model-00005-of-00006.safetensors: 41%|████ | 2.03G/4.97G [00:38<01:18, 37.5MB/s][A[A[A[A[A[A
model-00002-of-00006.safetensors: 52%|█████▏ | 2.61G/4.99G [00:38<00:33, 70.4MB/s][A[A
model-00004-of-00006.safetensors: 49%|████▉ | 2.45G/5.00G [00:38<00:35, 72.1MB/s][A[A[A[A
model-00001-of-00006.safetensors: 46%|████▌ | 2.29G/4.97G [00:38<00:42, 63.7MB/s][A
model-00005-of-00006.safetensors: 41%|████ | 2.05G/4.97G [00:38<01:06, 44.3MB/s][A[A[A[A[A[A
model-00002-of-00006.safetensors: 53%|█████▎ | 2.62G/4.99G [00:38<00:32, 73.2MB/s][A[A
model-00004-of-00006.safetensors: 49%|████▉ | 2.46G/5.00G [00:38<00:35, 71.9MB/s][A[A[A[A
model-00001-of-00006.safetensors: 46%|████▋ | 2.30G/4.97G [00:38<00:40, 65.8MB/s][A
model-00005-of-00006.safetensors: 42%|████▏ | 2.06G/4.97G [00:39<00:56, 51.1MB/s][A[A[A[A[A[A
model-00002-of-00006.safetensors: 53%|█████▎ | 2.64G/4.99G [00:39<00:33, 71.0MB/s][A[A
model-00004-of-00006.safetensors: 50%|████▉ | 2.48G/5.00G [00:39<00:35, 71.7MB/s][A[A[A[A
model-00001-of-00006.safetensors: 47%|████▋ | 2.32G/4.97G [00:39<00:38, 68.6MB/s][A
model-00004-of-00006.safetensors: 50%|████▉ | 2.49G/5.00G [00:39<00:30, 82.8MB/s][A[A[A[A
model-00005-of-00006.safetensors: 42%|████▏ | 2.08G/4.97G [00:39<00:51, 55.7MB/s][A[A[A[A[A[A
model-00002-of-00006.safetensors: 53%|█████▎ | 2.66G/4.99G [00:39<00:32, 71.4MB/s][A[A
model-00004-of-00006.safetensors: 50%|█████ | 2.50G/5.00G [00:39<00:34, 73.3MB/s][A[A[A[A
model-00001-of-00006.safetensors: 47%|████▋ | 2.34G/4.97G [00:39<00:39, 66.0MB/s][A
model-00005-of-00006.safetensors: 42%|████▏ | 2.10G/4.97G [00:39<00:48, 59.1MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 47%|████▋ | 2.35G/4.97G [00:39<00:37, 70.2MB/s][A
model-00004-of-00006.safetensors: 50%|█████ | 2.51G/5.00G [00:39<00:40, 61.4MB/s][A[A[A[A
model-00005-of-00006.safetensors: 42%|████▏ | 2.11G/4.97G [00:39<00:45, 63.0MB/s][A[A[A[A[A[A
[tqdm download-progress redraws condensed: ~700 interleaved updates for five checkpoint shards, ANSI cursor-up sequences ([A) stripped; model-00006 does not report in this span. Last reported state of each shard:]
model-00001-of-00006.safetensors:  89%|████████▊ | 4.40G/4.97G [01:18<00:07, 71.9MB/s]
model-00002-of-00006.safetensors:  98%|█████████▊| 4.90G/4.99G [01:15<00:01, 57.1MB/s]
model-00003-of-00006.safetensors:  91%|█████████▏| 4.51G/4.93G [01:18<00:07, 57.5MB/s]
model-00004-of-00006.safetensors:  97%|█████████▋| 4.85G/5.00G [01:16<00:02, 73.2MB/s]
model-00005-of-00006.safetensors:  90%|████████▉ | 4.46G/4.97G [01:17<00:07, 72.4MB/s]
model-00005-of-00006.safetensors: 90%|█████████ | 4.48G/4.97G [01:18<00:16, 30.7MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 89%|████████▉ | 4.42G/4.97G [01:18<00:08, 61.5MB/s][A
model-00003-of-00006.safetensors: 92%|█████████▏| 4.53G/4.93G [01:18<00:06, 61.7MB/s][A[A[A
model-00004-of-00006.safetensors: 97%|█████████▋| 4.86G/5.00G [01:18<00:06, 20.8MB/s][A[A[A[A
model-00001-of-00006.safetensors: 89%|████████▉ | 4.43G/4.97G [01:18<00:08, 63.3MB/s][A
model-00005-of-00006.safetensors: 90%|█████████ | 4.50G/4.97G [01:18<00:13, 34.3MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 92%|█████████▏| 4.54G/4.93G [01:18<00:06, 62.3MB/s][A[A[A
model-00004-of-00006.safetensors: 98%|█████████▊| 4.88G/5.00G [01:19<00:04, 26.7MB/s][A[A[A[A
model-00001-of-00006.safetensors: 90%|████████▉ | 4.45G/4.97G [01:19<00:07, 65.1MB/s][A
model-00003-of-00006.safetensors: 92%|█████████▏| 4.56G/4.93G [01:19<00:05, 67.9MB/s][A[A[A
model-00005-of-00006.safetensors: 91%|█████████ | 4.51G/4.97G [01:19<00:11, 40.9MB/s][A[A[A[A[A[A
model-00004-of-00006.safetensors: 98%|█████████▊| 4.90G/5.00G [01:19<00:03, 33.0MB/s][A[A[A[A
model-00001-of-00006.safetensors: 90%|████████▉ | 4.46G/4.97G [01:19<00:07, 65.3MB/s][A
model-00002-of-00006.safetensors: 98%|█████████▊| 4.91G/4.99G [01:19<00:06, 12.4MB/s][A[A
model-00005-of-00006.safetensors: 91%|█████████ | 4.53G/4.97G [01:19<00:10, 43.5MB/s][A[A[A[A[A[A
model-00004-of-00006.safetensors: 98%|█████████▊| 4.91G/5.00G [01:19<00:02, 39.7MB/s][A[A[A[A
model-00001-of-00006.safetensors: 90%|█████████ | 4.48G/4.97G [01:19<00:07, 69.0MB/s][A
model-00002-of-00006.safetensors: 99%|█████████▊| 4.93G/4.99G [01:19<00:03, 16.5MB/s][A[A
model-00005-of-00006.safetensors: 91%|█████████▏| 4.54G/4.97G [01:19<00:09, 46.4MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 91%|█████████ | 4.50G/4.97G [01:19<00:06, 72.3MB/s][A
model-00002-of-00006.safetensors: 99%|█████████▉| 4.94G/4.99G [01:19<00:02, 21.4MB/s][A[A
model-00001-of-00006.safetensors: 91%|█████████ | 4.51G/4.97G [01:20<00:06, 71.1MB/s][A
model-00003-of-00006.safetensors: 93%|█████████▎| 4.58G/4.93G [01:20<00:09, 35.9MB/s][A[A[A
model-00002-of-00006.safetensors: 99%|█████████▉| 4.96G/4.99G [01:20<00:01, 26.9MB/s][A[A
model-00001-of-00006.safetensors: 91%|█████████ | 4.53G/4.97G [01:20<00:06, 71.7MB/s][A
model-00005-of-00006.safetensors: 92%|█████████▏| 4.56G/4.97G [01:20<00:10, 41.1MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 93%|█████████▎| 4.59G/4.93G [01:20<00:07, 42.7MB/s][A[A[A
model-00002-of-00006.safetensors: 100%|█████████▉| 4.98G/4.99G [01:20<00:00, 32.9MB/s][A[A
model-00005-of-00006.safetensors: 92%|█████████▏| 4.58G/4.97G [01:20<00:08, 49.3MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 92%|█████████▏| 4.54G/4.97G [01:20<00:05, 73.2MB/s][A
model-00002-of-00006.safetensors: 100%|██████████| 4.99G/4.99G [01:20<00:00, 62.0MB/s]
model-00003-of-00006.safetensors: 93%|█████████▎| 4.61G/4.93G [01:20<00:06, 49.2MB/s][A[A[A
model-00005-of-00006.safetensors: 92%|█████████▏| 4.59G/4.97G [01:20<00:06, 55.4MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 92%|█████████▏| 4.56G/4.97G [01:20<00:05, 73.2MB/s][A
model-00006-of-00006.safetensors: 0%| | 0.00/3.32G [00:00, ?B/s][A[A
model-00003-of-00006.safetensors: 94%|█████████▎| 4.62G/4.93G [01:20<00:05, 53.8MB/s][A[A[A
model-00001-of-00006.safetensors: 92%|█████████▏| 4.58G/4.97G [01:20<00:05, 72.7MB/s][A
model-00006-of-00006.safetensors: 0%| | 16.0M/3.32G [00:00<00:48, 68.2MB/s][A[A
model-00003-of-00006.safetensors: 94%|█████████▍| 4.64G/4.93G [01:20<00:04, 58.8MB/s][A[A[A
model-00006-of-00006.safetensors: 1%| | 32.0M/3.32G [00:00<00:48, 67.7MB/s][A[A
model-00003-of-00006.safetensors: 94%|█████████▍| 4.66G/4.93G [01:21<00:04, 61.4MB/s][A[A[A
model-00001-of-00006.safetensors: 92%|█████████▏| 4.59G/4.97G [01:21<00:05, 64.0MB/s][A
model-00003-of-00006.safetensors: 95%|█████████▍| 4.67G/4.93G [01:21<00:03, 67.3MB/s][A[A[A
model-00006-of-00006.safetensors: 1%|▏ | 48.0M/3.32G [00:00<00:49, 66.0MB/s][A[A
model-00001-of-00006.safetensors: 93%|█████████▎| 4.61G/4.97G [01:21<00:05, 68.0MB/s][A
model-00005-of-00006.safetensors: 93%|█████████▎| 4.61G/4.97G [01:21<00:10, 35.7MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 95%|█████████▌| 4.69G/4.93G [01:21<00:03, 69.5MB/s][A[A[A
model-00001-of-00006.safetensors: 93%|█████████▎| 4.62G/4.97G [01:21<00:04, 73.6MB/s][A
model-00006-of-00006.safetensors: 2%|▏ | 64.0M/3.32G [00:00<00:48, 66.5MB/s][A[A
model-00003-of-00006.safetensors: 95%|█████████▌| 4.70G/4.93G [01:21<00:03, 71.3MB/s][A[A[A
model-00001-of-00006.safetensors: 93%|█████████▎| 4.64G/4.97G [01:21<00:04, 74.5MB/s][A
model-00005-of-00006.safetensors: 93%|█████████▎| 4.62G/4.97G [01:21<00:09, 38.2MB/s][A[A[A[A[A[A
model-00006-of-00006.safetensors: 2%|▏ | 80.0M/3.32G [00:01<00:44, 72.7MB/s][A[A
model-00003-of-00006.safetensors: 96%|█████████▌| 4.72G/4.93G [01:21<00:03, 70.7MB/s][A[A[A
model-00001-of-00006.safetensors: 94%|█████████▍| 4.66G/4.97G [01:22<00:04, 72.4MB/s][A
model-00006-of-00006.safetensors: 3%|▎ | 96.0M/3.32G [00:01<00:45, 71.0MB/s][A[A
model-00005-of-00006.safetensors: 93%|█████████▎| 4.64G/4.97G [01:22<00:07, 41.9MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 96%|█████████▌| 4.74G/4.93G [01:22<00:02, 71.7MB/s][A[A[A
model-00006-of-00006.safetensors: 3%|▎ | 112M/3.32G [00:01<00:44, 71.2MB/s] [A[A
model-00001-of-00006.safetensors: 94%|█████████▍| 4.67G/4.97G [01:22<00:04, 68.8MB/s][A
model-00005-of-00006.safetensors: 94%|█████████▎| 4.66G/4.97G [01:22<00:06, 47.9MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 96%|█████████▋| 4.75G/4.93G [01:22<00:02, 74.9MB/s][A[A[A
model-00006-of-00006.safetensors: 4%|▍ | 128M/3.32G [00:01<00:43, 73.3MB/s][A[A
model-00001-of-00006.safetensors: 94%|█████████▍| 4.69G/4.97G [01:22<00:03, 71.1MB/s][A
model-00005-of-00006.safetensors: 94%|█████████▍| 4.67G/4.97G [01:22<00:05, 54.7MB/s][A[A[A[A[A[A
model-00006-of-00006.safetensors: 4%|▍ | 144M/3.32G [00:02<00:42, 73.8MB/s][A[A
model-00003-of-00006.safetensors: 97%|█████████▋| 4.77G/4.93G [01:22<00:02, 69.9MB/s][A[A[A
model-00005-of-00006.safetensors: 94%|█████████▍| 4.69G/4.97G [01:22<00:04, 60.3MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 95%|█████████▍| 4.70G/4.97G [01:22<00:03, 69.2MB/s][A
model-00004-of-00006.safetensors: 99%|█████████▊| 4.93G/5.00G [01:22<00:05, 12.4MB/s][A[A[A[A
model-00003-of-00006.safetensors: 97%|█████████▋| 4.78G/4.93G [01:22<00:02, 72.6MB/s][A[A[A
model-00006-of-00006.safetensors: 5%|▍ | 160M/3.32G [00:02<00:42, 74.6MB/s][A[A
model-00005-of-00006.safetensors: 95%|█████████▍| 4.70G/4.97G [01:22<00:04, 63.7MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 95%|█████████▌| 4.72G/4.97G [01:22<00:03, 71.7MB/s][A
model-00003-of-00006.safetensors: 97%|█████████▋| 4.80G/4.93G [01:23<00:01, 78.0MB/s][A[A[A
model-00006-of-00006.safetensors: 5%|▌ | 176M/3.32G [00:02<00:41, 74.8MB/s][A[A
model-00005-of-00006.safetensors: 95%|█████████▍| 4.72G/4.97G [01:23<00:03, 65.7MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 95%|█████████▌| 4.74G/4.97G [01:23<00:03, 69.3MB/s][A
model-00003-of-00006.safetensors: 98%|█████████▊| 4.82G/4.93G [01:23<00:01, 79.3MB/s][A[A[A
model-00006-of-00006.safetensors: 6%|▌ | 192M/3.32G [00:02<00:42, 72.7MB/s][A[A
model-00001-of-00006.safetensors: 96%|█████████▌| 4.75G/4.97G [01:23<00:02, 71.4MB/s][A
model-00003-of-00006.safetensors: 98%|█████████▊| 4.83G/4.93G [01:23<00:01, 78.0MB/s][A[A[A
model-00006-of-00006.safetensors: 6%|▋ | 208M/3.32G [00:02<00:43, 72.1MB/s][A[A
model-00005-of-00006.safetensors: 95%|█████████▌| 4.74G/4.97G [01:23<00:04, 48.4MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 96%|█████████▌| 4.77G/4.97G [01:23<00:03, 65.4MB/s][A
model-00003-of-00006.safetensors: 98%|█████████▊| 4.85G/4.93G [01:23<00:01, 68.9MB/s][A[A[A
model-00005-of-00006.safetensors: 96%|█████████▌| 4.75G/4.97G [01:23<00:04, 54.2MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 96%|█████████▋| 4.78G/4.97G [01:23<00:02, 67.9MB/s][A
model-00003-of-00006.safetensors: 99%|█████████▊| 4.86G/4.93G [01:23<00:00, 72.5MB/s][A[A[A
model-00004-of-00006.safetensors: 99%|█████████▉| 4.94G/5.00G [01:23<00:04, 12.8MB/s][A[A[A[A
model-00001-of-00006.safetensors: 97%|█████████▋| 4.80G/4.97G [01:24<00:02, 70.9MB/s][A
model-00005-of-00006.safetensors: 96%|█████████▌| 4.77G/4.97G [01:24<00:03, 53.1MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 99%|█████████▉| 4.88G/4.93G [01:24<00:00, 68.7MB/s][A[A[A
model-00004-of-00006.safetensors: 99%|█████████▉| 4.96G/5.00G [01:24<00:02, 16.8MB/s][A[A[A[A
model-00001-of-00006.safetensors: 97%|█████████▋| 4.82G/4.97G [01:24<00:02, 70.8MB/s][A
model-00005-of-00006.safetensors: 96%|█████████▌| 4.78G/4.97G [01:24<00:03, 58.8MB/s][A[A[A[A[A[A
model-00004-of-00006.safetensors: 100%|█████████▉| 4.98G/5.00G [01:24<00:01, 21.7MB/s][A[A[A[A
model-00006-of-00006.safetensors: 7%|▋ | 224M/3.32G [00:03<01:25, 36.2MB/s][A[A
model-00001-of-00006.safetensors: 97%|█████████▋| 4.83G/4.97G [01:24<00:01, 69.5MB/s][A
model-00005-of-00006.safetensors: 97%|█████████▋| 4.80G/4.97G [01:24<00:02, 61.4MB/s][A[A[A[A[A[A
model-00004-of-00006.safetensors: 100%|█████████▉| 4.99G/5.00G [01:24<00:00, 27.6MB/s][A[A[A[A
model-00006-of-00006.safetensors: 7%|▋ | 240M/3.32G [00:04<01:13, 41.8MB/s][A[A
model-00001-of-00006.safetensors: 98%|█████████▊| 4.85G/4.97G [01:24<00:01, 64.4MB/s][A
model-00004-of-00006.safetensors: 100%|██████████| 5.00G/5.00G [01:24<00:00, 58.9MB/s]
model-00003-of-00006.safetensors: 99%|█████████▉| 4.90G/4.93G [01:24<00:00, 40.6MB/s][A[A[A
model-00006-of-00006.safetensors: 8%|▊ | 256M/3.32G [00:04<01:04, 47.5MB/s][A[A
rng_state_0.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A
model-00001-of-00006.safetensors: 98%|█████████▊| 4.86G/4.97G [01:25<00:01, 68.1MB/s][A
rng_state_0.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 205kB/s]
model-00003-of-00006.safetensors: 100%|█████████▉| 4.91G/4.93G [01:25<00:00, 47.9MB/s][A[A[A
model-00006-of-00006.safetensors: 8%|▊ | 272M/3.32G [00:04<00:57, 53.2MB/s][A[A
rng_state_1.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A
model-00001-of-00006.safetensors: 98%|█████████▊| 4.88G/4.97G [01:25<00:01, 69.0MB/s][A
rng_state_1.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 153kB/s]
model-00005-of-00006.safetensors: 97%|█████████▋| 4.82G/4.97G [01:25<00:03, 39.5MB/s][A[A[A[A[A[A
model-00003-of-00006.safetensors: 100%|█████████▉| 4.93G/4.93G [01:25<00:00, 53.2MB/s][A[A[A
model-00006-of-00006.safetensors: 9%|▊ | 288M/3.32G [00:04<00:52, 57.2MB/s][A[A
rng_state_10.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A
rng_state_10.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 271kB/s]
model-00003-of-00006.safetensors: 100%|██████████| 4.93G/4.93G [01:25<00:00, 57.7MB/s]
model-00001-of-00006.safetensors: 99%|█████████▊| 4.90G/4.97G [01:25<00:01, 65.5MB/s][A
model-00005-of-00006.safetensors: 97%|█████████▋| 4.83G/4.97G [01:25<00:02, 47.0MB/s][A[A[A[A[A[A
rng_state_100.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
model-00006-of-00006.safetensors: 9%|▉ | 304M/3.32G [00:05<00:49, 60.6MB/s][A[A
rng_state_100.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 271kB/s]
model-00005-of-00006.safetensors: 98%|█████████▊| 4.85G/4.97G [01:25<00:02, 54.4MB/s][A[A[A[A[A[A
rng_state_101.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
model-00006-of-00006.safetensors: 10%|▉ | 320M/3.32G [00:05<00:45, 66.1MB/s][A[A
rng_state_101.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 188kB/s]
rng_state_102.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
model-00001-of-00006.safetensors: 99%|█████████▉| 4.91G/4.97G [01:25<00:00, 58.4MB/s][A
rng_state_102.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 301kB/s]
rng_state_103.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
model-00005-of-00006.safetensors: 98%|█████████▊| 4.86G/4.97G [01:25<00:01, 60.0MB/s][A[A[A[A[A[A
rng_state_103.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 212kB/s]
rng_state_104.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
model-00006-of-00006.safetensors: 10%|█ | 336M/3.32G [00:05<00:43, 68.1MB/s][A[A
model-00001-of-00006.safetensors: 99%|█████████▉| 4.93G/4.97G [01:26<00:00, 60.3MB/s][A
rng_state_105.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A
rng_state_104.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 152kB/s]
model-00005-of-00006.safetensors: 98%|█████████▊| 4.88G/4.97G [01:26<00:01, 64.1MB/s][A[A[A[A[A[A
rng_state_105.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 166kB/s]
rng_state_106.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_106.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 226kB/s]
model-00006-of-00006.safetensors: 11%|█ | 352M/3.32G [00:05<00:46, 64.0MB/s][A[A
rng_state_107.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
model-00001-of-00006.safetensors: 100%|█████████▉| 4.94G/4.97G [01:26<00:00, 61.7MB/s][A
rng_state_107.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 208kB/s]
rng_state_108.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
model-00005-of-00006.safetensors: 98%|█████████▊| 4.90G/4.97G [01:26<00:01, 60.5MB/s][A[A[A[A[A[A
rng_state_108.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 297kB/s]
rng_state_109.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
model-00001-of-00006.safetensors: 100%|█████████▉| 4.96G/4.97G [01:26<00:00, 66.4MB/s][A
model-00006-of-00006.safetensors: 11%|█ | 368M/3.32G [00:05<00:46, 63.9MB/s][A[A
rng_state_11.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A
rng_state_109.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 321kB/s]
rng_state_11.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 446kB/s]
model-00005-of-00006.safetensors: 99%|█████████▉| 4.91G/4.97G [01:26<00:00, 64.1MB/s][A[A[A[A[A[A
model-00001-of-00006.safetensors: 100%|██████████| 4.97G/4.97G [01:26<00:00, 57.3MB/s]
rng_state_110.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
model-00006-of-00006.safetensors: 12%|█▏ | 384M/3.32G [00:06<00:41, 70.9MB/s][A[A
rng_state_110.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 253kB/s]
rng_state_111.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
Upload 138 LFS files: 1%| | 1/138 [01:26<3:18:18, 86.85s/it][A[A[A[A[A
rng_state_112.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_111.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 208kB/s]
rng_state_113.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
model-00005-of-00006.safetensors: 99%|█████████▉| 4.93G/4.97G [01:26<00:00, 64.5MB/s][A[A[A[A[A[A
rng_state_113.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 443kB/s]
rng_state_112.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 160kB/s]
rng_state_114.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_114.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 304kB/s]
model-00006-of-00006.safetensors: 12%|█▏ | 400M/3.32G [00:06<00:44, 65.9MB/s][A[A
rng_state_115.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_116.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_115.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 337kB/s]
model-00005-of-00006.safetensors: 99%|█████████▉| 4.94G/4.97G [01:27<00:00, 68.8MB/s][A[A[A[A[A[A
rng_state_116.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 381kB/s]
rng_state_117.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_118.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_119.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A
model-00006-of-00006.safetensors: 13%|█▎ | 416M/3.32G [00:06<00:41, 69.2MB/s][A[A
rng_state_117.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 245kB/s]
rng_state_119.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 295kB/s]
rng_state_118.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 239kB/s]
model-00005-of-00006.safetensors: 100%|█████████▉| 4.96G/4.97G [01:27<00:00, 71.9MB/s][A[A[A[A[A[A
rng_state_12.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_120.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_12.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 212kB/s]
rng_state_121.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
model-00006-of-00006.safetensors: 13%|█▎ | 432M/3.32G [00:06<00:41, 69.0MB/s][A[A
model-00005-of-00006.safetensors: 100%|██████████| 4.97G/4.97G [01:27<00:00, 56.8MB/s]
rng_state_121.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 313kB/s]
rng_state_120.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 97.9kB/s]
rng_state_122.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_123.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_122.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 366kB/s]
rng_state_123.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 340kB/s]
rng_state_124.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
model-00006-of-00006.safetensors: 14%|█▎ | 448M/3.32G [00:07<00:39, 72.3MB/s][A[A
rng_state_124.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 389kB/s]
Upload 138 LFS files: 4%|▎ | 5/138 [01:27<29:04, 13.12s/it] [A[A[A[A[A
rng_state_125.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_126.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_127.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A
rng_state_125.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 297kB/s]
rng_state_127.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 237kB/s]
rng_state_126.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 202kB/s]
rng_state_13.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
model-00006-of-00006.safetensors: 14%|█▍ | 464M/3.32G [00:07<00:38, 73.8MB/s][A[A
rng_state_13.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 314kB/s]
rng_state_14.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_15.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_16.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A
rng_state_14.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 253kB/s]
rng_state_15.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 213kB/s]
rng_state_16.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 156kB/s]
rng_state_17.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_18.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
model-00006-of-00006.safetensors: 14%|█▍ | 480M/3.32G [00:07<00:38, 73.0MB/s][A[A
rng_state_19.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A
rng_state_17.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 401kB/s]
rng_state_19.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 339kB/s]
rng_state_2.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_18.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 169kB/s]
rng_state_20.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_20.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 401kB/s]
rng_state_21.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_22.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A
model-00006-of-00006.safetensors: 15%|█▍ | 496M/3.32G [00:07<00:37, 74.5MB/s][A[A
rng_state_2.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 97.4kB/s]
rng_state_21.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 310kB/s]
rng_state_22.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 249kB/s]
rng_state_23.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_24.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_23.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 298kB/s]
rng_state_25.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_24.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 341kB/s]
model-00006-of-00006.safetensors: 15%|█▌ | 512M/3.32G [00:07<00:36, 77.8MB/s][A[A
rng_state_25.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 52.0kB/s]
rng_state_27.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_26.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_28.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A
model-00006-of-00006.safetensors: 16%|█▌ | 520M/3.32G [00:08<00:46, 60.0MB/s][A[A
rng_state_26.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 246kB/s]
rng_state_28.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 242kB/s]
rng_state_27.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 216kB/s]
rng_state_29.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_29.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 280kB/s]
rng_state_3.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
model-00006-of-00006.safetensors: 16%|█▌ | 528M/3.32G [00:08<00:48, 57.3MB/s][A[A
rng_state_30.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_30.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 391kB/s]
rng_state_31.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_3.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 187kB/s]
rng_state_31.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 251kB/s]
rng_state_32.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_33.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_32.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 372kB/s]
model-00006-of-00006.safetensors: 16%|█▋ | 544M/3.32G [00:08<00:43, 64.2MB/s][A[A
rng_state_33.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 310kB/s]
rng_state_34.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_35.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_34.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 290kB/s]
rng_state_36.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_37.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A
rng_state_35.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 288kB/s]
rng_state_36.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 318kB/s]
rng_state_37.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 350kB/s]
model-00006-of-00006.safetensors: 17%|█▋ | 560M/3.32G [00:08<00:40, 68.6MB/s][A[A
rng_state_38.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_39.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_4.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A
rng_state_38.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 273kB/s]
rng_state_4.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 295kB/s]
rng_state_39.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 265kB/s]
rng_state_40.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
model-00006-of-00006.safetensors: 17%|█▋ | 576M/3.32G [00:08<00:39, 69.6MB/s][A[A
rng_state_41.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_40.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 272kB/s]
rng_state_42.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_41.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 287kB/s]
rng_state_43.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_42.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 303kB/s]
rng_state_43.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 213kB/s]
model-00006-of-00006.safetensors: 18%|█▊ | 592M/3.32G [00:09<00:35, 75.8MB/s][A[A
rng_state_44.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_45.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_44.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 286kB/s]
rng_state_45.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 280kB/s]
rng_state_46.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_47.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_48.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A
rng_state_46.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 231kB/s]
rng_state_48.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 346kB/s]
rng_state_49.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
model-00006-of-00006.safetensors: 18%|█▊ | 608M/3.32G [00:09<00:36, 73.3MB/s][A[A
rng_state_47.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 171kB/s]
rng_state_49.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 453kB/s]
rng_state_5.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_5.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 482kB/s]
rng_state_50.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_51.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_50.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 275kB/s]
rng_state_52.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_53.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A
rng_state_51.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 266kB/s]
model-00006-of-00006.safetensors: 19%|█▉ | 624M/3.32G [00:09<00:36, 73.3MB/s][A[A
rng_state_52.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 263kB/s]
rng_state_54.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_53.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 282kB/s]
rng_state_54.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 213kB/s]
rng_state_55.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_56.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_57.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A
rng_state_56.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 341kB/s]
rng_state_57.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 332kB/s]
rng_state_55.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 204kB/s]
model-00006-of-00006.safetensors: 19%|█▉ | 640M/3.32G [00:09<00:36, 73.5MB/s][A[A
rng_state_58.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_59.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_6.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A
rng_state_60.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A[A[A
rng_state_58.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 215kB/s]
rng_state_59.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 279kB/s]
rng_state_6.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 309kB/s]
rng_state_60.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 318kB/s]
model-00006-of-00006.safetensors: 20%|█▉ | 656M/3.32G [00:09<00:36, 73.1MB/s][A[A
rng_state_61.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_62.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_63.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A
rng_state_64.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A[A[A
rng_state_63.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 306kB/s]
rng_state_61.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 211kB/s]
rng_state_64.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 304kB/s]
rng_state_62.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 176kB/s]
model-00006-of-00006.safetensors: 20%|██ | 672M/3.32G [00:10<00:33, 77.9MB/s][A[A
rng_state_65.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_66.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_67.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A
rng_state_68.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A[A[A
rng_state_65.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 220kB/s]
rng_state_66.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 271kB/s]
rng_state_67.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 243kB/s]
rng_state_68.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 242kB/s]
model-00006-of-00006.safetensors: 21%|██ | 688M/3.32G [00:10<00:34, 75.8MB/s][A[A
rng_state_69.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_7.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_70.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A
rng_state_71.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A[A[A
rng_state_69.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 250kB/s]
rng_state_7.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 279kB/s]
rng_state_70.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 291kB/s]
rng_state_71.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 235kB/s]
rng_state_72.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_73.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_74.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A
model-00006-of-00006.safetensors: 21%|██ | 704M/3.32G [00:10<00:34, 75.9MB/s][A[A
rng_state_75.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A[A[A
rng_state_74.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 335kB/s]
rng_state_72.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 264kB/s]
rng_state_73.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 249kB/s]
rng_state_75.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 335kB/s]
rng_state_76.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_77.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_78.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A
rng_state_79.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A[A[A
rng_state_78.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 349kB/s]
rng_state_76.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 220kB/s]
rng_state_77.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 190kB/s]
model-00006-of-00006.safetensors: 22%|██▏ | 720M/3.32G [00:10<00:35, 74.1MB/s][A[A
rng_state_79.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 281kB/s]
rng_state_8.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_80.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_81.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A
rng_state_8.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 279kB/s]
rng_state_82.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_81.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 290kB/s]
rng_state_80.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 198kB/s]
rng_state_82.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 352kB/s]
model-00006-of-00006.safetensors: 22%|██▏ | 736M/3.32G [00:11<00:34, 74.6MB/s][A[A
rng_state_83.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_84.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_83.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 231kB/s]
rng_state_85.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_84.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 287kB/s]
rng_state_86.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_85.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 343kB/s]
rng_state_86.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 266kB/s]
rng_state_87.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_88.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_87.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 348kB/s]
rng_state_88.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 293kB/s]
model-00006-of-00006.safetensors: 23%|██▎ | 752M/3.32G [00:11<00:37, 67.8MB/s][A[A
rng_state_89.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_9.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_90.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A
rng_state_91.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A[A[A
rng_state_89.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 398kB/s]
rng_state_9.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 283kB/s]
rng_state_91.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 385kB/s]
rng_state_90.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 228kB/s]
model-00006-of-00006.safetensors: 23%|██▎ | 768M/3.32G [00:11<00:36, 69.0MB/s][A[A
rng_state_92.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_93.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_94.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A
rng_state_95.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A[A[A
rng_state_94.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 412kB/s]
rng_state_93.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 367kB/s]
rng_state_92.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 251kB/s]
rng_state_95.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 186kB/s]
rng_state_96.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_97.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A
rng_state_98.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A[A[A[A
rng_state_97.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 365kB/s]
rng_state_96.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 256kB/s]
rng_state_99.pth: 0%| | 0.00/16.0k [00:00, ?B/s][A
rng_state_98.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 206kB/s]
scheduler.pt: 0%| | 0.00/1.06k [00:00, ?B/s][A[A[A
tokenizer.json: 0%| | 0.00/11.4M [00:00, ?B/s][A[A[A[A
rng_state_99.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 199kB/s]
training_args.bin: 0%| | 0.00/7.35k [00:00, ?B/s][A
scheduler.pt: 100%|██████████| 1.06k/1.06k [00:00<00:00, 13.9kB/s]
training_args.bin: 100%|██████████| 7.35k/7.35k [00:00<00:00, 194kB/s]
335933.out: 0%| | 0.00/33.8M [00:00, ?B/s][A
model-00006-of-00006.safetensors: 24%|██▎ | 784M/3.32G [00:12<00:49, 51.5MB/s][A[A
tokenizer.json: 100%|██████████| 11.4M/11.4M [00:00<00:00, 68.6MB/s]
335933.out: 47%|████▋ | 16.0M/33.8M [00:00<00:00, 79.6MB/s][A
model-00006-of-00006.safetensors: 24%|██▍ | 800M/3.32G [00:12<00:46, 54.3MB/s][A[A
335933.out: 95%|█████████▍| 32.0M/33.8M [00:00<00:00, 83.1MB/s][A
model-00006-of-00006.safetensors: 25%|██▍ | 816M/3.32G [00:12<00:42, 58.2MB/s][A[A
335933.out: 100%|██████████| 33.8M/33.8M [00:00<00:00, 60.2MB/s]
model-00006-of-00006.safetensors: 25%|██▌ | 832M/3.32G [00:12<00:41, 60.5MB/s][A[A
model-00006-of-00006.safetensors: 26%|██▌ | 848M/3.32G [00:12<00:38, 63.7MB/s][A[A
model-00006-of-00006.safetensors: 26%|██▌ | 864M/3.32G [00:13<00:37, 65.4MB/s][A[A
model-00006-of-00006.safetensors: 27%|██▋ | 880M/3.32G [00:13<00:37, 65.1MB/s][A[A
model-00006-of-00006.safetensors: 27%|██▋ | 896M/3.32G [00:13<00:36, 67.1MB/s][A[A
model-00006-of-00006.safetensors: 28%|██▊ | 912M/3.32G [00:13<00:34, 70.3MB/s][A[A
model-00006-of-00006.safetensors: 28%|██▊ | 928M/3.32G [00:14<00:32, 73.0MB/s][A[A
model-00006-of-00006.safetensors: 28%|██▊ | 944M/3.32G [00:14<00:32, 73.7MB/s][A[A
model-00006-of-00006.safetensors: 29%|██▉ | 960M/3.32G [00:14<00:30, 76.4MB/s][A[A
model-00006-of-00006.safetensors: 29%|██▉ | 976M/3.32G [00:14<00:29, 80.3MB/s][A[A
model-00006-of-00006.safetensors: 30%|██▉ | 992M/3.32G [00:14<00:29, 78.6MB/s][A[A
model-00006-of-00006.safetensors: 30%|███ | 1.01G/3.32G [00:15<00:30, 74.6MB/s][A[A
model-00006-of-00006.safetensors: 31%|███ | 1.02G/3.32G [00:15<00:30, 75.9MB/s][A[A
model-00006-of-00006.safetensors: 31%|███▏ | 1.04G/3.32G [00:15<00:29, 77.2MB/s][A[A
model-00006-of-00006.safetensors: 32%|███▏ | 1.06G/3.32G [00:15<00:28, 78.1MB/s][A[A
model-00006-of-00006.safetensors: 32%|███▏ | 1.07G/3.32G [00:15<00:30, 73.4MB/s][A[A
model-00006-of-00006.safetensors: 33%|███▎ | 1.09G/3.32G [00:16<00:30, 74.0MB/s][A[A
model-00006-of-00006.safetensors: 33%|███▎ | 1.10G/3.32G [00:16<00:29, 73.8MB/s][A[A
model-00006-of-00006.safetensors: 34%|███▍ | 1.12G/3.32G [00:16<00:28, 77.6MB/s][A[A
model-00006-of-00006.safetensors: 34%|███▍ | 1.14G/3.32G [00:16<00:28, 76.1MB/s][A[A
model-00006-of-00006.safetensors: 35%|███▍ | 1.15G/3.32G [00:17<00:29, 72.5MB/s][A[A
model-00006-of-00006.safetensors: 35%|███▌ | 1.17G/3.32G [00:17<00:29, 73.9MB/s][A[A
model-00006-of-00006.safetensors: 36%|███▌ | 1.18G/3.32G [00:17<00:29, 71.6MB/s][A[A
model-00006-of-00006.safetensors: 36%|███▌ | 1.20G/3.32G [00:17<00:29, 72.0MB/s][A[A
model-00006-of-00006.safetensors: 37%|███▋ | 1.22G/3.32G [00:17<00:28, 74.3MB/s][A[A
model-00006-of-00006.safetensors: 37%|███▋ | 1.23G/3.32G [00:18<00:28, 72.0MB/s][A[A
model-00006-of-00006.safetensors: 38%|███▊ | 1.25G/3.32G [00:18<00:28, 73.1MB/s][A[A
model-00006-of-00006.safetensors: 38%|███▊ | 1.26G/3.32G [00:18<00:29, 69.0MB/s][A[A
model-00006-of-00006.safetensors: 39%|███▊ | 1.28G/3.32G [00:18<00:29, 69.3MB/s][A[A
model-00006-of-00006.safetensors: 39%|███▉ | 1.30G/3.32G [00:19<00:28, 69.9MB/s][A[A
model-00006-of-00006.safetensors: 40%|███▉ | 1.31G/3.32G [00:19<00:27, 74.1MB/s][A[A
model-00006-of-00006.safetensors: 40%|████ | 1.33G/3.32G [00:19<00:32, 62.1MB/s][A[A
model-00006-of-00006.safetensors: 41%|████ | 1.34G/3.32G [00:19<00:29, 66.6MB/s][A[A
model-00006-of-00006.safetensors: 41%|████ | 1.36G/3.32G [00:20<00:28, 67.8MB/s][A[A
model-00006-of-00006.safetensors: 41%|████▏ | 1.38G/3.32G [00:20<00:26, 71.9MB/s][A[A
model-00006-of-00006.safetensors: 42%|████▏ | 1.39G/3.32G [00:20<00:27, 69.1MB/s][A[A
model-00006-of-00006.safetensors: 42%|████▏ | 1.41G/3.32G [00:20<00:26, 70.9MB/s][A[A
model-00006-of-00006.safetensors: 43%|████▎ | 1.42G/3.32G [00:20<00:27, 67.9MB/s][A[A
model-00006-of-00006.safetensors: 43%|████▎ | 1.44G/3.32G [00:21<00:26, 69.6MB/s][A[A
model-00006-of-00006.safetensors: 44%|████▍ | 1.46G/3.32G [00:21<00:26, 70.4MB/s][A[A
model-00006-of-00006.safetensors: 44%|████▍ | 1.47G/3.32G [00:21<00:25, 72.5MB/s][A[A
model-00006-of-00006.safetensors: 45%|████▍ | 1.49G/3.32G [00:21<00:24, 73.6MB/s][A[A
model-00006-of-00006.safetensors: 45%|████▌ | 1.50G/3.32G [00:21<00:23, 77.5MB/s][A[A
model-00006-of-00006.safetensors: 46%|████▌ | 1.52G/3.32G [00:22<00:24, 74.8MB/s][A[A
model-00006-of-00006.safetensors: 46%|████▋ | 1.54G/3.32G [00:22<00:23, 75.2MB/s][A[A
model-00006-of-00006.safetensors: 47%|████▋ | 1.55G/3.32G [00:22<00:22, 77.6MB/s][A[A
model-00006-of-00006.safetensors: 47%|████▋ | 1.57G/3.32G [00:22<00:24, 72.4MB/s][A[A
model-00006-of-00006.safetensors: 48%|████▊ | 1.58G/3.32G [00:23<00:23, 73.0MB/s][A[A
model-00006-of-00006.safetensors: 48%|████▊ | 1.60G/3.32G [00:23<00:25, 68.3MB/s][A[A
model-00006-of-00006.safetensors: 49%|████▊ | 1.62G/3.32G [00:23<00:24, 70.7MB/s][A[A
model-00006-of-00006.safetensors: 49%|████▉ | 1.63G/3.32G [00:23<00:24, 68.8MB/s][A[A
model-00006-of-00006.safetensors: 50%|████▉ | 1.65G/3.32G [00:23<00:22, 74.0MB/s][A[A
model-00006-of-00006.safetensors: 50%|█████ | 1.66G/3.32G [00:24<00:22, 73.0MB/s][A[A
model-00006-of-00006.safetensors: 51%|█████ | 1.68G/3.32G [00:25<00:50, 32.3MB/s][A[A
model-00006-of-00006.safetensors: 51%|█████ | 1.70G/3.32G [00:25<00:42, 38.4MB/s][A[A
model-00006-of-00006.safetensors: 52%|█████▏ | 1.71G/3.32G [00:25<00:35, 45.7MB/s][A[A
model-00006-of-00006.safetensors: 52%|█████▏ | 1.73G/3.32G [00:25<00:29, 54.0MB/s][A[A
model-00006-of-00006.safetensors: 53%|█████▎ | 1.74G/3.32G [00:26<00:27, 57.2MB/s][A[A
model-00006-of-00006.safetensors: 53%|█████▎ | 1.76G/3.32G [00:26<00:25, 62.0MB/s][A[A
model-00006-of-00006.safetensors: 54%|█████▎ | 1.78G/3.32G [00:26<00:23, 66.4MB/s][A[A
model-00006-of-00006.safetensors: 54%|█████▍ | 1.79G/3.32G [00:26<00:21, 71.0MB/s][A[A
model-00006-of-00006.safetensors: 55%|█████▍ | 1.81G/3.32G [00:26<00:20, 73.0MB/s][A[A
model-00006-of-00006.safetensors: 55%|█████▌ | 1.82G/3.32G [00:27<00:19, 75.4MB/s][A[A
model-00006-of-00006.safetensors: 55%|█████▌ | 1.84G/3.32G [00:27<00:19, 75.8MB/s][A[A
model-00006-of-00006.safetensors: 56%|█████▌ | 1.86G/3.32G [00:27<00:18, 77.5MB/s][A[A
model-00006-of-00006.safetensors: 56%|█████▋ | 1.87G/3.32G [00:27<00:17, 81.6MB/s][A[A
model-00006-of-00006.safetensors: 57%|█████▋ | 1.89G/3.32G [00:27<00:18, 78.4MB/s][A[A
model-00006-of-00006.safetensors: 57%|█████▋ | 1.90G/3.32G [00:28<00:18, 78.3MB/s][A[A
model-00006-of-00006.safetensors: 58%|█████▊ | 1.92G/3.32G [00:28<00:17, 78.3MB/s][A[A
model-00006-of-00006.safetensors: 58%|█████▊ | 1.94G/3.32G [00:28<00:17, 80.1MB/s][A[A
model-00006-of-00006.safetensors: 59%|█████▉ | 1.95G/3.32G [00:28<00:17, 76.6MB/s][A[A
model-00006-of-00006.safetensors: 59%|█████▉ | 1.97G/3.32G [00:29<00:17, 78.6MB/s][A[A
model-00006-of-00006.safetensors: 60%|█████▉ | 1.98G/3.32G [00:29<00:24, 54.8MB/s][A[A
model-00006-of-00006.safetensors: 60%|██████ | 2.00G/3.32G [00:29<00:22, 58.7MB/s][A[A
model-00006-of-00006.safetensors: 61%|██████ | 2.02G/3.32G [00:29<00:20, 64.1MB/s][A[A
model-00006-of-00006.safetensors: 61%|██████▏ | 2.03G/3.32G [00:30<00:18, 67.9MB/s][A[A
model-00006-of-00006.safetensors: 62%|██████▏ | 2.05G/3.32G [00:30<00:17, 70.7MB/s][A[A
model-00006-of-00006.safetensors: 62%|██████▏ | 2.06G/3.32G [00:30<00:16, 76.3MB/s][A[A
model-00006-of-00006.safetensors: 63%|██████▎ | 2.08G/3.32G [00:30<00:16, 74.4MB/s][A[A
model-00006-of-00006.safetensors: 63%|██████▎ | 2.10G/3.32G [00:30<00:17, 71.3MB/s][A[A
model-00006-of-00006.safetensors: 64%|██████▎ | 2.11G/3.32G [00:31<00:15, 76.1MB/s][A[A
model-00006-of-00006.safetensors: 64%|██████▍ | 2.13G/3.32G [00:31<00:15, 75.4MB/s][A[A
model-00006-of-00006.safetensors: 65%|██████▍ | 2.14G/3.32G [00:31<00:15, 74.7MB/s][A[A
model-00006-of-00006.safetensors: 65%|██████▌ | 2.16G/3.32G [00:31<00:15, 74.6MB/s][A[A
model-00006-of-00006.safetensors: 66%|██████▌ | 2.18G/3.32G [00:32<00:15, 72.4MB/s][A[A
model-00006-of-00006.safetensors: 66%|██████▌ | 2.19G/3.32G [00:32<00:14, 76.4MB/s][A[A
model-00006-of-00006.safetensors: 67%|██████▋ | 2.21G/3.32G [00:32<00:14, 76.0MB/s][A[A
model-00006-of-00006.safetensors: 67%|██████▋ | 2.22G/3.32G [00:32<00:14, 77.2MB/s][A[A
model-00006-of-00006.safetensors: 68%|██████▊ | 2.24G/3.32G [00:32<00:13, 77.1MB/s][A[A
model-00006-of-00006.safetensors: 68%|██████▊ | 2.26G/3.32G [00:33<00:13, 77.4MB/s][A[A
model-00006-of-00006.safetensors: 69%|██████▊ | 2.27G/3.32G [00:33<00:12, 81.9MB/s][A[A
model-00006-of-00006.safetensors: 69%|██████▉ | 2.29G/3.32G [00:33<00:13, 77.3MB/s][A[A
model-00006-of-00006.safetensors: 69%|██████▉ | 2.30G/3.32G [00:33<00:13, 75.0MB/s][A[A
model-00006-of-00006.safetensors: 70%|██████▉ | 2.32G/3.32G [00:33<00:14, 70.5MB/s][A[A
model-00006-of-00006.safetensors: 70%|███████ | 2.34G/3.32G [00:34<00:13, 73.6MB/s][A[A
model-00006-of-00006.safetensors: 71%|███████ | 2.35G/3.32G [00:34<00:12, 77.2MB/s][A[A
model-00006-of-00006.safetensors: 71%|███████▏ | 2.37G/3.32G [00:34<00:13, 72.6MB/s][A[A
model-00006-of-00006.safetensors: 72%|███████▏ | 2.38G/3.32G [00:34<00:12, 73.2MB/s][A[A
model-00006-of-00006.safetensors: 72%|███████▏ | 2.40G/3.32G [00:35<00:12, 71.8MB/s][A[A
model-00006-of-00006.safetensors: 73%|███████▎ | 2.42G/3.32G [00:35<00:11, 75.3MB/s][A[A
model-00006-of-00006.safetensors: 73%|███████▎ | 2.43G/3.32G [00:35<00:12, 71.4MB/s][A[A
model-00006-of-00006.safetensors: 74%|███████▍ | 2.45G/3.32G [00:35<00:11, 74.2MB/s][A[A
model-00006-of-00006.safetensors: 74%|███████▍ | 2.46G/3.32G [00:35<00:10, 77.6MB/s][A[A
model-00006-of-00006.safetensors: 75%|███████▍ | 2.48G/3.32G [00:36<00:10, 80.0MB/s][A[A
model-00006-of-00006.safetensors: 75%|███████▌ | 2.50G/3.32G [00:36<00:10, 78.4MB/s][A[A
model-00006-of-00006.safetensors: 76%|███████▌ | 2.51G/3.32G [00:36<00:10, 77.9MB/s][A[A
model-00006-of-00006.safetensors: 76%|███████▌ | 2.53G/3.32G [00:36<00:10, 74.4MB/s][A[A
model-00006-of-00006.safetensors: 77%|███████▋ | 2.54G/3.32G [00:36<00:11, 69.2MB/s][A[A
model-00006-of-00006.safetensors: 77%|███████▋ | 2.56G/3.32G [00:37<00:11, 68.6MB/s][A[A
model-00006-of-00006.safetensors: 78%|███████▊ | 2.58G/3.32G [00:37<00:10, 69.5MB/s][A[A
model-00006-of-00006.safetensors: 78%|███████▊ | 2.59G/3.32G [00:37<00:11, 65.4MB/s][A[A
model-00006-of-00006.safetensors: 79%|███████▊ | 2.61G/3.32G [00:38<00:13, 53.1MB/s][A[A
model-00006-of-00006.safetensors: 79%|███████▉ | 2.62G/3.32G [00:38<00:11, 58.7MB/s][A[A
model-00006-of-00006.safetensors: 80%|███████▉ | 2.64G/3.32G [00:38<00:10, 63.0MB/s][A[A
model-00006-of-00006.safetensors: 80%|████████ | 2.66G/3.32G [00:38<00:10, 65.7MB/s][A[A
model-00006-of-00006.safetensors: 81%|████████ | 2.67G/3.32G [00:38<00:09, 68.7MB/s][A[A
model-00006-of-00006.safetensors: 81%|████████ | 2.69G/3.32G [00:39<00:09, 67.0MB/s][A[A
model-00006-of-00006.safetensors: 82%|████████▏ | 2.70G/3.32G [00:39<00:08, 70.0MB/s][A[A
model-00006-of-00006.safetensors: 82%|████████▏ | 2.72G/3.32G [00:39<00:08, 73.1MB/s][A[A
model-00006-of-00006.safetensors: 83%|████████▎ | 2.74G/3.32G [00:39<00:08, 71.8MB/s][A[A
model-00006-of-00006.safetensors: 83%|████████▎ | 2.75G/3.32G [00:40<00:07, 71.7MB/s][A[A
model-00006-of-00006.safetensors: 83%|████████▎ | 2.77G/3.32G [00:40<00:07, 71.6MB/s][A[A
model-00006-of-00006.safetensors: 84%|████████▍ | 2.78G/3.32G [00:40<00:07, 71.2MB/s][A[A
model-00006-of-00006.safetensors: 84%|████████▍ | 2.80G/3.32G [00:40<00:07, 72.0MB/s][A[A
model-00006-of-00006.safetensors: 85%|████████▍ | 2.82G/3.32G [00:40<00:06, 74.3MB/s][A[A
model-00006-of-00006.safetensors: 85%|████████▌ | 2.83G/3.32G [00:41<00:06, 73.8MB/s][A[A
model-00006-of-00006.safetensors: 86%|████████▌ | 2.85G/3.32G [00:41<00:06, 75.2MB/s][A[A
model-00006-of-00006.safetensors: 86%|████████▋ | 2.86G/3.32G [00:41<00:06, 71.0MB/s][A[A
model-00006-of-00006.safetensors: 87%|████████▋ | 2.88G/3.32G [00:41<00:05, 72.8MB/s][A[A
model-00006-of-00006.safetensors: 87%|████████▋ | 2.90G/3.32G [00:42<00:07, 58.9MB/s][A[A
model-00006-of-00006.safetensors: 88%|████████▊ | 2.91G/3.32G [00:42<00:06, 63.0MB/s][A[A
model-00006-of-00006.safetensors: 88%|████████▊ | 2.93G/3.32G [00:42<00:06, 59.7MB/s][A[A
model-00006-of-00006.safetensors: 89%|████████▉ | 2.94G/3.32G [00:42<00:05, 64.9MB/s][A[A
model-00006-of-00006.safetensors: 89%|████████▉ | 2.96G/3.32G [00:43<00:05, 66.6MB/s][A[A
model-00006-of-00006.safetensors: 90%|████████▉ | 2.98G/3.32G [00:43<00:05, 67.8MB/s][A[A
model-00006-of-00006.safetensors: 90%|█████████ | 2.99G/3.32G [00:43<00:06, 46.8MB/s][A[A
model-00006-of-00006.safetensors: 91%|█████████ | 3.01G/3.32G [00:44<00:05, 51.7MB/s][A[A
model-00006-of-00006.safetensors: 91%|█████████ | 3.02G/3.32G [00:44<00:05, 56.9MB/s][A[A
model-00006-of-00006.safetensors: 92%|█████████▏| 3.04G/3.32G [00:44<00:04, 60.9MB/s][A[A
model-00006-of-00006.safetensors: 92%|█████████▏| 3.06G/3.32G [00:44<00:04, 63.2MB/s][A[A
model-00006-of-00006.safetensors: 93%|█████████▎| 3.07G/3.32G [00:45<00:04, 60.7MB/s][A[A
model-00006-of-00006.safetensors: 93%|█████████▎| 3.09G/3.32G [00:45<00:03, 64.8MB/s][A[A
model-00006-of-00006.safetensors: 94%|█████████▎| 3.10G/3.32G [00:45<00:03, 63.0MB/s][A[A
model-00006-of-00006.safetensors: 94%|█████████▍| 3.12G/3.32G [00:45<00:03, 62.7MB/s][A[A
model-00006-of-00006.safetensors: 95%|█████████▍| 3.14G/3.32G [00:46<00:02, 66.8MB/s][A[A
model-00006-of-00006.safetensors: 95%|█████████▌| 3.15G/3.32G [00:46<00:02, 68.3MB/s][A[A
model-00006-of-00006.safetensors: 96%|█████████▌| 3.17G/3.32G [00:46<00:02, 67.7MB/s][A[A
model-00006-of-00006.safetensors: 96%|█████████▌| 3.18G/3.32G [00:46<00:02, 65.2MB/s][A[A
model-00006-of-00006.safetensors: 97%|█████████▋| 3.20G/3.32G [00:47<00:01, 71.1MB/s][A[A
model-00006-of-00006.safetensors: 97%|█████████▋| 3.22G/3.32G [00:47<00:01, 58.4MB/s][A[A
model-00006-of-00006.safetensors: 97%|█████████▋| 3.23G/3.32G [00:47<00:01, 62.5MB/s][A[A
model-00006-of-00006.safetensors: 98%|█████████▊| 3.25G/3.32G [00:47<00:01, 63.1MB/s][A[A
model-00006-of-00006.safetensors: 98%|█████████▊| 3.25G/3.32G [00:48<00:01, 51.4MB/s][A[A
model-00006-of-00006.safetensors: 98%|█████████▊| 3.26G/3.32G [00:48<00:01, 51.2MB/s][A[A
model-00006-of-00006.safetensors: 99%|█████████▉| 3.28G/3.32G [00:48<00:00, 45.3MB/s][A[A
model-00006-of-00006.safetensors: 99%|█████████▉| 3.30G/3.32G [00:48<00:00, 52.7MB/s][A[A
model-00006-of-00006.safetensors: 100%|█████████▉| 3.31G/3.32G [00:49<00:00, 58.2MB/s][A[A
model-00006-of-00006.safetensors: 100%|██████████| 3.32G/3.32G [00:49<00:00, 67.3MB/s]
Upload 138 LFS files: 4%|▍ | 6/138 [02:10<43:50, 19.93s/it][A[A[A[A[A
Upload 138 LFS files: 100%|██████████| 138/138 [02:10<00:00, 1.06it/s]
 2%|▏ | 10001/569592 [1:41:53<14774:10:23, 95.05s/it]
 2%|▏ | 10002/569592 [1:41:54<10399:18:26, 66.90s/it]
 2%|▏ | 10003/569592 [1:41:55<7320:45:59, 47.10s/it]
 2%|▏ | 10004/569592 [1:41:56<5166:52:50, 33.24s/it]
 2%|▏ | 10005/569592 [1:41:57<3662:42:35, 23.56s/it]
 2%|▏ | 10006/569592 [1:41:58<2608:37:08, 16.78s/it]
 2%|▏ | 10007/569592 [1:41:59<1873:39:23, 12.05s/it]
 2%|▏ | 10008/569592 [1:42:00<1355:15:21, 8.72s/it]
 2%|▏ | 10009/569592 [1:42:02<1054:34:00, 6.78s/it]
 2%|▏ | 10010/569592 [1:42:06<950:44:15, 6.12s/it]
 2%|▏ | 10011/569592 [1:42:08<732:19:04, 4.71s/it]
 2%|▏ | 10012/569592 [1:42:09<561:44:25, 3.61s/it]
 2%|▏ | 10013/569592 [1:42:10<455:21:40, 2.93s/it]
 2%|▏ | 10014/569592 [1:42:16<589:54:08, 3.80s/it]
 2%|▏ | 10015/569592 [1:42:18<488:27:12, 3.14s/it]
 2%|▏ | 10016/569592 [1:42:19<395:38:29, 2.55s/it]
 2%|▏ | 10017/569592 [1:42:20<350:05:21, 2.25s/it]
 2%|▏ | 10018/569592 [1:42:26<508:15:04, 3.27s/it]
 2%|▏ | 10019/569592 [1:42:29<505:11:39, 3.25s/it]
 2%|▏ | 10020/569592 [1:42:30<397:35:08, 2.56s/it]
 2%|▏ | 10021/569592 [1:42:31<322:46:07, 2.08s/it]
 2%|▏ | 10022/569592 [1:42:37<500:36:18, 3.22s/it]
 2%|▏ | 10023/569592 [1:42:38<404:46:59, 2.60s/it]
 2%|▏ | 10024/569592 [1:42:39<328:51:03, 2.12s/it]
 2%|▏ | 10025/569592 [1:42:42<337:22:11, 2.17s/it]
 2%|▏ | 10026/569592 [1:42:47<484:39:18, 3.12s/it]
 2%|▏ | 10027/569592 [1:42:48<383:28:34, 2.47s/it]
 2%|▏ | 10028/569592 [1:42:49<336:10:06, 2.16s/it]
 2%|▏ | 10029/569592 [1:42:51<296:54:07, 1.91s/it]
 2%|▏ | 10030/569592 [1:42:57<513:37:54, 3.30s/it]
 2%|▏ | 10031/569592 [1:42:58<403:51:41, 2.60s/it]
 2%|▏ | 10032/569592 [1:42:59<346:00:55, 2.23s/it]
 2%|▏ | 10033/569592 [1:43:02<362:56:23, 2.34s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (95999290 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90481664 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
 2%|▏ | 10034/569592 [1:43:09<579:03:56, 3.73s/it]
 2%|▏ | 10035/569592 [1:43:10<450:52:09, 2.90s/it]
 2%|▏ | 10036/569592 [1:43:11<360:17:25, 2.32s/it]
 2%|▏ | 10037/569592 [1:43:12<297:46:03, 1.92s/it]
 2%|▏ | 10038/569592 [1:43:19<529:27:39, 3.41s/it]
 2%|▏ | 10039/569592 [1:43:20<439:40:41, 2.83s/it]
 2%|▏ | 10040/569592 [1:43:21<352:55:14, 2.27s/it]
 2%|▏ | 10041/569592 [1:43:24<360:55:03, 2.32s/it]
 2%|▏ | 10042/569592 [1:43:28<468:35:31, 3.01s/it]
 2%|▏ | 10043/569592 [1:43:29<380:51:34, 2.45s/it]
 2%|▏ | 10044/569592 [1:43:34<459:20:59, 2.96s/it]
 2%|▏ | 10045/569592 [1:43:37<499:15:32, 3.21s/it]
 2%|▏ | 10046/569592 [1:43:38<393:16:41, 2.53s/it]
 2%|▏ | 10047/569592 [1:43:44<538:40:31, 3.47s/it]
 2%|▏ | 10048/569592 [1:43:45<421:00:15, 2.71s/it]
 2%|▏ | 10049/569592 [1:43:50<539:21:46, 3.47s/it]
 2%|▏ | 10050/569592 [1:43:56<634:46:48, 4.08s/it]
 2%|▏ | 10051/569592 [1:44:00<655:43:38, 4.22s/it]
 2%|▏ | 10052/569592 [1:44:04<623:57:08, 4.01s/it]
 2%|▏ | 10053/569592 [1:44:09<683:43:20, 4.40s/it]
 2%|▏ | 10054/569592 [1:44:14<724:23:59, 4.66s/it]
 2%|▏ | 10055/569592 [1:44:19<743:43:24, 4.79s/it]
 2%|▏ | 10056/569592 [1:44:24<753:36:25, 4.85s/it]
 2%|▏ | 10057/569592 [1:44:28<703:40:31, 4.53s/it]
 2%|▏ | 10058/569592 [1:44:29<533:23:08, 3.43s/it]
 2%|▏ | 10059/569592 [1:44:34<591:04:53, 3.80s/it]
 2%|▏ | 10060/569592 [1:44:41<735:02:42, 4.73s/it]
 2%|▏ | 10061/569592 [1:44:46<754:48:56, 4.86s/it]
 2%|▏ | 10062/569592 [1:44:51<749:57:57, 4.83s/it]
 2%|▏ | 10063/569592 [1:44:55<752:00:39, 4.84s/it]
 2%|▏ | 10064/569592 [1:45:00<717:57:41, 4.62s/it]
 2%|▏ | 10065/569592 [1:45:05<736:57:58, 4.74s/it]
 2%|▏ | 10066/569592 [1:45:11<827:25:47, 5.32s/it]
 2%|▏ | 10067/569592 [1:45:16<823:24:58, 5.30s/it]
 2%|▏ | 10068/569592 [1:45:21<795:14:59, 5.12s/it]
 2%|▏ | 10069/569592 [1:45:22<600:28:08, 3.86s/it]
 2%|▏ | 10070/569592 [1:45:27<658:52:55, 4.24s/it]
 2%|▏ | 10071/569592 [1:45:34<764:29:54, 4.92s/it]
 2%|▏ | 10072/569592 [1:45:39<772:50:10, 4.97s/it]
 2%|▏ | 10073/569592 [1:45:44<793:42:22, 5.11s/it]
 2%|▏ | 10074/569592 [1:45:48<737:21:53, 4.74s/it]
 2%|▏ | 10075/569592 [1:45:53<766:46:05, 4.93s/it]
 2%|▏ | 10076/569592 [1:45:54<578:05:21, 3.72s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90481664 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
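Note: the DecompressionBombWarning lines above come from PIL's built-in safeguard, Image.MAX_IMAGE_PIXELS (default 89478485 pixels, roughly 256 MiB decoded at 3 bytes per pixel); an image over twice that limit raises DecompressionBombError rather than just warning. If the oversized images in this dataset are trusted, the limit can be raised before the dataloader opens them. A minimal sketch, assuming a ceiling of 120_000_000 pixels (an arbitrary value with headroom over the largest image reported in this log, 105101568 pixels):

    from PIL import Image

    # Raise PIL's decompression-bomb ceiling. 120_000_000 is an assumed
    # value chosen to clear the largest image reported in this log
    # (105101568 pixels); it is not a PIL default.
    Image.MAX_IMAGE_PIXELS = 120_000_000

    # For a fully trusted dataset the check can be disabled entirely:
    # Image.MAX_IMAGE_PIXELS = None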
[tqdm progress: 10077/569592 → 10169/569592 (2%), elapsed 1:45:59 → 1:50:16, ≈1.4–4.4 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[tqdm progress: 10169/569592 → 10229/569592 (2%), elapsed 1:50:16 → 1:53:21, ≈1.5–4.7 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[tqdm progress: 10229/569592 → 10251/569592 (2%), elapsed 1:53:21 → 1:54:18, ≈2.1–3.5 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[tqdm progress: 10251/569592 → 10365/569592 (2%), elapsed 1:54:18 → 1:59:57, ≈1.4–4.5 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90767352 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
[tqdm progress: 10366/569592 → 10425/569592 (2%), elapsed 2:00:00 → 2:04:54, mostly ≈1.8–4.9 s/it; one stall at steps 10384–10385 (elapsed jumped 2:00:48 → 2:02:18, smoothed rate peaked at 23.58 s/it)]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (96582828 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[tqdm progress: 10425/569592 → 10452/569592 (2%), elapsed 2:04:54 → 2:06:07, ≈1.4–4.3 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90481664 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
[tqdm progress: 10453/569592 → 10463/569592 (2%), elapsed 2:06:08 → 2:06:32, ≈1.5–2.7 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (105101568 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[tqdm progress: 10463/569592 → 10517/569592 (2%), elapsed 2:06:32 → 2:09:04, ≈1.9–3.9 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
[tqdm progress: 10518/569592 → 10550/569592 (2%), elapsed 2:09:08 → 2:11:15, ≈3.0–4.6 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
[tqdm progress: 10551/569592 → 10626/569592 (2%), elapsed 2:11:18 → 2:14:36, ≈1.4–4.2 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90750000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
2%|▏ | 10627/569592 [2:14:37<476:00:26, 3.07s/it]
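The DecompressionBombWarning entries interleaved with the progress bar come from Pillow's decompression-bomb guard: opening any image larger than Image.MAX_IMAGE_PIXELS (89,478,485 pixels by default) emits this warning, and anything over twice that limit raises a DecompressionBombError. If the oversized images in this dataset are trusted, the guard can be relaxed before the dataloader opens them. A minimal sketch, not part of this run's recorded code; the 120M ceiling is an assumed value chosen to clear the ~90–113M-pixel images reported in this log:

    import warnings
    from PIL import Image

    # Raise Pillow's pixel-count ceiling (assumed value), or set it to None
    # to disable the check entirely for fully trusted data.
    Image.MAX_IMAGE_PIXELS = 120_000_000
    # Optionally silence the remaining warnings so they stop interleaving
    # with the progress bar.
    warnings.filterwarnings("ignore", category=Image.DecompressionBombWarning)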
2%|▏ | 10628/569592 [2:14:37<376:03:06, 2.42s/it]
[... steps 10629–10669/569592, 2:14:42–2:17:14 elapsed, ≈1.9–4.6 s/it ...]
2%|▏ | 10670/569592 [2:17:19<651:35:53, 4.20s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (112503600 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 10671/569592 [2:17:20<496:57:41, 3.20s/it]
[... steps 10672–10740/569592, 2:17:21–2:20:13 elapsed, ≈1.4–4.1 s/it ...]
2%|▏ | 10741/569592 [2:20:17<503:02:27, 3.24s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (92467200 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 10742/569592 [2:20:18<395:43:05, 2.55s/it]
2%|▏ | 10743/569592 [2:20:21<429:31:05, 2.77s/it]
2%|▏ | 10744/569592 [2:20:26<545:55:35, 3.52s/it]
2%|▏ | 10745/569592 [2:20:27<424:22:43, 2.73s/it]
2%|▏ | 10746/569592 [2:20:30<453:48:39, 2.92s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90481664 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
2%|▏ | 10747/569592 [2:20:35<543:52:59, 3.50s/it]
[... steps 10748–10794/569592, 2:20:39–2:23:08 elapsed, ≈1.4–4.4 s/it ...]
2%|▏ | 10795/569592 [2:23:09<259:29:39, 1.67s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
2%|▏ | 10796/569592 [2:23:12<349:41:37, 2.25s/it]
[... steps 10797–10838/569592, 2:23:13–2:24:58 elapsed, ≈1.6–3.3 s/it ...]
2%|▏ | 10839/569592 [2:25:00<376:55:49, 2.43s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90481664 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 10840/569592 [2:25:05<464:32:37, 2.99s/it]
[... steps 10841–10881/569592, 2:25:09–2:27:24 elapsed, ≈1.9–4.3 s/it ...]
2%|▏ | 10882/569592 [2:27:28<612:51:20, 3.95s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (95971008 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
2%|▏ | 10883/569592 [2:27:31<566:48:27, 3.65s/it]
2%|▏ | 10884/569592 [2:27:34<551:52:38, 3.56s/it]
2%|▏ | 10885/569592 [2:27:37<548:17:05, 3.53s/it]
2%|▏ | 10886/569592 [2:27:45<727:41:14, 4.69s/it]
2%|▏ | 10887/569592 [2:27:48<664:20:54, 4.28s/it]
2%|▏ | 10888/569592 [2:27:55<772:24:04, 4.98s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 10889/569592 [2:27:58<698:29:03, 4.50s/it]
[... steps 10890–10909/569592, 2:27:59–2:28:53 elapsed, ≈1.8–3.8 s/it ...]
2%|▏ | 10910/569592 [2:28:54<240:12:23, 1.55s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (93252400 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 10911/569592 [2:28:57<325:17:59, 2.10s/it]
[... steps 10912–10999/569592, 2:28:58–2:33:22 elapsed, ≈1.4–4.6 s/it ...]
2%|▏ | 11000/569592 [2:33:26<526:42:39, 3.39s/it]
Saving model checkpoint to /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-11000
Configuration saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-11000/config.json
Configuration saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-11000/generation_config.json
The model is bigger than the maximum size per checkpoint (5GB) and is going to be split into 6 checkpoint shards. You can find where each parameter has been saved in the index located at /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-11000/model.safetensors.index.json.
tokenizer config file saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-11000/tokenizer_config.json
Special tokens file saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-11000/special_tokens_map.json
Deleting older checkpoint [/fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-10000] due to args.save_total_limit
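The save-and-rotate sequence above (a full checkpoint written at step 11000, then checkpoint-10000 deleted "due to args.save_total_limit") is the Trainer's step-based checkpointing. A minimal sketch of the settings that would produce it; the values are assumptions inferred from this log, not the run's recorded configuration:

    from transformers import TrainingArguments

    args = TrainingArguments(
        output_dir="/fsx_0/user/zhaojiang/models/qwen-vl-gen",
        save_strategy="steps",  # checkpoint on a fixed step interval
        save_steps=1000,        # assumed: consistent with checkpoint-10000 and checkpoint-11000
        save_total_limit=1,     # assumed: the previous checkpoint is rotated out after each save
    )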
2%|▏ | 11001/569592 [2:35:43<6783:49:47, 43.72s/it]
[... steps 11002–11014/569592, 2:35:48–2:36:41 elapsed, displayed rate decaying 31.96 → 4.37 s/it ...]
2%|▏ | 11015/569592 [2:36:45<632:49:56, 4.08s/it]
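The spike to 43.72 s/it at step 11001 and the slow decay back toward ~4 s/it is a display artifact rather than a real slowdown: the roughly two minutes spent writing the checkpoint (elapsed time jumps from 2:33:26 at step 11000 to 2:35:43 at step 11001) is charged to a single step, and the progress bar's exponentially smoothed rate needs a dozen or so normal steps to forget it. A minimal sketch of the knob involved, assuming plain tqdm semantics rather than the trainer's exact progress bar:

    from tqdm import tqdm

    # smoothing=0 reports the global average rate, so one slow step barely
    # moves the estimate; the default smoothing=0.3 weights recent steps,
    # which produces the spike-and-decay pattern seen above.
    for _ in tqdm(range(1000), smoothing=0):
        pass  # a training step would run here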
2%|▏ | 11016/569592 [2:36:45<486:01:30, 3.13s/it]
[... steps 11017–11064/569592, 2:36:50–2:38:46 elapsed, ≈1.6–4.4 s/it ...]
2%|▏ | 11065/569592 [2:38:48<436:24:29, 2.81s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (95719481 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
2%|▏ | 11066/569592 [2:38:50<369:05:10, 2.38s/it]
2%|▏ | 11067/569592 [2:38:51<330:56:45, 2.13s/it]
2%|▏ | 11068/569592 [2:38:57<502:43:41, 3.24s/it]
2%|▏ | 11069/569592 [2:38:59<428:24:57, 2.76s/it]
2%|▏ | 11070/569592 [2:39:00<347:47:25, 2.24s/it]
2%|▏ | 11071/569592 [2:39:03<411:04:14, 2.65s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (98303744 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
2%|▏ | 11072/569592 [2:39:08<528:52:03, 3.41s/it]
[... steps 11073–11119/569592, 2:39:12–2:41:56 elapsed, ≈1.8–4.6 s/it ...]
2%|▏ | 11120/569592 [2:42:01<715:03:58, 4.61s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
2%|▏ | 11121/569592 [2:42:05<686:18:29, 4.42s/it]
2%|▏ | 11122/569592 [2:42:08<634:29:16, 4.09s/it]
[... steps 11123–11192/569592, 2:42:11–2:45:14 elapsed, ≈1.1–4.8 s/it ...]
2%|▏ | 11193/569592 [2:45:20<433:38:00, 2.80s/it]
2%|▏ | 11194/569592 [2:45:25<530:47:43, 3.42s/it]
2%|▏ | 11194/569592 [2:45:25<530:47:43, 3.42s/it]
2%|▏ | 11195/569592 [2:45:29<571:23:42, 3.68s/it]
2%|▏ | 11195/569592 [2:45:29<571:23:42, 3.68s/it]
2%|▏ | 11196/569592 [2:45:30<441:22:24, 2.85s/it]
2%|▏ | 11196/569592 [2:45:30<441:22:24, 2.85s/it]
2%|▏ | 11197/569592 [2:45:35<531:53:55, 3.43s/it]
2%|▏ | 11197/569592 [2:45:35<531:53:55, 3.43s/it]
2%|▏ | 11198/569592 [2:45:40<628:21:58, 4.05s/it]
2%|▏ | 11198/569592 [2:45:40<628:21:58, 4.05s/it]
2%|▏ | 11199/569592 [2:45:45<662:09:05, 4.27s/it]
2%|▏ | 11199/569592 [2:45:45<662:09:05, 4.27s/it]
2%|▏ | 11200/569592 [2:45:50<677:53:00, 4.37s/it]
2%|▏ | 11200/569592 [2:45:50<677:53:00, 4.37s/it]
2%|▏ | 11201/569592 [2:45:53<615:35:26, 3.97s/it]
2%|▏ | 11201/569592 [2:45:53<615:35:26, 3.97s/it]
2%|▏ | 11202/569592 [2:45:58<663:09:39, 4.28s/it]
2%|▏ | 11202/569592 [2:45:58<663:09:39, 4.28s/it]
2%|▏ | 11203/569592 [2:46:01<621:41:39, 4.01s/it]
2%|▏ | 11203/569592 [2:46:01<621:41:39, 4.01s/it]
2%|▏ | 11204/569592 [2:46:06<669:04:25, 4.31s/it]
2%|▏ | 11204/569592 [2:46:06<669:04:25, 4.31s/it]
2%|▏ | 11205/569592 [2:46:10<630:24:59, 4.06s/it]
2%|▏ | 11205/569592 [2:46:10<630:24:59, 4.06s/it]
2%|▏ | 11206/569592 [2:46:13<592:21:26, 3.82s/it]
2%|▏ | 11206/569592 [2:46:13<592:21:26, 3.82s/it]
2%|▏ | 11207/569592 [2:46:17<600:25:21, 3.87s/it]
2%|▏ | 11207/569592 [2:46:17<600:25:21, 3.87s/it]
2%|▏ | 11208/569592 [2:46:18<462:08:36, 2.98s/it]
2%|▏ | 11208/569592 [2:46:18<462:08:36, 2.98s/it]
2%|▏ | 11209/569592 [2:46:22<547:25:48, 3.53s/it]
2%|▏ | 11209/569592 [2:46:22<547:25:48, 3.53s/it]
2%|▏ | 11210/569592 [2:46:27<616:32:35, 3.97s/it]
2%|▏ | 11210/569592 [2:46:28<616:32:35, 3.97s/it]
2%|▏ | 11211/569592 [2:46:31<596:20:32, 3.84s/it]
2%|▏ | 11211/569592 [2:46:31<596:20:32, 3.84s/it]
2%|▏ | 11212/569592 [2:46:36<637:50:54, 4.11s/it]
2%|▏ | 11212/569592 [2:46:36<637:50:54, 4.11s/it]
2%|▏ | 11213/569592 [2:46:40<660:18:24, 4.26s/it]
2%|▏ | 11213/569592 [2:46:40<660:18:24, 4.26s/it]
2%|▏ | 11214/569592 [2:46:45<656:36:29, 4.23s/it]
2%|▏ | 11214/569592 [2:46:45<656:36:29, 4.23s/it]
2%|▏ | 11215/569592 [2:46:49<649:51:10, 4.19s/it]
2%|▏ | 11215/569592 [2:46:49<649:51:10, 4.19s/it]
2%|▏ | 11216/569592 [2:46:53<673:48:59, 4.34s/it]
2%|▏ | 11216/569592 [2:46:53<673:48:59, 4.34s/it]
2%|▏ | 11217/569592 [2:46:58<691:24:22, 4.46s/it]
2%|▏ | 11217/569592 [2:46:58<691:24:22, 4.46s/it]
2%|▏ | 11218/569592 [2:47:01<632:15:55, 4.08s/it]
2%|▏ | 11218/569592 [2:47:01<632:15:55, 4.08s/it]
2%|▏ | 11219/569592 [2:47:04<588:34:42, 3.79s/it]
2%|▏ | 11219/569592 [2:47:04<588:34:42, 3.79s/it]
2%|▏ | 11220/569592 [2:47:09<637:06:03, 4.11s/it]
2%|▏ | 11220/569592 [2:47:09<637:06:03, 4.11s/it]
2%|▏ | 11221/569592 [2:47:13<610:29:02, 3.94s/it]
2%|▏ | 11221/569592 [2:47:13<610:29:02, 3.94s/it]
2%|▏ | 11222/569592 [2:47:16<591:27:12, 3.81s/it]
2%|▏ | 11222/569592 [2:47:16<591:27:12, 3.81s/it]
2%|▏ | 11223/569592 [2:47:21<656:01:15, 4.23s/it]
2%|▏ | 11223/569592 [2:47:21<656:01:15, 4.23s/it]
2%|▏ | 11224/569592 [2:47:25<614:51:02, 3.96s/it]
2%|▏ | 11224/569592 [2:47:25<614:51:02, 3.96s/it]
2%|▏ | 11225/569592 [2:47:30<649:46:40, 4.19s/it]
2%|▏ | 11225/569592 [2:47:30<649:46:40, 4.19s/it]
2%|▏ | 11226/569592 [2:47:33<619:07:18, 3.99s/it]
2%|▏ | 11226/569592 [2:47:33<619:07:18, 3.99s/it]
2%|▏ | 11227/569592 [2:47:38<650:25:32, 4.19s/it]
2%|▏ | 11227/569592 [2:47:38<650:25:32, 4.19s/it]
2%|▏ | 11228/569592 [2:47:43<686:13:55, 4.42s/it]
2%|▏ | 11228/569592 [2:47:43<686:13:55, 4.42s/it]
2%|▏ | 11229/569592 [2:47:46<653:43:26, 4.21s/it]
2%|▏ | 11229/569592 [2:47:46<653:43:26, 4.21s/it]
2%|▏ | 11230/569592 [2:47:51<669:25:57, 4.32s/it]
2%|▏ | 11230/569592 [2:47:51<669:25:57, 4.32s/it]
2%|▏ | 11231/569592 [2:47:52<509:13:09, 3.28s/it]
2%|▏ | 11231/569592 [2:47:52<509:13:09, 3.28s/it]
2%|▏ | 11232/569592 [2:47:53<398:17:56, 2.57s/it]
2%|▏ | 11232/569592 [2:47:53<398:17:56, 2.57s/it]
2%|▏ | 11233/569592 [2:47:58<542:13:38, 3.50s/it]
2%|▏ | 11233/569592 [2:47:58<542:13:38, 3.50s/it]
2%|▏ | 11234/569592 [2:47:59<428:08:21, 2.76s/it]
2%|▏ | 11234/569592 [2:47:59<428:08:21, 2.76s/it]
2%|▏ | 11235/569592 [2:48:05<538:22:22, 3.47s/it]
2%|▏ | 11235/569592 [2:48:05<538:22:22, 3.47s/it]
2%|▏ | 11236/569592 [2:48:09<598:18:23, 3.86s/it]
2%|▏ | 11236/569592 [2:48:09<598:18:23, 3.86s/it]
2%|▏ | 11237/569592 [2:48:13<570:38:34, 3.68s/it]
2%|▏ | 11237/569592 [2:48:13<570:38:34, 3.68s/it]
2%|▏ | 11238/569592 [2:48:18<630:39:17, 4.07s/it]
2%|▏ | 11238/569592 [2:48:18<630:39:17, 4.07s/it]
2%|▏ | 11239/569592 [2:48:22<658:46:50, 4.25s/it]
2%|▏ | 11239/569592 [2:48:22<658:46:50, 4.25s/it]
2%|▏ | 11240/569592 [2:48:26<656:27:04, 4.23s/it]
2%|▏ | 11240/569592 [2:48:26<656:27:04, 4.23s/it]
2%|▏ | 11241/569592 [2:48:30<633:29:30, 4.08s/it]
2%|▏ | 11241/569592 [2:48:30<633:29:30, 4.08s/it]
2%|▏ | 11242/569592 [2:48:31<497:30:31, 3.21s/it]
2%|▏ | 11242/569592 [2:48:31<497:30:31, 3.21s/it]
2%|▏ | 11243/569592 [2:48:35<530:09:44, 3.42s/it]
2%|▏ | 11243/569592 [2:48:35<530:09:44, 3.42s/it]
2%|▏ | 11244/569592 [2:48:39<529:01:46, 3.41s/it]
2%|▏ | 11244/569592 [2:48:39<529:01:46, 3.41s/it]
2%|▏ | 11245/569592 [2:48:43<595:41:33, 3.84s/it]
2%|▏ | 11245/569592 [2:48:44<595:41:33, 3.84s/it]
2%|▏ | 11246/569592 [2:48:47<572:48:50, 3.69s/it]
2%|▏ | 11246/569592 [2:48:47<572:48:50, 3.69s/it]
2%|▏ | 11247/569592 [2:48:48<442:33:25, 2.85s/it]
2%|▏ | 11247/569592 [2:48:48<442:33:25, 2.85s/it]
2%|▏ | 11248/569592 [2:48:49<353:27:01, 2.28s/it]
2%|▏ | 11248/569592 [2:48:49<353:27:01, 2.28s/it]
2%|▏ | 11249/569592 [2:48:50<292:13:52, 1.88s/it]
2%|▏ | 11249/569592 [2:48:50<292:13:52, 1.88s/it]
2%|▏ | 11250/569592 [2:48:51<249:51:25, 1.61s/it]
2%|▏ | 11250/569592 [2:48:51<249:51:25, 1.61s/it]
2%|▏ | 11251/569592 [2:48:54<333:57:49, 2.15s/it]
2%|▏ | 11251/569592 [2:48:54<333:57:49, 2.15s/it]
2%|▏ | 11252/569592 [2:48:58<412:39:47, 2.66s/it]
2%|▏ | 11252/569592 [2:48:58<412:39:47, 2.66s/it]
2%|▏ | 11253/569592 [2:49:02<466:45:41, 3.01s/it]
2%|▏ | 11253/569592 [2:49:02<466:45:41, 3.01s/it]
2%|▏ | 11254/569592 [2:49:05<493:22:20, 3.18s/it]
2%|▏ | 11254/569592 [2:49:05<493:22:20, 3.18s/it]
2%|▏ | 11255/569592 [2:49:06<390:40:57, 2.52s/it]
2%|▏ | 11255/569592 [2:49:06<390:40:57, 2.52s/it]
2%|▏ | 11256/569592 [2:49:07<318:10:44, 2.05s/it]
2%|▏ | 11256/569592 [2:49:07<318:10:44, 2.05s/it]
2%|▏ | 11257/569592 [2:49:12<430:47:55, 2.78s/it]
2%|▏ | 11257/569592 [2:49:12<430:47:55, 2.78s/it]
2%|▏ | 11258/569592 [2:49:13<349:25:16, 2.25s/it]
2%|▏ | 11258/569592 [2:49:13<349:25:16, 2.25s/it]
2%|▏ | 11259/569592 [2:49:14<288:59:29, 1.86s/it]
2%|▏ | 11259/569592 [2:49:14<288:59:29, 1.86s/it]
2%|▏ | 11260/569592 [2:49:15<246:20:39, 1.59s/it]
2%|▏ | 11260/569592 [2:49:15<246:20:39, 1.59s/it]
2%|▏ | 11261/569592 [2:49:16<216:07:56, 1.39s/it]
2%|▏ | 11261/569592 [2:49:16<216:07:56, 1.39s/it]
2%|▏ | 11262/569592 [2:49:20<345:59:45, 2.23s/it]
2%|▏ | 11262/569592 [2:49:20<345:59:45, 2.23s/it]
2%|▏ | 11263/569592 [2:49:21<313:15:05, 2.02s/it]
2%|▏ | 11263/569592 [2:49:21<313:15:05, 2.02s/it]
2%|▏ | 11264/569592 [2:49:22<266:41:48, 1.72s/it]
2%|▏ | 11264/569592 [2:49:22<266:41:48, 1.72s/it]
2%|▏ | 11265/569592 [2:49:24<283:22:57, 1.83s/it]
2%|▏ | 11265/569592 [2:49:24<283:22:57, 1.83s/it]
2%|▏ | 11266/569592 [2:49:28<369:04:05, 2.38s/it]
2%|▏ | 11266/569592 [2:49:28<369:04:05, 2.38s/it]
2%|▏ | 11267/569592 [2:49:31<380:34:46, 2.45s/it]
2%|▏ | 11267/569592 [2:49:31<380:34:46, 2.45s/it]
2%|▏ | 11268/569592 [2:49:32<329:36:40, 2.13s/it]
2%|▏ | 11268/569592 [2:49:32<329:36:40, 2.13s/it]
2%|▏ | 11269/569592 [2:49:36<399:30:04, 2.58s/it]
2%|▏ | 11269/569592 [2:49:36<399:30:04, 2.58s/it]
2%|▏ | 11270/569592 [2:49:37<352:21:53, 2.27s/it]
2%|▏ | 11270/569592 [2:49:37<352:21:53, 2.27s/it]
2%|▏ | 11271/569592 [2:49:42<450:29:34, 2.90s/it]
2%|▏ | 11271/569592 [2:49:42<450:29:34, 2.90s/it]
2%|▏ | 11272/569592 [2:49:43<365:06:31, 2.35s/it]
2%|▏ | 11272/569592 [2:49:43<365:06:31, 2.35s/it]
2%|▏ | 11273/569592 [2:49:46<409:59:43, 2.64s/it]
2%|▏ | 11273/569592 [2:49:46<409:59:43, 2.64s/it]
2%|▏ | 11274/569592 [2:49:48<396:07:01, 2.55s/it]
2%|▏ | 11274/569592 [2:49:48<396:07:01, 2.55s/it]
2%|▏ | 11275/569592 [2:49:52<439:40:37, 2.84s/it]
2%|▏ | 11275/569592 [2:49:52<439:40:37, 2.84s/it]
2%|▏ | 11276/569592 [2:49:53<352:14:24, 2.27s/it]
2%|▏ | 11276/569592 [2:49:53<352:14:24, 2.27s/it]
2%|▏ | 11277/569592 [2:49:56<381:39:07, 2.46s/it]
2%|▏ | 11277/569592 [2:49:56<381:39:07, 2.46s/it]
2%|▏ | 11278/569592 [2:49:58<394:31:32, 2.54s/it]
2%|▏ | 11278/569592 [2:49:58<394:31:32, 2.54s/it]
2%|▏ | 11279/569592 [2:50:02<462:18:45, 2.98s/it]
2%|▏ | 11279/569592 [2:50:02<462:18:45, 2.98s/it]
2%|▏ | 11280/569592 [2:50:03<367:31:02, 2.37s/it]
2%|▏ | 11280/569592 [2:50:03<367:31:02, 2.37s/it]
2%|▏ | 11281/569592 [2:50:06<367:20:56, 2.37s/it]
2%|▏ | 11281/569592 [2:50:06<367:20:56, 2.37s/it]
2%|▏ | 11282/569592 [2:50:09<418:28:02, 2.70s/it]
2%|▏ | 11282/569592 [2:50:09<418:28:02, 2.70s/it]
2%|▏ | 11283/569592 [2:50:12<445:03:27, 2.87s/it]
2%|▏ | 11283/569592 [2:50:12<445:03:27, 2.87s/it]
2%|▏ | 11284/569592 [2:50:13<355:30:35, 2.29s/it]
2%|▏ | 11284/569592 [2:50:13<355:30:35, 2.29s/it]
2%|▏ | 11285/569592 [2:50:17<420:34:48, 2.71s/it]
2%|▏ | 11285/569592 [2:50:17<420:34:48, 2.71s/it]
2%|▏ | 11286/569592 [2:50:19<375:15:20, 2.42s/it]
2%|▏ | 11286/569592 [2:50:19<375:15:20, 2.42s/it]
2%|▏ | 11287/569592 [2:50:24<496:36:03, 3.20s/it]
2%|▏ | 11287/569592 [2:50:24<496:36:03, 3.20s/it]
2%|▏ | 11288/569592 [2:50:25<394:05:47, 2.54s/it]
2%|▏ | 11288/569592 [2:50:25<394:05:47, 2.54s/it]
2%|▏ | 11289/569592 [2:50:27<352:13:04, 2.27s/it]
2%|▏ | 11289/569592 [2:50:27<352:13:04, 2.27s/it]
2%|▏ | 11290/569592 [2:50:29<351:33:43, 2.27s/it]
2%|▏ | 11290/569592 [2:50:29<351:33:43, 2.27s/it]
2%|▏ | 11291/569592 [2:50:35<549:37:29, 3.54s/it]
2%|▏ | 11291/569592 [2:50:35<549:37:29, 3.54s/it]
2%|▏ | 11292/569592 [2:50:36<428:34:20, 2.76s/it]
2%|▏ | 11292/569592 [2:50:36<428:34:20, 2.76s/it]
2%|▏ | 11293/569592 [2:50:37<357:51:55, 2.31s/it]
2%|▏ | 11293/569592 [2:50:37<357:51:55, 2.31s/it]
2%|▏ | 11294/569592 [2:50:41<419:36:05, 2.71s/it]
2%|▏ | 11294/569592 [2:50:41<419:36:05, 2.71s/it]
2%|▏ | 11295/569592 [2:50:44<443:17:21, 2.86s/it]
2%|▏ | 11295/569592 [2:50:44<443:17:21, 2.86s/it]
2%|▏ | 11296/569592 [2:50:45<354:37:33, 2.29s/it]
2%|▏ | 11296/569592 [2:50:45<354:37:33, 2.29s/it]
2%|▏ | 11297/569592 [2:50:47<305:56:50, 1.97s/it]
2%|▏ | 11297/569592 [2:50:47<305:56:50, 1.97s/it]
2%|▏ | 11298/569592 [2:50:50<382:26:52, 2.47s/it]
2%|▏ | 11298/569592 [2:50:50<382:26:52, 2.47s/it]
2%|▏ | 11299/569592 [2:50:55<472:10:02, 3.04s/it]
2%|▏ | 11299/569592 [2:50:55<472:10:02, 3.04s/it]
2%|▏ | 11300/569592 [2:50:58<474:43:45, 3.06s/it]
2%|▏ | 11300/569592 [2:50:58<474:43:45, 3.06s/it]
2%|▏ | 11301/569592 [2:50:59<381:52:13, 2.46s/it]
2%|▏ | 11301/569592 [2:50:59<381:52:13, 2.46s/it]
2%|▏ | 11302/569592 [2:51:00<312:42:48, 2.02s/it]
2%|▏ | 11302/569592 [2:51:00<312:42:48, 2.02s/it]
2%|▏ | 11303/569592 [2:51:03<365:33:02, 2.36s/it]
2%|▏ | 11303/569592 [2:51:03<365:33:02, 2.36s/it]
2%|▏ | 11304/569592 [2:51:04<305:51:51, 1.97s/it]
2%|▏ | 11304/569592 [2:51:04<305:51:51, 1.97s/it]
2%|▏ | 11305/569592 [2:51:08<395:04:43, 2.55s/it]
2%|▏ | 11305/569592 [2:51:08<395:04:43, 2.55s/it]
2%|▏ | 11306/569592 [2:51:11<427:55:53, 2.76s/it]
2%|▏ | 11306/569592 [2:51:11<427:55:53, 2.76s/it]
2%|▏ | 11307/569592 [2:51:13<387:17:24, 2.50s/it]
2%|▏ | 11307/569592 [2:51:13<387:17:24, 2.50s/it]
2%|▏ | 11308/569592 [2:51:15<372:59:59, 2.41s/it]
2%|▏ | 11308/569592 [2:51:15<372:59:59, 2.41s/it]
2%|▏ | 11309/569592 [2:51:18<416:33:08, 2.69s/it]
2%|▏ | 11309/569592 [2:51:18<416:33:08, 2.69s/it]
2%|▏ | 11310/569592 [2:51:19<335:24:20, 2.16s/it]
2%|▏ | 11310/569592 [2:51:19<335:24:20, 2.16s/it]
2%|▏ | 11311/569592 [2:51:24<436:08:03, 2.81s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
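The DecompressionBombWarning above is PIL's guard against oversized images: this sample decodes to roughly 100.9 megapixels, above the library's default ceiling of 89478485 pixels. A minimal sketch of two standard ways to handle it, assuming the dataset is trusted; the limit value and the file path below are illustrative, not taken from this job:

import warnings
from PIL import Image

# Option 1: raise the pixel ceiling so ~100-megapixel samples load
# silently (None disables the check entirely). The value is an
# assumption; tune it to the dataset.
Image.MAX_IMAGE_PIXELS = 120_000_000

# Option 2 (alternative to option 1): keep the default ceiling but
# escalate the warning to an error, so oversized samples fail loudly
# and can be skipped in the data-loading path.
warnings.simplefilter("error", Image.DecompressionBombWarning)
try:
    img = Image.open("sample.jpg")  # hypothetical path
    img.load()
except Image.DecompressionBombWarning:
    img = None  # drop the oversized sample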
[tqdm progress: 2% | steps 11311-11420 of 569592 | elapsed 2:51:24 -> 2:57:08 | 1.2-5.6 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (108576768 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[tqdm progress: 2% | steps 11420-11529 of 569592 | elapsed 2:57:08 -> 3:02:54 | 1.4-4.6 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (96000000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[tqdm progress: 2% | steps 11529-11692 of 569592 | elapsed 3:02:54 -> 3:11:47 | 1.3-5.0 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 11693/569592 [3:11:52<622:06:17, 4.01s/it]
[... tqdm redraw lines for steps 11694-11792 elided ...]
2%|▏ | 11793/569592 [3:16:37<435:57:02, 2.81s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 11794/569592 [3:16:42<574:52:00, 3.71s/it]
[... tqdm redraw lines for steps 11795-11798 elided ...]
2%|▏ | 11799/569592 [3:16:57<503:04:21, 3.25s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (89998064 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 11800/569592 [3:17:00<509:47:35, 3.29s/it]
[... tqdm redraw lines for steps 11801-11803 elided ...]
2%|▏ | 11804/569592 [3:17:14<574:23:22, 3.71s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 11805/569592 [3:17:19<617:04:06, 3.98s/it]
[... tqdm redraw lines for steps 11806-11820 elided ...]
2%|▏ | 11821/569592 [3:18:16<450:32:15, 2.91s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 11822/569592 [3:18:20<481:50:32, 3.11s/it]
[... tqdm redraw lines for steps 11823-11874 elided ...]
2%|▏ | 11875/569592 [3:20:37<372:50:13, 2.41s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90481664 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 11876/569592 [3:20:40<395:10:20, 2.55s/it]
[... tqdm redraw lines for steps 11877-11891 elided ...]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (98911692 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 11892/569592 [3:21:28<535:06:12, 3.45s/it]
2%|▏ | 11893/569592 [3:21:34<658:11:30, 4.25s/it]
[... tqdm redraw lines for steps 11894-11923 elided ...]
2%|▏ | 11924/569592 [3:23:24<581:19:26, 3.75s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90750000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 11925/569592 [3:23:29<609:26:00, 3.93s/it]
[... tqdm redraw lines for steps 11926-11929 elided ...]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90750000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 11930/569592 [3:23:44<514:07:42, 3.32s/it]
2%|▏ | 11931/569592 [3:23:49<583:54:28, 3.77s/it]
[... tqdm redraw lines for steps 11932-11961 elided ...]
2%|▏ | 11962/569592 [3:25:17<294:50:43, 1.90s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (98085130 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (91410432 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
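Note on the back-to-back warnings here: the dataloader worker processes and the tqdm bar share the same unbuffered stderr, so when several workers trip the guard at nearly the same moment, the warning text can land in the middle of a progress line. A hedged sketch of silencing the warning inside every worker via the standard DataLoader worker_init_fn hook (the stub dataset and worker count are placeholders, not this job's configuration):

    import warnings
    from PIL import Image
    from torch.utils.data import DataLoader, Dataset

    class _Stub(Dataset):  # placeholder dataset so the sketch is self-contained
        def __len__(self):
            return 4
        def __getitem__(self, i):
            return i

    def _quiet_worker(worker_id):
        # Runs once in each spawned dataloader worker process.
        warnings.filterwarnings("ignore", category=Image.DecompressionBombWarning)

    loader = DataLoader(_Stub(), num_workers=2, worker_init_fn=_quiet_worker)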
2%|▏ | 11963/569592 [3:25:18<251:36:16, 1.62s/it]
[... tqdm redraw lines for steps 11964-11972 elided ...]
2%|▏ | 11973/569592 [3:25:46<444:12:59, 2.87s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 11974/569592 [3:25:47<355:27:22, 2.29s/it]
[... tqdm redraw lines for steps 11975-11983 elided ...]
2%|▏ | 11984/569592 [3:26:13<443:35:02, 2.86s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (108743184 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 11985/569592 [3:26:14<356:43:27, 2.30s/it]
[... tqdm redraw lines for steps 11986-11999 elided ...]
2%|▏ | 12000/569592 [3:26:54<365:46:00, 2.36s/it]
Saving model checkpoint to /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-12000
Configuration saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-12000/config.json
Configuration saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-12000/generation_config.json
The model is bigger than the maximum size per checkpoint (5GB) and is going to be split in 6 checkpoint shards. You can find where each parameters has been saved in the index located at /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-12000/model.safetensors.index.json.
tokenizer config file saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-12000/tokenizer_config.json
Special tokens file saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-12000/special_tokens_map.json
Deleting older checkpoint [/fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-11000] due to args.save_total_limit
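Note on the checkpoint block above: the messages match the Hugging Face Trainer's periodic saving with checkpoint rotation. A hedged reconstruction of the arguments implied by the log (save_steps and save_total_limit are inferred from checkpoint-11000 being replaced by checkpoint-12000, not read from the job script):

    from transformers import TrainingArguments

    args = TrainingArguments(
        output_dir="/fsx_0/user/zhaojiang/models/qwen-vl-gen",
        save_strategy="steps",
        save_steps=1000,      # inferred: checkpoint-11000 -> checkpoint-12000
        save_total_limit=1,   # inferred: older checkpoints deleted, as logged
    )

The 6-shard split comes from safetensors serialization at save_pretrained()'s default max_shard_size="5GB", which puts the full weight set somewhere above 25 GB; model.safetensors.index.json maps each parameter to its shard.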
2%|▏ | 12001/569592 [3:29:08<6521:21:13, 42.10s/it]
2%|▏ | 12002/569592 [3:29:13<4771:11:42, 30.80s/it]
2%|▏ | 12003/569592 [3:29:18<3560:35:05, 22.99s/it]
2%|▏ | 12004/569592 [3:29:23<2730:29:11, 17.63s/it]
2%|▏ | 12005/569592 [3:29:26<2076:03:24, 13.40s/it]
2%|▏ | 12006/569592 [3:29:27<1493:53:11, 9.65s/it]
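Note on the rate spike at step 12001: training did not slow down; the wall time spent writing the six shards is attributed to the first step after the save, and tqdm's smoothed s/it (an exponential moving average) then decays back toward the true rate, which is why the ETA collapses from 6521 h to about 1494 h within six steps. A rough check from the log's own timestamps:

    # Elapsed-time delta across the checkpoint (timestamps from the log):
    t_12000 = 3 * 3600 + 26 * 60 + 54   # 3:26:54 at step 12000
    t_12001 = 3 * 3600 + 29 * 60 + 8    # 3:29:08 at step 12001
    print(t_12001 - t_12000)            # 134 s: one step plus the checkpoint write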
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (94781302 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (96622550 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (115022592 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 12007/569592 [3:29:32<1290:34:03, 8.33s/it]
[... tqdm redraw lines for steps 12008-12059 elided ...]
2%|▏ | 12060/569592 [3:32:52<638:02:58, 4.12s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 12061/569592 [3:32:55<615:56:08, 3.98s/it]
[... tqdm redraw lines for steps 12062-12081 elided ...]
2%|▏ | 12082/569592 [3:33:53<411:54:38, 2.66s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 12083/569592 [3:33:54<332:29:33, 2.15s/it]
[... tqdm redraw lines for steps 12084-12174 elided ...]
2%|▏ | 12175/569592 [3:38:39<518:26:52, 3.35s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 12176/569592 [3:38:44<601:21:46, 3.88s/it]
[... tqdm redraw lines for steps 12177-12274 elided ...]
2%|▏ | 12275/569592 [3:43:28<462:52:30, 2.99s/it]
2%|▏ | 12275/569592 [3:43:28<462:52:30, 2.99s/it]
2%|▏ | 12276/569592 [3:43:31<475:08:19, 3.07s/it]
2%|▏ | 12276/569592 [3:43:31<475:08:19, 3.07s/it]
2%|▏ | 12277/569592 [3:43:32<375:36:19, 2.43s/it]
2%|▏ | 12277/569592 [3:43:32<375:36:19, 2.43s/it]
2%|▏ | 12278/569592 [3:43:36<459:32:25, 2.97s/it]
2%|▏ | 12278/569592 [3:43:36<459:32:25, 2.97s/it]
2%|▏ | 12279/569592 [3:43:37<367:15:59, 2.37s/it]
2%|▏ | 12279/569592 [3:43:37<367:15:59, 2.37s/it]
2%|▏ | 12280/569592 [3:43:41<439:46:43, 2.84s/it]
2%|▏ | 12280/569592 [3:43:41<439:46:43, 2.84s/it]
2%|▏ | 12281/569592 [3:43:46<548:40:43, 3.54s/it]
2%|▏ | 12281/569592 [3:43:46<548:40:43, 3.54s/it]
2%|▏ | 12282/569592 [3:43:51<604:02:36, 3.90s/it]
2%|▏ | 12282/569592 [3:43:51<604:02:36, 3.90s/it]
2%|▏ | 12283/569592 [3:43:54<573:02:08, 3.70s/it]
2%|▏ | 12283/569592 [3:43:54<573:02:08, 3.70s/it]
2%|▏ | 12284/569592 [3:43:57<543:32:23, 3.51s/it]
2%|▏ | 12284/569592 [3:43:57<543:32:23, 3.51s/it]
2%|▏ | 12285/569592 [3:44:01<550:00:11, 3.55s/it]
2%|▏ | 12285/569592 [3:44:01<550:00:11, 3.55s/it]
2%|▏ | 12286/569592 [3:44:05<544:38:49, 3.52s/it]
2%|▏ | 12286/569592 [3:44:05<544:38:49, 3.52s/it]
2%|▏ | 12287/569592 [3:44:08<536:34:04, 3.47s/it]
2%|▏ | 12287/569592 [3:44:08<536:34:04, 3.47s/it]
2%|▏ | 12288/569592 [3:44:12<549:13:16, 3.55s/it]
2%|▏ | 12288/569592 [3:44:12<549:13:16, 3.55s/it]
2%|▏ | 12289/569592 [3:44:15<543:12:00, 3.51s/it]
2%|▏ | 12289/569592 [3:44:15<543:12:00, 3.51s/it]
2%|▏ | 12290/569592 [3:44:18<526:00:25, 3.40s/it]
2%|▏ | 12290/569592 [3:44:18<526:00:25, 3.40s/it]
2%|▏ | 12291/569592 [3:44:19<409:27:39, 2.64s/it]
2%|▏ | 12291/569592 [3:44:19<409:27:39, 2.64s/it]
2%|▏ | 12292/569592 [3:44:23<453:58:15, 2.93s/it]
2%|▏ | 12292/569592 [3:44:23<453:58:15, 2.93s/it]
2%|▏ | 12293/569592 [3:44:27<518:19:46, 3.35s/it]
2%|▏ | 12293/569592 [3:44:27<518:19:46, 3.35s/it]
2%|▏ | 12294/569592 [3:44:28<408:41:26, 2.64s/it]
2%|▏ | 12294/569592 [3:44:28<408:41:26, 2.64s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
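Note (editorial): the DecompressionBombWarning entries in this log come from Pillow's decompression-bomb guard. PIL.Image emits the warning whenever a decoded image exceeds Image.MAX_IMAGE_PIXELS (default 1024**3 // 12 = 89478485 pixels, the exact limit printed above) and escalates to DecompressionBombError past roughly twice that limit. If the oversized training images are trusted, a minimal sketch of a workaround that would run before the dataloader decodes images; the 120_000_000 ceiling is an illustrative value, not taken from this job:

    from PIL import Image
    import warnings

    # Raise Pillow's pixel-count ceiling so ~100-megapixel images decode
    # without the warning (setting it to None disables the check entirely).
    Image.MAX_IMAGE_PIXELS = 120_000_000  # above the 100920000-pixel image logged here

    # Alternative: keep the default limit but silence the warning for this run.
    warnings.filterwarnings("ignore", category=Image.DecompressionBombWarning)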
[progress-bar redraws collapsed: steps 12295-12512/569592 (2%), elapsed 3:44:32 -> 3:55:18, ~1.2-4.5 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90750000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[progress-bar redraws collapsed: steps 12513-12537/569592 (2%), elapsed 3:55:19 -> 3:56:35, ~1.8-4.2 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[progress-bar redraws collapsed: steps 12538-12564/569592 (2%), elapsed 3:56:40 -> 3:57:40, ~1.4-2.9 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90481664 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 12565/569592 [3:57:44<465:35:34, 3.01s/it]
[progress-bar redraws collapsed: steps 12566-12739/569592 (2%), elapsed 3:57:48 -> 4:06:36, ~1.4-4.6 s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[progress-bar redraws collapsed: steps 12740-12833/569592 (2%), elapsed 4:06:39 -> 4:10:54, ~1.2-4.1 s/it]
2%|▏ | 12834/569592 [4:10:59<594:29:31, 3.84s/it]
2%|▏ | 12834/569592 [4:10:59<594:29:31, 3.84s/it]
2%|▏ | 12835/569592 [4:11:03<579:17:41, 3.75s/it]
2%|▏ | 12835/569592 [4:11:03<579:17:41, 3.75s/it]
2%|▏ | 12836/569592 [4:11:06<542:25:47, 3.51s/it]
2%|▏ | 12836/569592 [4:11:06<542:25:47, 3.51s/it]
2%|▏ | 12837/569592 [4:11:10<597:45:24, 3.87s/it]
2%|▏ | 12837/569592 [4:11:10<597:45:24, 3.87s/it]
2%|▏ | 12838/569592 [4:11:15<623:29:43, 4.03s/it]
2%|▏ | 12838/569592 [4:11:15<623:29:43, 4.03s/it]
2%|▏ | 12839/569592 [4:11:18<572:17:15, 3.70s/it]
2%|▏ | 12839/569592 [4:11:18<572:17:15, 3.70s/it]
2%|▏ | 12840/569592 [4:11:21<560:53:22, 3.63s/it]
2%|▏ | 12840/569592 [4:11:21<560:53:22, 3.63s/it]
2%|▏ | 12841/569592 [4:11:25<564:06:41, 3.65s/it]
2%|▏ | 12841/569592 [4:11:25<564:06:41, 3.65s/it]
2%|▏ | 12842/569592 [4:11:26<438:48:10, 2.84s/it]
2%|▏ | 12842/569592 [4:11:26<438:48:10, 2.84s/it]
2%|▏ | 12843/569592 [4:11:29<483:07:47, 3.12s/it]
2%|▏ | 12843/569592 [4:11:30<483:07:47, 3.12s/it]
2%|▏ | 12844/569592 [4:11:30<380:34:52, 2.46s/it]
2%|▏ | 12844/569592 [4:11:30<380:34:52, 2.46s/it]
2%|▏ | 12845/569592 [4:11:35<481:43:49, 3.11s/it]
2%|▏ | 12845/569592 [4:11:35<481:43:49, 3.11s/it]
2%|▏ | 12846/569592 [4:11:40<559:46:40, 3.62s/it]
2%|▏ | 12846/569592 [4:11:40<559:46:40, 3.62s/it]
2%|▏ | 12847/569592 [4:11:43<539:34:12, 3.49s/it]
2%|▏ | 12847/569592 [4:11:43<539:34:12, 3.49s/it]
2%|▏ | 12848/569592 [4:11:46<522:08:29, 3.38s/it]
2%|▏ | 12848/569592 [4:11:46<522:08:29, 3.38s/it]
2%|▏ | 12849/569592 [4:11:51<595:10:42, 3.85s/it]
2%|▏ | 12849/569592 [4:11:51<595:10:42, 3.85s/it]
2%|▏ | 12850/569592 [4:11:55<615:02:22, 3.98s/it]
2%|▏ | 12850/569592 [4:11:55<615:02:22, 3.98s/it]
2%|▏ | 12851/569592 [4:11:59<585:25:50, 3.79s/it]
2%|▏ | 12851/569592 [4:11:59<585:25:50, 3.79s/it]
2%|▏ | 12852/569592 [4:12:03<615:19:33, 3.98s/it]
2%|▏ | 12852/569592 [4:12:03<615:19:33, 3.98s/it]
2%|▏ | 12853/569592 [4:12:06<583:05:03, 3.77s/it]
2%|▏ | 12853/569592 [4:12:06<583:05:03, 3.77s/it]
2%|▏ | 12854/569592 [4:12:11<616:08:48, 3.98s/it]
2%|▏ | 12854/569592 [4:12:11<616:08:48, 3.98s/it]
2%|▏ | 12855/569592 [4:12:12<475:09:41, 3.07s/it]
2%|▏ | 12855/569592 [4:12:12<475:09:41, 3.07s/it]
2%|▏ | 12856/569592 [4:12:13<376:10:43, 2.43s/it]
2%|▏ | 12856/569592 [4:12:13<376:10:43, 2.43s/it]
2%|▏ | 12857/569592 [4:12:19<533:05:03, 3.45s/it]
2%|▏ | 12857/569592 [4:12:19<533:05:03, 3.45s/it]
2%|▏ | 12858/569592 [4:12:23<593:01:30, 3.83s/it]
2%|▏ | 12858/569592 [4:12:23<593:01:30, 3.83s/it]
2%|▏ | 12859/569592 [4:12:27<577:57:50, 3.74s/it]
2%|▏ | 12859/569592 [4:12:27<577:57:50, 3.74s/it]
2%|▏ | 12860/569592 [4:12:28<444:58:03, 2.88s/it]
2%|▏ | 12860/569592 [4:12:28<444:58:03, 2.88s/it]
2%|▏ | 12861/569592 [4:12:29<353:40:10, 2.29s/it]
2%|▏ | 12861/569592 [4:12:29<353:40:10, 2.29s/it]
2%|▏ | 12862/569592 [4:12:33<473:26:46, 3.06s/it]
2%|▏ | 12862/569592 [4:12:34<473:26:46, 3.06s/it]
2%|▏ | 12863/569592 [4:12:38<519:15:20, 3.36s/it]
2%|▏ | 12863/569592 [4:12:38<519:15:20, 3.36s/it]
2%|▏ | 12864/569592 [4:12:42<581:42:23, 3.76s/it]
2%|▏ | 12864/569592 [4:12:42<581:42:23, 3.76s/it]
2%|▏ | 12865/569592 [4:12:43<449:05:56, 2.90s/it]
2%|▏ | 12865/569592 [4:12:43<449:05:56, 2.90s/it]
2%|▏ | 12866/569592 [4:12:46<468:36:19, 3.03s/it]
2%|▏ | 12866/569592 [4:12:46<468:36:19, 3.03s/it]
2%|▏ | 12867/569592 [4:12:52<566:56:00, 3.67s/it]
2%|▏ | 12867/569592 [4:12:52<566:56:00, 3.67s/it]
2%|▏ | 12868/569592 [4:12:53<437:49:42, 2.83s/it]
2%|▏ | 12868/569592 [4:12:53<437:49:42, 2.83s/it]
2%|▏ | 12869/569592 [4:12:53<349:23:56, 2.26s/it]
2%|▏ | 12869/569592 [4:12:53<349:23:56, 2.26s/it]
2%|▏ | 12870/569592 [4:12:54<289:28:09, 1.87s/it]
2%|▏ | 12870/569592 [4:12:54<289:28:09, 1.87s/it]
2%|▏ | 12871/569592 [4:12:59<394:36:11, 2.55s/it]
2%|▏ | 12871/569592 [4:12:59<394:36:11, 2.55s/it]
2%|▏ | 12872/569592 [4:13:03<480:26:39, 3.11s/it]
2%|▏ | 12872/569592 [4:13:03<480:26:39, 3.11s/it]
2%|▏ | 12873/569592 [4:13:04<378:52:24, 2.45s/it]
2%|▏ | 12873/569592 [4:13:04<378:52:24, 2.45s/it]
2%|▏ | 12874/569592 [4:13:07<425:40:21, 2.75s/it]
2%|▏ | 12874/569592 [4:13:07<425:40:21, 2.75s/it]
2%|▏ | 12875/569592 [4:13:12<496:20:19, 3.21s/it]
2%|▏ | 12875/569592 [4:13:12<496:20:19, 3.21s/it]
2%|▏ | 12876/569592 [4:13:15<521:18:13, 3.37s/it]
2%|▏ | 12876/569592 [4:13:15<521:18:13, 3.37s/it]
2%|▏ | 12877/569592 [4:13:19<514:33:40, 3.33s/it]
2%|▏ | 12877/569592 [4:13:19<514:33:40, 3.33s/it]
2%|▏ | 12878/569592 [4:13:22<504:41:36, 3.26s/it]
2%|▏ | 12878/569592 [4:13:22<504:41:36, 3.26s/it]
2%|▏ | 12879/569592 [4:13:23<395:26:05, 2.56s/it]
2%|▏ | 12879/569592 [4:13:23<395:26:05, 2.56s/it]
2%|▏ | 12880/569592 [4:13:24<321:53:21, 2.08s/it]
2%|▏ | 12880/569592 [4:13:24<321:53:21, 2.08s/it]
2%|▏ | 12881/569592 [4:13:29<464:11:45, 3.00s/it]
2%|▏ | 12881/569592 [4:13:29<464:11:45, 3.00s/it]
2%|▏ | 12882/569592 [4:13:32<479:21:07, 3.10s/it]
2%|▏ | 12882/569592 [4:13:32<479:21:07, 3.10s/it]
2%|▏ | 12883/569592 [4:13:33<378:29:22, 2.45s/it]
2%|▏ | 12883/569592 [4:13:33<378:29:22, 2.45s/it]
2%|▏ | 12884/569592 [4:13:37<443:17:42, 2.87s/it]
2%|▏ | 12884/569592 [4:13:37<443:17:42, 2.87s/it]
2%|▏ | 12885/569592 [4:13:40<473:23:00, 3.06s/it]
2%|▏ | 12885/569592 [4:13:40<473:23:00, 3.06s/it]
2%|▏ | 12886/569592 [4:13:43<476:50:58, 3.08s/it]
2%|▏ | 12886/569592 [4:13:43<476:50:58, 3.08s/it]
2%|▏ | 12887/569592 [4:13:47<503:26:11, 3.26s/it]
2%|▏ | 12887/569592 [4:13:47<503:26:11, 3.26s/it]
2%|▏ | 12888/569592 [4:13:50<499:36:15, 3.23s/it]
2%|▏ | 12888/569592 [4:13:50<499:36:15, 3.23s/it]
2%|▏ | 12889/569592 [4:13:54<536:18:37, 3.47s/it]
2%|▏ | 12889/569592 [4:13:54<536:18:37, 3.47s/it]
2%|▏ | 12890/569592 [4:13:55<417:18:52, 2.70s/it]
2%|▏ | 12890/569592 [4:13:55<417:18:52, 2.70s/it]
2%|▏ | 12891/569592 [4:13:56<335:03:15, 2.17s/it]
2%|▏ | 12891/569592 [4:13:56<335:03:15, 2.17s/it]
2%|▏ | 12892/569592 [4:13:59<389:22:55, 2.52s/it]
2%|▏ | 12892/569592 [4:13:59<389:22:55, 2.52s/it]
2%|▏ | 12893/569592 [4:14:00<318:02:53, 2.06s/it]
2%|▏ | 12893/569592 [4:14:00<318:02:53, 2.06s/it]
2%|▏ | 12894/569592 [4:14:04<378:54:41, 2.45s/it]
2%|▏ | 12894/569592 [4:14:04<378:54:41, 2.45s/it]
2%|▏ | 12895/569592 [4:14:05<312:58:34, 2.02s/it]
2%|▏ | 12895/569592 [4:14:05<312:58:34, 2.02s/it]
2%|▏ | 12896/569592 [4:14:06<264:50:44, 1.71s/it]
2%|▏ | 12896/569592 [4:14:06<264:50:44, 1.71s/it]
2%|▏ | 12897/569592 [4:14:07<230:31:38, 1.49s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (98716320 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
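This DecompressionBombWarning (it recurs several times below) is PIL's guard against images whose pixel count exceeds Image.MAX_IMAGE_PIXELS: 89478485 pixels by default, as the message states; PIL warns above the limit and raises DecompressionBombError above twice the limit. If the oversized images in this dataset are trusted, one possible mitigation, a sketch of an assumed fix rather than anything this run does, is to raise the limit before the dataloader opens them:

from PIL import Image

# Raise PIL's decompression-bomb threshold (default 89478485 pixels) so that
# trusted ~1e8-pixel training images no longer trigger the warning.
# This is an assumed mitigation, not part of the logged run.
Image.MAX_IMAGE_PIXELS = 200_000_000
# Image.MAX_IMAGE_PIXELS = None  # would disable the check entirely (riskier)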
2%|▏ | 12898/569592 [4:14:08<206:13:17, 1.33s/it]
...
2%|▏ | 12902/569592 [4:14:19<426:35:27, 2.76s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90750000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 12903/569592 [4:14:20<344:56:56, 2.23s/it]
...
2%|▏ | 13000/569592 [4:19:21<362:14:08, 2.34s/it]
Saving model checkpoint to /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-13000
Configuration saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-13000/config.json
Configuration saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-13000/generation_config.json
The model is bigger than the maximum size per checkpoint (5GB) and is going to be split into 6 checkpoint shards. You can find where each parameter has been saved in the index located at /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-13000/model.safetensors.index.json.
tokenizer config file saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-13000/tokenizer_config.json
Special tokens file saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-13000/special_tokens_map.json
Deleting older checkpoint [/fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-12000] due to args.save_total_limit
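The save/delete pair above is the Trainer's rotating-checkpoint behavior; the long per-step reading immediately below (≈44 s/it at step 13001, after the 4:19:21 → 4:21:42 gap) reflects the blocking checkpoint write being folded into that step's timing. A minimal sketch of a configuration that would reproduce this behavior; only output_dir comes from the log, save_steps=1000 is inferred from checkpoint-12000 → checkpoint-13000, and the remaining values are assumptions:

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="/fsx_0/user/zhaojiang/models/qwen-vl-gen",  # from the log
    save_strategy="steps",  # checkpoint on a fixed step interval
    save_steps=1000,        # inferred: checkpoint-12000, then checkpoint-13000
    save_total_limit=1,     # a limit of 1 reproduces the deletion logged above
)

Because the model exceeds the 5GB per-file limit, each checkpoint is written as 6 safetensors shards plus model.safetensors.index.json; AutoModel.from_pretrained(checkpoint_dir) reads that index and reassembles the shards transparently.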
2%|▏ | 13001/569592 [4:21:42<6800:24:32, 43.98s/it]
...
2%|▏ | 13055/569592 [4:24:07<576:23:31, 3.73s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 13056/569592 [4:24:08<444:54:15, 2.88s/it]
...
2%|▏ | 13264/569592 [4:34:25<472:01:12, 3.05s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (119245980 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 13265/569592 [4:34:28<495:07:53, 3.20s/it]
...
2%|▏ | 13313/569592 [4:37:03<575:44:54, 3.73s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 13314/569592 [4:37:08<607:14:13, 3.93s/it]
2%|▏ | 13315/569592 [4:37:11<587:07:48, 3.80s/it]
...
2%|▏ | 13368/569592 [4:39:31<270:59:15, 1.75s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 13369/569592 [4:39:35<349:17:32, 2.26s/it]
2%|▏ | 13370/569592 [4:39:36<312:10:47, 2.02s/it]
2%|▏ | 13371/569592 [4:39:40<388:46:38, 2.52s/it]
2%|▏ | 13372/569592 [4:39:41<344:27:49, 2.23s/it]
2%|▏ | 13373/569592 [4:39:46<477:34:17, 3.09s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 13374/569592 [4:39:47<377:09:16, 2.44s/it]
[steps 13375-13552 omitted: duplicated progress-bar refreshes]
2%|▏ | 13553/569592 [4:48:45<392:41:14, 2.54s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 13553/569592 [4:48:45<392:41:14, 2.54s/it]
2%|▏ | 13554/569592 [4:48:49<466:37:20, 3.02s/it]
[steps 13555-13590 omitted: duplicated progress-bar refreshes]
2%|▏ | 13591/569592 [4:50:29<440:07:32, 2.85s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 13592/569592 [4:50:33<476:16:45, 3.08s/it]
[steps 13593-13628 omitted: duplicated progress-bar refreshes]
2%|▏ | 13629/569592 [4:52:05<542:21:41, 3.51s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (101756928 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 13630/569592 [4:52:10<621:11:01, 4.02s/it]
[steps 13631-13661 omitted: duplicated progress-bar refreshes]
2%|▏ | 13662/569592 [4:54:00<433:39:17, 2.81s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (92763648 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 13663/569592 [4:54:03<467:32:03, 3.03s/it]
[steps 13664-13683 omitted: duplicated progress-bar refreshes]
2%|▏ | 13684/569592 [4:55:11<523:25:28, 3.39s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90203575 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 13685/569592 [4:55:16<591:00:04, 3.83s/it]
[steps 13686-13715 omitted: duplicated progress-bar refreshes]
2%|▏ | 13716/569592 [4:56:28<271:53:25, 1.76s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (101518284 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 13717/569592 [4:56:29<276:48:21, 1.79s/it]
[steps 13718-13775 omitted: duplicated progress-bar refreshes]
2%|▏ | 13776/569592 [4:59:38<616:54:12, 4.00s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 13777/569592 [4:59:39<474:32:58, 3.07s/it]
[steps 13778-13931 omitted: duplicated progress-bar refreshes]
2%|▏ | 13932/569592 [5:07:15<453:24:35, 2.94s/it]
2%|▏ | 13932/569592 [5:07:15<453:24:35, 2.94s/it]
2%|▏ | 13933/569592 [5:07:19<487:37:08, 3.16s/it]
2%|▏ | 13933/569592 [5:07:19<487:37:08, 3.16s/it]
2%|▏ | 13934/569592 [5:07:23<556:03:36, 3.60s/it]
2%|▏ | 13934/569592 [5:07:23<556:03:36, 3.60s/it]
2%|▏ | 13935/569592 [5:07:24<431:54:31, 2.80s/it]
2%|▏ | 13935/569592 [5:07:24<431:54:31, 2.80s/it]
2%|▏ | 13936/569592 [5:07:27<452:42:28, 2.93s/it]
2%|▏ | 13936/569592 [5:07:27<452:42:28, 2.93s/it]
2%|▏ | 13937/569592 [5:07:28<359:19:44, 2.33s/it]
2%|▏ | 13937/569592 [5:07:28<359:19:44, 2.33s/it]
2%|▏ | 13938/569592 [5:07:29<297:05:57, 1.92s/it]
2%|▏ | 13938/569592 [5:07:29<297:05:57, 1.92s/it]
2%|▏ | 13939/569592 [5:07:33<381:05:35, 2.47s/it]
2%|▏ | 13939/569592 [5:07:33<381:05:35, 2.47s/it]
2%|▏ | 13940/569592 [5:07:34<311:43:57, 2.02s/it]
2%|▏ | 13940/569592 [5:07:34<311:43:57, 2.02s/it]
2%|▏ | 13941/569592 [5:07:35<262:44:36, 1.70s/it]
2%|▏ | 13941/569592 [5:07:35<262:44:36, 1.70s/it]
2%|▏ | 13942/569592 [5:07:36<226:45:39, 1.47s/it]
2%|▏ | 13942/569592 [5:07:36<226:45:39, 1.47s/it]
2%|▏ | 13943/569592 [5:07:40<331:43:44, 2.15s/it]
2%|▏ | 13943/569592 [5:07:40<331:43:44, 2.15s/it]
2%|▏ | 13944/569592 [5:07:41<276:38:56, 1.79s/it]
2%|▏ | 13944/569592 [5:07:41<276:38:56, 1.79s/it]
2%|▏ | 13945/569592 [5:07:44<355:09:53, 2.30s/it]
2%|▏ | 13945/569592 [5:07:44<355:09:53, 2.30s/it]
2%|▏ | 13946/569592 [5:07:45<291:08:52, 1.89s/it]
2%|▏ | 13946/569592 [5:07:45<291:08:52, 1.89s/it]
2%|▏ | 13947/569592 [5:07:47<290:19:03, 1.88s/it]
2%|▏ | 13947/569592 [5:07:47<290:19:03, 1.88s/it]
2%|▏ | 13948/569592 [5:07:48<246:03:23, 1.59s/it]
2%|▏ | 13948/569592 [5:07:48<246:03:23, 1.59s/it]
2%|▏ | 13949/569592 [5:07:51<320:35:04, 2.08s/it]
2%|▏ | 13949/569592 [5:07:51<320:35:04, 2.08s/it]
2%|▏ | 13950/569592 [5:07:56<470:03:15, 3.05s/it]
2%|▏ | 13950/569592 [5:07:56<470:03:15, 3.05s/it]
2%|▏ | 13951/569592 [5:07:58<402:26:10, 2.61s/it]
2%|▏ | 13951/569592 [5:07:58<402:26:10, 2.61s/it]
2%|▏ | 13952/569592 [5:07:59<325:22:20, 2.11s/it]
2%|▏ | 13952/569592 [5:07:59<325:22:20, 2.11s/it]
2%|▏ | 13953/569592 [5:08:01<336:59:38, 2.18s/it]
2%|▏ | 13953/569592 [5:08:01<336:59:38, 2.18s/it]
2%|▏ | 13954/569592 [5:08:05<392:19:50, 2.54s/it]
2%|▏ | 13954/569592 [5:08:05<392:19:50, 2.54s/it]
2%|▏ | 13955/569592 [5:08:08<435:18:33, 2.82s/it]
2%|▏ | 13955/569592 [5:08:08<435:18:33, 2.82s/it]
2%|▏ | 13956/569592 [5:08:09<348:18:46, 2.26s/it]
2%|▏ | 13956/569592 [5:08:09<348:18:46, 2.26s/it]
2%|▏ | 13957/569592 [5:08:12<394:43:47, 2.56s/it]
2%|▏ | 13957/569592 [5:08:12<394:43:47, 2.56s/it]
2%|▏ | 13958/569592 [5:08:14<353:39:47, 2.29s/it]
2%|▏ | 13958/569592 [5:08:14<353:39:47, 2.29s/it]
2%|▏ | 13959/569592 [5:08:18<432:34:23, 2.80s/it]
2%|▏ | 13959/569592 [5:08:18<432:34:23, 2.80s/it]
2%|▏ | 13960/569592 [5:08:20<395:43:24, 2.56s/it]
2%|▏ | 13960/569592 [5:08:20<395:43:24, 2.56s/it]
2%|▏ | 13961/569592 [5:08:23<415:29:14, 2.69s/it]
2%|▏ | 13961/569592 [5:08:23<415:29:14, 2.69s/it]
2%|▏ | 13962/569592 [5:08:26<454:24:49, 2.94s/it]
2%|▏ | 13962/569592 [5:08:26<454:24:49, 2.94s/it]
2%|▏ | 13963/569592 [5:08:30<473:08:08, 3.07s/it]
2%|▏ | 13963/569592 [5:08:30<473:08:08, 3.07s/it]
2%|▏ | 13964/569592 [5:08:31<376:12:06, 2.44s/it]
2%|▏ | 13964/569592 [5:08:31<376:12:06, 2.44s/it]
2%|▏ | 13965/569592 [5:08:35<457:19:25, 2.96s/it]
2%|▏ | 13965/569592 [5:08:35<457:19:25, 2.96s/it]
2%|▏ | 13966/569592 [5:08:36<363:26:45, 2.35s/it]
2%|▏ | 13966/569592 [5:08:36<363:26:45, 2.35s/it]
2%|▏ | 13967/569592 [5:08:39<393:57:24, 2.55s/it]
2%|▏ | 13967/569592 [5:08:39<393:57:24, 2.55s/it]
2%|▏ | 13968/569592 [5:08:41<367:37:24, 2.38s/it]
2%|▏ | 13968/569592 [5:08:41<367:37:24, 2.38s/it]
2%|▏ | 13969/569592 [5:08:44<408:48:42, 2.65s/it]
2%|▏ | 13969/569592 [5:08:44<408:48:42, 2.65s/it]
2%|▏ | 13970/569592 [5:08:48<443:36:58, 2.87s/it]
2%|▏ | 13970/569592 [5:08:48<443:36:58, 2.87s/it]
2%|▏ | 13971/569592 [5:08:51<474:43:13, 3.08s/it]
2%|▏ | 13971/569592 [5:08:51<474:43:13, 3.08s/it]
2%|▏ | 13972/569592 [5:08:52<376:58:52, 2.44s/it]
2%|▏ | 13972/569592 [5:08:52<376:58:52, 2.44s/it]
2%|▏ | 13973/569592 [5:08:57<495:26:37, 3.21s/it]
2%|▏ | 13973/569592 [5:08:57<495:26:37, 3.21s/it]
2%|▏ | 13974/569592 [5:09:00<502:42:42, 3.26s/it]
2%|▏ | 13974/569592 [5:09:00<502:42:42, 3.26s/it]
2%|▏ | 13975/569592 [5:09:05<578:19:53, 3.75s/it]
2%|▏ | 13975/569592 [5:09:05<578:19:53, 3.75s/it]
2%|▏ | 13976/569592 [5:09:06<450:41:00, 2.92s/it]
2%|▏ | 13976/569592 [5:09:06<450:41:00, 2.92s/it]
2%|▏ | 13977/569592 [5:09:09<459:53:39, 2.98s/it]
2%|▏ | 13977/569592 [5:09:09<459:53:39, 2.98s/it]
2%|▏ | 13978/569592 [5:09:12<463:53:28, 3.01s/it]
2%|▏ | 13978/569592 [5:09:12<463:53:28, 3.01s/it]
2%|▏ | 13979/569592 [5:09:18<556:46:09, 3.61s/it]
2%|▏ | 13979/569592 [5:09:18<556:46:09, 3.61s/it]
2%|▏ | 13980/569592 [5:09:21<530:13:22, 3.44s/it]
2%|▏ | 13980/569592 [5:09:21<530:13:22, 3.44s/it]
2%|▏ | 13981/569592 [5:09:24<536:27:58, 3.48s/it]
2%|▏ | 13981/569592 [5:09:24<536:27:58, 3.48s/it]
2%|▏ | 13982/569592 [5:09:28<540:06:27, 3.50s/it]
2%|▏ | 13982/569592 [5:09:28<540:06:27, 3.50s/it]
2%|▏ | 13983/569592 [5:09:31<527:43:09, 3.42s/it]
2%|▏ | 13983/569592 [5:09:31<527:43:09, 3.42s/it]
2%|▏ | 13984/569592 [5:09:35<581:03:14, 3.76s/it]
2%|▏ | 13984/569592 [5:09:35<581:03:14, 3.76s/it]
2%|▏ | 13985/569592 [5:09:40<611:39:38, 3.96s/it]
2%|▏ | 13985/569592 [5:09:40<611:39:38, 3.96s/it]
2%|▏ | 13986/569592 [5:09:43<569:15:33, 3.69s/it]
2%|▏ | 13986/569592 [5:09:43<569:15:33, 3.69s/it]
2%|▏ | 13987/569592 [5:09:46<543:50:53, 3.52s/it]
2%|▏ | 13987/569592 [5:09:46<543:50:53, 3.52s/it]
2%|▏ | 13988/569592 [5:09:49<525:01:16, 3.40s/it]
2%|▏ | 13988/569592 [5:09:49<525:01:16, 3.40s/it]
2%|▏ | 13989/569592 [5:09:53<521:26:38, 3.38s/it]
2%|▏ | 13989/569592 [5:09:53<521:26:38, 3.38s/it]
2%|▏ | 13990/569592 [5:09:56<544:05:01, 3.53s/it]
2%|▏ | 13990/569592 [5:09:56<544:05:01, 3.53s/it]
2%|▏ | 13991/569592 [5:10:01<598:41:43, 3.88s/it]
2%|▏ | 13991/569592 [5:10:01<598:41:43, 3.88s/it]
2%|▏ | 13992/569592 [5:10:05<597:22:16, 3.87s/it]
2%|▏ | 13992/569592 [5:10:05<597:22:16, 3.87s/it]
2%|▏ | 13993/569592 [5:10:09<587:15:42, 3.81s/it]
2%|▏ | 13993/569592 [5:10:09<587:15:42, 3.81s/it]
2%|▏ | 13994/569592 [5:10:12<557:06:08, 3.61s/it]
2%|▏ | 13994/569592 [5:10:12<557:06:08, 3.61s/it]
2%|▏ | 13995/569592 [5:10:15<549:34:01, 3.56s/it]
2%|▏ | 13995/569592 [5:10:15<549:34:01, 3.56s/it]
2%|▏ | 13996/569592 [5:10:19<543:56:58, 3.52s/it]
2%|▏ | 13996/569592 [5:10:19<543:56:58, 3.52s/it]
2%|▏ | 13997/569592 [5:10:25<662:37:53, 4.29s/it]
2%|▏ | 13997/569592 [5:10:25<662:37:53, 4.29s/it]
2%|▏ | 13998/569592 [5:10:28<597:47:26, 3.87s/it]
2%|▏ | 13998/569592 [5:10:28<597:47:26, 3.87s/it]
2%|▏ | 13999/569592 [5:10:31<563:37:58, 3.65s/it]
2%|▏ | 13999/569592 [5:10:31<563:37:58, 3.65s/it]
2%|▏ | 14000/569592 [5:10:34<553:02:26, 3.58s/it]
2%|▏ | 14000/569592 [5:10:34<553:02:26, 3.58s/it]Saving model checkpoint to /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-14000
Configuration saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-14000/config.json
Configuration saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-14000/generation_config.json
The model is bigger than the maximum size per checkpoint (5GB) and is going to be split into 6 checkpoint shards. You can find where each parameter has been saved in the index located at /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-14000/model.safetensors.index.json.
tokenizer config file saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-14000/tokenizer_config.json
Special tokens file saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-14000/special_tokens_map.json
Deleting older checkpoint [/fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-13000] due to args.save_total_limit
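The rotation above (write checkpoint-14000, then delete checkpoint-13000) is the Trainer's save_total_limit behavior, and the 1000-step spacing of the directory names suggests save_steps=1000. A minimal sketch of a configuration that would reproduce it, assuming the HuggingFace transformers Trainer; the cadence and retention values are inferred from the log, not confirmed:

    from transformers import TrainingArguments

    # Inferred, not confirmed: only output_dir is visible in the log;
    # the save cadence and retention follow from the checkpoint names.
    args = TrainingArguments(
        output_dir="/fsx_0/user/zhaojiang/models/qwen-vl-gen",
        save_strategy="steps",
        save_steps=1000,      # checkpoint-13000 -> checkpoint-14000 spacing
        save_total_limit=1,   # each save deletes the previous checkpoint
    )

Because the model exceeds the 5 GB per-file ceiling, each checkpoint is written as 6 safetensors shards plus model.safetensors.index.json, which maps every parameter to its shard; from_pretrained resolves that index transparently when reloading.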
[tqdm progress: 14001 → 14006 / 569592 steps, elapsed 5:12:52 → 5:13:11; the checkpoint write at step 14000 took ~2 min 18 s, inflating the smoothed rate to 43.75 s/it before it decays back toward ~3.5 s/it]
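The spike in the displayed rate is an artifact of tqdm's estimator rather than a sustained slowdown: the s/it figure is an exponential moving average, so a single multi-minute checkpoint step dominates it until enough fast steps dilute it away. A minimal sketch, assuming a stock tqdm bar (the Trainer's progress callback may configure this differently):

    from tqdm import tqdm

    # smoothing=0 averages the rate over the whole run, so one slow
    # checkpoint step barely moves the ETA; the default smoothing=0.3
    # weights recent steps and produces spikes like the 43.75 s/it above.
    for _ in tqdm(range(569592), smoothing=0):
        pass  # one training step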
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (96378672 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
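The warning comes from PIL's decompression-bomb guard: any image over Image.MAX_IMAGE_PIXELS (89,478,485 pixels by default) triggers DecompressionBombWarning, and one over twice that limit raises DecompressionBombError. If oversized training images are expected and trusted, the ceiling can be raised; alternatively the warning can be promoted to an error so the offending samples can be filtered out upstream. A minimal sketch, assuming the data pipeline opens images through PIL directly:

    import warnings
    from PIL import Image

    # Option 1: raise the ceiling for trusted, known-large images
    # (the largest image flagged in this log is ~101.8 MP).
    Image.MAX_IMAGE_PIXELS = 110_000_000

    # Option 2 (instead): fail fast so oversized samples surface
    # in the dataloader and can be filtered out of the dataset.
    # warnings.simplefilter("error", Image.DecompressionBombWarning)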
[tqdm progress: 14006 → 14022 / 569592 steps, elapsed 5:13:11 → 5:14:03]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90750000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[tqdm progress: 14022 → 14104 / 569592 steps, elapsed 5:14:03 → 5:17:57]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90481664 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[tqdm progress: 14104 → 14125 / 569592 steps, elapsed 5:17:57 → 5:19:05]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 14126/569592 [5:19:08<458:35:37, 2.97s/it]
[tqdm progress: 14127 → 14147 / 569592 steps, elapsed 5:19:09 → 5:20:04]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (93447650 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[tqdm progress: 14148 → 14185 / 569592 steps, elapsed 5:20:08 → 5:21:45]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
2%|▏ | 14186/569592 [5:21:46<283:14:11, 1.84s/it]
[tqdm progress: 14187 → 14207 / 569592 steps, elapsed 5:21:47 → 5:22:39]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[tqdm progress: 14208 → 14239 / 569592 steps, elapsed 5:22:45 → 5:24:26]
[tqdm progress: 14240 → 14267 / 569592 steps, elapsed 5:24:29 → 5:25:43; the bar ticks over from 2% to 3% at step 14240]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[tqdm progress: 14268 → 14316 / 569592 steps, elapsed 5:25:47 → 5:27:49]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[tqdm progress: 14317 → 14321 / 569592 steps, elapsed 5:27:50 → 5:27:59]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 14322/569592 [5:28:05<515:44:14, 3.34s/it]
[tqdm progress: 14323 → 14332 / 569592 steps, elapsed 5:28:08 → 5:28:36]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90750000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 14333/569592 [5:28:39<450:46:15, 2.92s/it]
[tqdm progress: 14334 → 14365 / 569592 steps, elapsed 5:28:43 → 5:30:22]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[tqdm progress: 14366 → 14376 / 569592 steps, elapsed 5:30:25 → 5:30:54]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (101826828 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 14377/569592 [5:30:57<521:13:15, 3.38s/it]
3%|▎ | 14377/569592 [5:30:57<521:13:15, 3.38s/it]
3%|▎ | 14378/569592 [5:30:58<406:23:03, 2.63s/it]
3%|▎ | 14378/569592 [5:30:58<406:23:03, 2.63s/it]
3%|▎ | 14379/569592 [5:31:01<427:57:36, 2.77s/it]
3%|▎ | 14379/569592 [5:31:01<427:57:36, 2.77s/it]
3%|▎ | 14380/569592 [5:31:04<445:55:08, 2.89s/it]
3%|▎ | 14380/569592 [5:31:04<445:55:08, 2.89s/it]
3%|▎ | 14381/569592 [5:31:05<356:20:19, 2.31s/it]
3%|▎ | 14381/569592 [5:31:05<356:20:19, 2.31s/it]
3%|▎ | 14382/569592 [5:31:08<409:26:48, 2.65s/it]
3%|▎ | 14382/569592 [5:31:08<409:26:48, 2.65s/it]
3%|▎ | 14383/569592 [5:31:09<331:00:20, 2.15s/it]
3%|▎ | 14383/569592 [5:31:09<331:00:20, 2.15s/it]
3%|▎ | 14384/569592 [5:31:13<408:07:55, 2.65s/it]
3%|▎ | 14384/569592 [5:31:13<408:07:55, 2.65s/it]
3%|▎ | 14385/569592 [5:31:17<458:24:00, 2.97s/it]
3%|▎ | 14385/569592 [5:31:17<458:24:00, 2.97s/it]
3%|▎ | 14386/569592 [5:31:21<489:42:46, 3.18s/it]
3%|▎ | 14386/569592 [5:31:21<489:42:46, 3.18s/it]
3%|▎ | 14387/569592 [5:31:21<384:42:18, 2.49s/it]
3%|▎ | 14387/569592 [5:31:21<384:42:18, 2.49s/it]
3%|▎ | 14388/569592 [5:31:25<453:20:09, 2.94s/it]
3%|▎ | 14388/569592 [5:31:25<453:20:09, 2.94s/it]
3%|▎ | 14389/569592 [5:31:26<363:04:57, 2.35s/it]
3%|▎ | 14389/569592 [5:31:26<363:04:57, 2.35s/it]
3%|▎ | 14447/569592 [5:34:07<604:20:34, 3.92s/it]
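Each progress line above follows tqdm's format, done/total [elapsed<remaining, rate]: the remaining estimate is simply (total - done) multiplied by the smoothed seconds-per-iteration shown as the rate. A minimal Python check against the line above (values copied from the log; the small mismatch comes from rounding of the displayed rate):

    done, total, rate = 14447, 569592, 3.92   # from "14447/569592 [5:34:07<604:20:34, 3.92s/it]"
    remaining = (total - done) * rate          # seconds left at the smoothed rate
    h, rem = divmod(int(remaining), 3600)
    m, s = divmod(rem, 60)
    print(f"{h}:{m:02d}:{s:02d}")              # 604:29:28, close to the log's 604:20:34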
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
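This warning is PIL's decompression-bomb safeguard: Image.open() warns when width x height exceeds Image.MAX_IMAGE_PIXELS (89,478,485 by default) and only raises DecompressionBombError beyond twice that limit, so the ~100 MP images in this dataset warn but still load. If the corpus is trusted, the ceiling can be raised once before any image is opened; a minimal sketch (the 200 MP value and the file name are illustrative assumptions, not from this run):

    from PIL import Image

    Image.MAX_IMAGE_PIXELS = 200_000_000   # accept the ~90-115 MP images seen in this log
    # Image.MAX_IMAGE_PIXELS = None        # or disable the check entirely (trusted data only)
    img = Image.open("big_sample.jpg")     # hypothetical file; no warning below the new ceiling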
3%|▎ | 14463/569592 [5:34:57<477:55:52, 3.10s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 14572/569592 [5:40:01<459:31:47, 2.98s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (101754940 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 14594/569592 [5:41:12<462:34:57, 3.00s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90750000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 14616/569592 [5:42:10<219:30:20, 1.42s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 14627/569592 [5:42:38<376:19:28, 2.44s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 14692/569592 [5:45:39<509:01:57, 3.30s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (115022592 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 14774/569592 [5:49:29<501:38:24, 3.25s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100663296 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 14861/569592 [5:53:41<388:39:22, 2.52s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 14959/569592 [5:58:28<568:11:27, 3.69s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (93447650 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 14997/569592 [6:00:05<305:38:52, 1.98s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 14997/569592 [6:00:05<305:38:52, 1.98s/it] … 3%|▎ | 15000/569592 [6:00:13<356:53:10, 2.32s/it]
Saving model checkpoint to /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-15000
Configuration saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-15000/config.json
Configuration saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-15000/generation_config.json
The model is bigger than the maximum size per checkpoint (5GB) and is going to be split into 6 checkpoint shards. You can find where each parameter has been saved in the index located at /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-15000/model.safetensors.index.json.
tokenizer config file saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-15000/tokenizer_config.json
Special tokens file saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-15000/special_tokens_map.json
Deleting older checkpoint [/fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-14000] due to args.save_total_limit
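The save/delete pair above is standard Hugging Face Trainer behavior: a checkpoint is written every fixed number of steps and args.save_total_limit prunes the oldest one, while the 6-shard split reflects save_pretrained's default 5GB max_shard_size. Note the cost: the save blocks training for roughly two minutes (6:00:13 to 6:02:32), which is why the next iteration reports 43.44s/it and the ETA briefly balloons. A minimal sketch of TrainingArguments consistent with this log; save_steps=1000 and save_total_limit=1 are inferred from the checkpoint-14000/15000 spacing and the deletion message, not read from the actual config:

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="/fsx_0/user/zhaojiang/models/qwen-vl-gen",
    save_strategy="steps",
    save_steps=1000,      # inferred: checkpoints appear at steps 14000, 15000
    save_total_limit=1,   # inferred: checkpoint-14000 deleted after 15000 saves
)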
3%|▎ | 15001/569592 [6:02:32<6691:47:29, 43.44s/it] … 3%|▎ | 15011/569592 [6:02:56<664:39:05, 4.31s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
3%|▎ | 15012/569592 [6:02:57<509:32:53, 3.31s/it] … 3%|▎ | 15033/569592 [6:04:06<576:06:52, 3.74s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (101332220 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 15033/569592 [6:04:06<576:06:52, 3.74s/it] … 3%|▎ | 15082/569592 [6:06:47<342:13:43, 2.22s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 15082/569592 [6:06:47<342:13:43, 2.22s/it] … 3%|▎ | 15278/569592 [6:16:04<533:21:49, 3.46s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (115022592 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 15278/569592 [6:16:04<533:21:49, 3.46s/it] … 3%|▎ | 15300/569592 [6:17:04<275:23:14, 1.79s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 15300/569592 [6:17:04<275:23:14, 1.79s/it] … 3%|▎ | 15311/569592 [6:17:33<304:25:40, 1.98s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 15311/569592 [6:17:33<304:25:40, 1.98s/it] … 3%|▎ | 15359/569592 [6:19:33<423:09:20, 2.75s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
3%|▎ | 15360/569592 [6:19:36<449:01:44, 2.92s/it] … 3%|▎ | 15430/569592 [6:23:07<334:33:23, 2.17s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
3%|▎ | 15431/569592 [6:23:08<298:29:01, 1.94s/it] … 3%|▎ | 15457/569592 [6:24:20<288:35:36, 1.87s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (93797904 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
3%|▎ | 15458/569592 [6:24:21<246:46:53, 1.60s/it] … 3%|▎ | 15506/569592 [6:26:41<516:53:31, 3.36s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
[... steps 15508-15576, ~1.5-4.1 s/it ...]
3%|▎ | 15577/569592 [6:29:55<331:03:28, 2.15s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 15578/569592 [6:29:56<279:25:03, 1.82s/it]
[... steps 15579-15682, ~1.5-4.5 s/it ...]
3%|▎ | 15683/569592 [6:35:10<403:57:06, 2.63s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90750000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[... steps 15684-15709, ~1.6-3.5 s/it ...]
3%|▎ | 15710/569592 [6:36:20<525:34:48, 3.42s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (96679310 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 15711/569592 [6:36:21<411:47:56, 2.68s/it]
3%|▎ | 15712/569592 [6:36:22<331:56:54, 2.16s/it]
3%|▎ | 15713/569592 [6:36:25<407:56:38, 2.65s/it]
3%|▎ | 15714/569592 [6:36:29<438:02:00, 2.85s/it]
3%|▎ | 15715/569592 [6:36:32<462:46:36, 3.01s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90750000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 15716/569592 [6:36:36<488:24:30, 3.17s/it]
[... steps 15717-15769, ~2.2-3.9 s/it ...]
3%|▎ | 15770/569592 [6:39:25<409:29:58, 2.66s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90481664 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[... steps 15771-15934, ~1.3-4.3 s/it ...]
3%|▎ | 15935/569592 [6:46:53<274:28:56, 1.78s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (98911692 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 15936/569592 [6:46:57<390:00:46, 2.54s/it]
[... steps 15937-15956, ~2.2-3.4 s/it ...]
3%|▎ | 15957/569592 [6:47:59<502:30:54, 3.27s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (95874880 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 15958/569592 [6:48:02<533:24:41, 3.47s/it]
[... steps 15959-15972, ~2.7-3.8 s/it ...]
3%|▎ | 15973/569592 [6:48:55<633:03:57, 4.12s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (115022592 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 15974/569592 [6:48:58<584:02:20, 3.80s/it]
[... steps 15975-15989, ~2.2-3.5 s/it ...]
3%|▎ | 15990/569592 [6:49:47<500:40:59, 3.26s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[... steps 15991-15999, ~2.3-3.8 s/it ...]
3%|▎ | 16000/569592 [6:50:16<424:58:48, 2.76s/it]
Saving model checkpoint to /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-16000
Configuration saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-16000/config.json
Configuration saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-16000/generation_config.json
The model is bigger than the maximum size per checkpoint (5GB) and is going to be split in 6 checkpoint shards. You can find where each parameters has been saved in the index located at /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-16000/model.safetensors.index.json.
tokenizer config file saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-16000/tokenizer_config.json
Special tokens file saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-16000/special_tokens_map.json
Deleting older checkpoint [/fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-15000] due to args.save_total_limit
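The checkpoint messages above match the checkpointing output of the Hugging Face Transformers Trainer. Below is a sketch of the arguments that would produce this cadence, assuming the run uses Trainer: save_steps=1000 is inferred from checkpoint-16000 replacing checkpoint-15000, and save_total_limit=1 from the deletion notice; neither value appears directly in the log.

```python
from transformers import TrainingArguments

# Assumed sketch: write a checkpoint every 1000 optimizer steps and keep
# only the most recent one, matching "Deleting older checkpoint ... due to
# args.save_total_limit" above. output_dir is taken from the log.
args = TrainingArguments(
    output_dir="/fsx_0/user/zhaojiang/models/qwen-vl-gen",
    save_strategy="steps",
    save_steps=1000,
    save_total_limit=1,
)
```

When reloading, from_pretrained("/fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-16000") resolves the 6 safetensors shards through model.safetensors.index.json automatically, so the 5 GB sharding noted above needs no special handling.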
3%|▎ | 16001/569592 [6:52:33<6612:12:29, 43.00s/it]
3%|▎ | 16001/569592 [6:52:33<6612:12:29, 43.00s/it]
3%|▎ | 16002/569592 [6:52:36<4790:24:32, 31.15s/it]
3%|▎ | 16002/569592 [6:52:36<4790:24:32, 31.15s/it]
3%|▎ | 16003/569592 [6:52:40<3537:44:53, 23.01s/it]
3%|▎ | 16003/569592 [6:52:40<3537:44:53, 23.01s/it]
3%|▎ | 16004/569592 [6:52:45<2708:46:16, 17.62s/it]
3%|▎ | 16004/569592 [6:52:45<2708:46:16, 17.62s/it]
3%|▎ | 16005/569592 [6:52:49<2065:55:41, 13.43s/it]
3%|▎ | 16005/569592 [6:52:49<2065:55:41, 13.43s/it]
3%|▎ | 16006/569592 [6:52:52<1613:12:21, 10.49s/it]
3%|▎ | 16006/569592 [6:52:52<1613:12:21, 10.49s/it]
3%|▎ | 16007/569592 [6:52:53<1170:21:01, 7.61s/it]
3%|▎ | 16007/569592 [6:52:53<1170:21:01, 7.61s/it]
3%|▎ | 16008/569592 [6:52:54<862:08:41, 5.61s/it]
3%|▎ | 16008/569592 [6:52:54<862:08:41, 5.61s/it]
3%|▎ | 16009/569592 [6:52:55<647:30:19, 4.21s/it]
3%|▎ | 16009/569592 [6:52:55<647:30:19, 4.21s/it]
3%|▎ | 16010/569592 [6:52:59<647:30:12, 4.21s/it]
3%|▎ | 16010/569592 [6:52:59<647:30:12, 4.21s/it]
3%|▎ | 16011/569592 [6:53:00<499:00:08, 3.25s/it]
3%|▎ | 16011/569592 [6:53:00<499:00:08, 3.25s/it]
3%|▎ | 16012/569592 [6:53:01<393:31:31, 2.56s/it]
3%|▎ | 16012/569592 [6:53:01<393:31:31, 2.56s/it]
3%|▎ | 16013/569592 [6:53:02<319:43:47, 2.08s/it]
3%|▎ | 16013/569592 [6:53:02<319:43:47, 2.08s/it]
3%|▎ | 16014/569592 [6:53:03<268:02:42, 1.74s/it]
3%|▎ | 16014/569592 [6:53:03<268:02:42, 1.74s/it]
3%|▎ | 16015/569592 [6:53:07<348:46:12, 2.27s/it]
3%|▎ | 16015/569592 [6:53:07<348:46:12, 2.27s/it]
3%|▎ | 16016/569592 [6:53:08<291:31:21, 1.90s/it]
3%|▎ | 16016/569592 [6:53:08<291:31:21, 1.90s/it]
3%|▎ | 16017/569592 [6:53:09<247:20:56, 1.61s/it]
3%|▎ | 16017/569592 [6:53:09<247:20:56, 1.61s/it]
3%|▎ | 16018/569592 [6:53:13<390:01:48, 2.54s/it]
3%|▎ | 16018/569592 [6:53:13<390:01:48, 2.54s/it]
3%|▎ | 16019/569592 [6:53:17<449:17:34, 2.92s/it]
3%|▎ | 16019/569592 [6:53:17<449:17:34, 2.92s/it]
3%|▎ | 16020/569592 [6:53:21<486:37:24, 3.16s/it]
3%|▎ | 16020/569592 [6:53:21<486:37:24, 3.16s/it]
3%|▎ | 16021/569592 [6:53:22<385:07:34, 2.50s/it]
3%|▎ | 16021/569592 [6:53:22<385:07:34, 2.50s/it]
3%|▎ | 16022/569592 [6:53:23<313:47:41, 2.04s/it]
3%|▎ | 16022/569592 [6:53:23<313:47:41, 2.04s/it]
3%|▎ | 16023/569592 [6:53:27<429:00:20, 2.79s/it]
3%|▎ | 16023/569592 [6:53:27<429:00:20, 2.79s/it]
3%|▎ | 16024/569592 [6:53:29<363:41:46, 2.37s/it]
3%|▎ | 16024/569592 [6:53:29<363:41:46, 2.37s/it]
3%|▎ | 16025/569592 [6:53:32<419:55:27, 2.73s/it]
3%|▎ | 16025/569592 [6:53:32<419:55:27, 2.73s/it]
3%|▎ | 16026/569592 [6:53:33<338:09:33, 2.20s/it]
3%|▎ | 16026/569592 [6:53:33<338:09:33, 2.20s/it]
3%|▎ | 16027/569592 [6:53:38<452:30:38, 2.94s/it]
3%|▎ | 16027/569592 [6:53:38<452:30:38, 2.94s/it]
3%|▎ | 16028/569592 [6:53:39<374:17:19, 2.43s/it]
3%|▎ | 16028/569592 [6:53:39<374:17:19, 2.43s/it]
3%|▎ | 16029/569592 [6:53:40<305:06:52, 1.98s/it]
3%|▎ | 16029/569592 [6:53:40<305:06:52, 1.98s/it]
3%|▎ | 16030/569592 [6:53:44<395:14:04, 2.57s/it]
3%|▎ | 16030/569592 [6:53:44<395:14:04, 2.57s/it]
3%|▎ | 16031/569592 [6:53:47<402:18:12, 2.62s/it]
3%|▎ | 16031/569592 [6:53:47<402:18:12, 2.62s/it]
3%|▎ | 16032/569592 [6:53:50<435:59:53, 2.84s/it]
3%|▎ | 16032/569592 [6:53:50<435:59:53, 2.84s/it]
3%|▎ | 16033/569592 [6:53:54<495:20:42, 3.22s/it]
3%|▎ | 16033/569592 [6:53:54<495:20:42, 3.22s/it]
3%|▎ | 16034/569592 [6:53:58<527:14:12, 3.43s/it]
3%|▎ | 16034/569592 [6:53:58<527:14:12, 3.43s/it]
3%|▎ | 16035/569592 [6:54:01<518:16:09, 3.37s/it]
3%|▎ | 16035/569592 [6:54:01<518:16:09, 3.37s/it]
3%|▎ | 16036/569592 [6:54:06<551:54:35, 3.59s/it]
3%|▎ | 16036/569592 [6:54:06<551:54:35, 3.59s/it]
3%|▎ | 16037/569592 [6:54:06<428:47:45, 2.79s/it]
3%|▎ | 16037/569592 [6:54:06<428:47:45, 2.79s/it]
3%|▎ | 16038/569592 [6:54:10<451:34:08, 2.94s/it]
3%|▎ | 16038/569592 [6:54:10<451:34:08, 2.94s/it]
3%|▎ | 16039/569592 [6:54:13<484:53:16, 3.15s/it]
3%|▎ | 16039/569592 [6:54:13<484:53:16, 3.15s/it]
3%|▎ | 16040/569592 [6:54:14<387:50:50, 2.52s/it]
3%|▎ | 16040/569592 [6:54:14<387:50:50, 2.52s/it]
3%|▎ | 16041/569592 [6:54:15<315:22:58, 2.05s/it]
3%|▎ | 16041/569592 [6:54:15<315:22:58, 2.05s/it]
3%|▎ | 16042/569592 [6:54:19<403:51:01, 2.63s/it]
3%|▎ | 16042/569592 [6:54:19<403:51:01, 2.63s/it]
3%|▎ | 16043/569592 [6:54:24<484:05:37, 3.15s/it]
3%|▎ | 16043/569592 [6:54:24<484:05:37, 3.15s/it]
3%|▎ | 16044/569592 [6:54:28<522:20:21, 3.40s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
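[note] PIL emits DecompressionBombWarning when an image exceeds Image.MAX_IMAGE_PIXELS, which defaults to 89,478,485 pixels (about 256 MiB of uncompressed 24-bit RGB data); images over twice that limit raise DecompressionBombError instead of warning. The ~100-megapixel images flagged in this log sit just above the default cap. If such images are expected in the training data, the cap can be raised rather than disabled wholesale; a minimal sketch, with the 200M threshold an assumed dataset-specific choice:

    import warnings
    from PIL import Image

    # Raise the bomb-check threshold for known-good large images while keeping
    # the DecompressionBombError guard for truly pathological files.
    Image.MAX_IMAGE_PIXELS = 200_000_000

    # Or keep the default limit and suppress only this warning for trusted data:
    warnings.filterwarnings("ignore", category=Image.DecompressionBombWarning)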
3%|▎ | 16131/569592 [6:58:42<214:54:16, 1.40s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 16142/569592 [6:59:09<319:09:37, 2.08s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 16180/569592 [7:00:53<374:05:03, 2.43s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (103483016 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 16207/569592 [7:02:13<509:05:31, 3.31s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 16447/569592 [7:13:06<350:38:07, 2.28s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100663296 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 16490/569592 [7:14:58<446:14:43, 2.90s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (107124888 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 16502/569592 [7:15:24<338:35:38, 2.20s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (98178300 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90481664 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 16508/569592 [7:15:42<459:25:24, 2.99s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (115022592 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 16629/569592 [7:21:19<429:39:51, 2.80s/it]
3%|▎ | 16630/569592 [7:21:20<343:50:47, 2.24s/it]
3%|▎ | 16630/569592 [7:21:20<343:50:47, 2.24s/it]
3%|▎ | 16631/569592 [7:21:23<419:06:18, 2.73s/it]
3%|▎ | 16631/569592 [7:21:23<419:06:18, 2.73s/it]
3%|▎ | 16632/569592 [7:21:26<389:12:44, 2.53s/it]
3%|▎ | 16632/569592 [7:21:26<389:12:44, 2.53s/it]
3%|▎ | 16633/569592 [7:21:29<444:27:00, 2.89s/it]
3%|▎ | 16633/569592 [7:21:29<444:27:00, 2.89s/it]
3%|▎ | 16634/569592 [7:21:33<478:49:05, 3.12s/it]
3%|▎ | 16634/569592 [7:21:33<478:49:05, 3.12s/it]
3%|▎ | 16635/569592 [7:21:34<377:46:48, 2.46s/it]
3%|▎ | 16635/569592 [7:21:34<377:46:48, 2.46s/it]
3%|▎ | 16636/569592 [7:21:38<442:54:13, 2.88s/it]
3%|▎ | 16636/569592 [7:21:38<442:54:13, 2.88s/it]
3%|▎ | 16637/569592 [7:21:43<539:53:33, 3.51s/it]
3%|▎ | 16637/569592 [7:21:43<539:53:33, 3.51s/it]
3%|▎ | 16638/569592 [7:21:44<420:52:42, 2.74s/it]
3%|▎ | 16638/569592 [7:21:44<420:52:42, 2.74s/it]
3%|▎ | 16639/569592 [7:21:47<470:07:32, 3.06s/it]
3%|▎ | 16639/569592 [7:21:47<470:07:32, 3.06s/it]
3%|▎ | 16640/569592 [7:21:52<545:27:45, 3.55s/it]
3%|▎ | 16640/569592 [7:21:52<545:27:45, 3.55s/it]
3%|▎ | 16641/569592 [7:21:53<425:53:48, 2.77s/it]
3%|▎ | 16641/569592 [7:21:53<425:53:48, 2.77s/it]
3%|▎ | 16642/569592 [7:21:57<483:15:21, 3.15s/it]
3%|▎ | 16642/569592 [7:21:57<483:15:21, 3.15s/it]
3%|▎ | 16643/569592 [7:22:01<501:42:36, 3.27s/it]
3%|▎ | 16643/569592 [7:22:01<501:42:36, 3.27s/it]
3%|▎ | 16644/569592 [7:22:04<521:20:16, 3.39s/it]
3%|▎ | 16644/569592 [7:22:04<521:20:16, 3.39s/it]
3%|▎ | 16645/569592 [7:22:08<528:37:58, 3.44s/it]
3%|▎ | 16645/569592 [7:22:08<528:37:58, 3.44s/it]
3%|▎ | 16646/569592 [7:22:09<411:36:41, 2.68s/it]
3%|▎ | 16646/569592 [7:22:09<411:36:41, 2.68s/it]
3%|▎ | 16647/569592 [7:22:12<452:06:11, 2.94s/it]
3%|▎ | 16647/569592 [7:22:12<452:06:11, 2.94s/it]
3%|▎ | 16648/569592 [7:22:16<479:46:30, 3.12s/it]
3%|▎ | 16648/569592 [7:22:16<479:46:30, 3.12s/it]
3%|▎ | 16649/569592 [7:22:19<489:10:17, 3.18s/it]
3%|▎ | 16649/569592 [7:22:19<489:10:17, 3.18s/it]
3%|▎ | 16650/569592 [7:22:23<498:54:02, 3.25s/it]
3%|▎ | 16650/569592 [7:22:23<498:54:02, 3.25s/it]
3%|▎ | 16651/569592 [7:22:23<390:03:07, 2.54s/it]
3%|▎ | 16651/569592 [7:22:23<390:03:07, 2.54s/it]
3%|▎ | 16652/569592 [7:22:27<426:24:28, 2.78s/it]
3%|▎ | 16652/569592 [7:22:27<426:24:28, 2.78s/it]
3%|▎ | 16653/569592 [7:22:30<467:44:53, 3.05s/it]
3%|▎ | 16653/569592 [7:22:30<467:44:53, 3.05s/it]
3%|▎ | 16654/569592 [7:22:35<550:04:59, 3.58s/it]
3%|▎ | 16654/569592 [7:22:35<550:04:59, 3.58s/it]
3%|▎ | 16655/569592 [7:22:40<608:17:09, 3.96s/it]
3%|▎ | 16655/569592 [7:22:40<608:17:09, 3.96s/it]
3%|▎ | 16656/569592 [7:22:43<561:26:24, 3.66s/it]
3%|▎ | 16656/569592 [7:22:43<561:26:24, 3.66s/it]
3%|▎ | 16657/569592 [7:22:47<572:19:37, 3.73s/it]
3%|▎ | 16657/569592 [7:22:47<572:19:37, 3.73s/it]
3%|▎ | 16658/569592 [7:22:50<549:26:22, 3.58s/it]
3%|▎ | 16658/569592 [7:22:50<549:26:22, 3.58s/it]
3%|▎ | 16659/569592 [7:22:51<426:51:28, 2.78s/it]
3%|▎ | 16659/569592 [7:22:51<426:51:28, 2.78s/it]
3%|▎ | 16660/569592 [7:22:55<466:13:45, 3.04s/it]
3%|▎ | 16660/569592 [7:22:55<466:13:45, 3/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (113164308 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
.04s/it]
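The DecompressionBombWarning above is Pillow refusing to trust very large images: anything past Image.MAX_IMAGE_PIXELS (89,478,485 pixels by default) is flagged as a possible decompression-bomb DoS. If these multi-hundred-megapixel training images are known-good, the guard can be relaxed before the dataloader opens them; a minimal sketch, assuming the data is trusted (the 200 MP ceiling is an illustrative choice, not from the log):

    # Relax Pillow's decompression-bomb guard for trusted training data.
    # Raising the limit on untrusted inputs re-opens the DoS vector the
    # warning exists to catch, so scope this to vetted datasets only.
    import warnings
    from PIL import Image

    Image.MAX_IMAGE_PIXELS = 200_000_000  # default is 89_478_485 (~89.5 MP)

    # Alternatively, keep the limit but silence the per-image warning spam:
    warnings.filterwarnings("ignore", category=Image.DecompressionBombWarning)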
3%|▎ | 16661/569592 [7:22:56<375:32:34, 2.45s/it]
[... steps 16662-16676: 7:23:00 -> 7:23:49, ~2.0-3.8 s/it ...]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
3%|▎ | 16677/569592 [7:23:50<417:06:16, 2.72s/it]
3%|▎ | 16678/569592 [7:23:54<454:39:55, 2.96s/it]
[... steps 16679-16768: 7:23:55 -> 7:27:52, ~1.6-4.1 s/it ...]
3%|▎ | 16769/569592 [7:27:53<400:36:00, 2.61s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (96727296 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
3%|▎ | 16770/569592 [7:27:54<325:25:49, 2.12s/it]
[... steps 16771-16866: 7:27:59 -> 7:32:16, ~1.2-3.6 s/it ...]
3%|▎ | 16867/569592 [7:32:20<389:16:16, 2.54s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (92381674 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
3%|▎ | 16868/569592 [7:32:24<442:03:25, 2.88s/it]
[... steps 16869-16910: 7:32:27 -> 7:34:37, ~1.9-4.0 s/it ...]
3%|▎ | 16911/569592 [7:34:40<535:54:29, 3.49s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (94864000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
3%|▎ | 16912/569592 [7:34:44<543:52:28, 3.54s/it]
[... steps 16913-16915: 7:34:45 -> 7:34:51, 2.8-3.0 s/it ...]
3%|▎ | 16916/569592 [7:34:55<491:12:05, 3.20s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
3%|▎ | 16917/569592 [7:34:59<524:53:10, 3.42s/it]
[... steps 16918-16937: 7:35:00 -> 7:35:45, ~1.4-3.7 s/it ...]
3%|▎ | 16938/569592 [7:35:46<261:15:28, 1.70s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
3%|▎ | 16939/569592 [7:35:50<365:47:38, 2.38s/it]
[... steps 16940-16999: 7:35:53 -> 7:38:33, ~1.4-3.7 s/it ...]
3%|▎ | 17000/569592 [7:38:36<528:27:36, 3.44s/it]
Saving model checkpoint to /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-17000
Configuration saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-17000/config.json
Configuration saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-17000/generation_config.json
The model is bigger than the maximum size per checkpoint (5GB) and is going to be split in 6 checkpoint shards. You can find where each parameter has been saved in the index located at /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-17000/model.safetensors.index.json.
tokenizer config file saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-17000/tokenizer_config.json
Special tokens file saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-17000/special_tokens_map.json
Deleting older checkpoint [/fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-16000] due to args.save_total_limit
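The block above is the Trainer's periodic checkpoint: save_pretrained splits the weights into 5 GB safetensors shards (hence the six-shard index at model.safetensors.index.json, which from_pretrained resolves automatically), and checkpoint-16000 is rotated out because save_total_limit caps how many checkpoints are kept. A minimal sketch of settings consistent with this log (output_dir and the step cadence match the log; the other values are illustrative, not confirmed):

    # TrainingArguments consistent with the checkpoint/rotation lines above.
    from transformers import TrainingArguments

    args = TrainingArguments(
        output_dir="/fsx_0/user/zhaojiang/models/qwen-vl-gen",
        save_strategy="steps",
        save_steps=1000,        # checkpoints land at step 16000, 17000, ...
        save_total_limit=1,     # keep one: 16000 is deleted once 17000 lands
        save_safetensors=True,  # shards named model-0000X-of-00006.safetensors
    )

The 5 GB figure is save_pretrained's default max_shard_size; it can be tuned per checkpoint if shard count matters for your filesystem.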
3%|▎ | 17001/569592 [7:40:58<6888:11:17, 44.87s/it]
3%|▎ | 17002/569592 [7:41:01<4984:38:00, 32.47s/it]
3%|▎ | 17003/569592 [7:41:02<3531:24:19, 23.01s/it]
3%|▎ | 17004/569592 [7:41:07<2693:45:08, 17.55s/it]
3%|▎ | 17005/569592 [7:41:08<1931:07:35, 12.58s/it]
3%|▎ | 17006/569592 [7:41:12<1548:05:23, 10.09s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (101767250 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
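The 44.87 s/it spike at step 17001 is not a training slowdown: the ~2 min 20 s checkpoint write (7:38:36 to 7:40:58) lands in tqdm's per-iteration timer, and the bar's exponentially smoothed rate then decays back toward ~3 s/it over the following steps. A toy reconstruction (the smoothing factor and step timings are approximations, not tqdm internals verbatim):

    # Toy EMA of per-step wall time, in the spirit of tqdm's smoothing.
    # One ~142 s stall (the checkpoint save) dominates the estimate and
    # then decays, closely tracking the 44.87 -> 32.47 -> 23.01 -> ...
    # sequence logged above.
    def ema_rates(dts, smoothing=0.3):
        avg, out = None, []
        for dt in dts:
            avg = dt if avg is None else smoothing * dt + (1 - smoothing) * avg
            out.append(avg)
        return out

    dts = [3.4] * 5 + [142.0, 3.0, 1.0, 5.0, 1.0, 4.0]  # stall at step 17001
    print([f"{r:.2f}" for r in ema_rates(dts)[-6:]])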
3%|▎ | 17007/569592 [7:41:15<1226:50:46, 7.99s/it]
[... steps 17008-17016: 7:41:21 -> 7:41:46, smoothed rate settling 7.14 -> 3.61 s/it ...]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (93781632 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
3%|▎ | 17017/569592 [7:41:47<432:33:01, 2.82s/it]
3%|▎ | 17018/569592 [7:41:51<482:25:45, 3.14s/it]
3%|▎ | 17018/569592 [7:41:51<482:25:45, 3.14s/it]
3%|▎ | 17019/569592 [7:41:52<380:08:27, 2.48s/it]
3%|▎ | 17019/569592 [7:41:52<380:08:27, 2.48s/it]
3%|▎ | 17020/569592 [7:41:56<483:34:37, 3.15s/it]
3%|▎ | 17020/569592 [7:41:56<483:34:37, 3.15s/it]
3%|▎ | 17021/569592 [7:42:02<590:02:04, 3.84s/it]
3%|▎ | 17021/569592 [7:42:02<590:02:04, 3.84s/it]
3%|▎ | 17022/569592 [7:42:05<576:34:17, 3.76s/it]
3%|▎ | 17022/569592 [7:42:05<576:34:17, 3.76s/it]
3%|▎ | 17023/569592 [7:42:06<444:59:14, 2.90s/it]
3%|▎ | 17023/569592 [7:42:06<444:59:14, 2.90s/it]
3%|▎ | 17024/569592 [7:42:10<493:59:31, 3.22s/it]
3%|▎ | 17024/569592 [7:42:10<493:59:31, 3.22s/it]
3%|▎ | 17025/569592 [7:42:14<522:37:20, 3.40s/it]
3%|▎ | 17025/569592 [7:42:14<522:37:20, 3.40s/it]
3%|▎ | 17026/569592 [7:42:18<543:37:02, 3.54s/it]
3%|▎ | 17026/569592 [7:42:18<543:37:02, 3.54s/it]
3%|▎ | 17027/569592 [7:42:21<541:29:38, 3.53s/it]
3%|▎ | 17027/569592 [7:42:22<541:29:38, 3.53s/it]
3%|▎ | 17028/569592 [7:42:25<544:14:38, 3.55s/it]
3%|▎ | 17028/569592 [7:42:25<544:14:38, 3.55s/it]
3%|▎ | 17029/569592 [7:42:29<572:43:55, 3.73s/it]
3%|▎ | 17029/569592 [7:42:29<572:43:55, 3.73s/it]
3%|▎ | 17030/569592 [7:42:33<588:46:34, 3.84s/it]
3%|▎ | 17030/569592 [7:42:33<588:46:34, 3.84s/it]
3%|▎ | 17031/569592 [7:42:34<454:00:09, 2.96s/it]
3%|▎ | 17031/569592 [7:42:34<454:00:09, 2.96s/it]
3%|▎ | 17032/569592 [7:42:38<488:30:36, 3.18s/it]
3%|▎ | 17032/569592 [7:42:38<488:30:36, 3.18s/it]
3%|▎ | 17033/569592 [7:42:42<509:47:19, 3.32s/it]
3%|▎ | 17033/569592 [7:42:42<509:47:19, 3.32s/it]
3%|▎ | 17034/569592 [7:42:46<537:44:44, 3.50s/it]
3%|▎ | 17034/569592 [7:42:46<537:44:44, 3.50s/it]
3%|▎ | 17035/569592 [7:42:49<525:54:21, 3.43s/it]
3%|▎ | 17035/569592 [7:42:49<525:54:21, 3.43s/it]
3%|▎ | 17036/569592 [7:42:52<529:17:38, 3.45s/it]
3%|▎ | 17036/569592 [7:42:52<529:17:38, 3.45s/it]
3%|▎ | 17037/569592 [7:42:57<600:21:48, 3.91s/it]
3%|▎ | 17037/569592 [7:42:57<600:21:48, 3.91s/it]
3%|▎ | 17038/569592 [7:42:58<462:21:03, 3.01s/it]
3%|▎ | 17038/569592 [7:42:58<462:21:03, 3.01s/it]
3%|▎ | 17039/569592 [7:42:59<365:51:07, 2.38s/it]
3%|▎ | 17039/569592 [7:42:59<365:51:07, 2.38s/it]
3%|▎ | 17040/569592 [7:43:00<301:21:11, 1.96s/it]
3%|▎ | 17040/569592 [7:43:00<301:21:11, 1.96s/it]
3%|▎ | 17041/569592 [7:43:04<414:54:02, 2.70s/it]
3%|▎ | 17041/569592 [7:43:05<414:54:02, 2.70s/it]
3%|▎ | 17042/569592 [7:43:06<336:46:32, 2.19s/it]
3%|▎ | 17042/569592 [7:43:06<336:46:32, 2.19s/it]
3%|▎ | 17043/569592 [7:43:06<281:12:56, 1.83s/it]
3%|▎ | 17043/569592 [7:43:07<281:12:56, 1.83s/it]
3%|▎ | 17044/569592 [7:43:11<385:28:11, 2.51s/it]
3%|▎ | 17044/569592 [7:43:11<385:28:11, 2.51s/it]
3%|▎ | 17045/569592 [7:43:12<316:37:12, 2.06s/it]
3%|▎ | 17045/569592 [7:43:12<316:37:12, 2.06s/it]
3%|▎ | 17046/569592 [7:43:13<269:13:18, 1.75s/it]
3%|▎ | 17046/569592 [7:43:13<269:13:18, 1.75s/it]
3%|▎ | 17047/569592 [7:43:16<364:41:05, 2.38s/it]
3%|▎ | 17047/569592 [7:43:16<364:41:05, 2.38s/it]
3%|▎ | 17048/569592 [7:43:20<433:31:29, 2.82s/it]
3%|▎ | 17048/569592 [7:43:20<433:31:29, 2.82s/it]
3%|▎ | 17049/569592 [7:43:24<494:05:31, 3.22s/it]
3%|▎ | 17049/569592 [7:43:24<494:05:31, 3.22s/it]
3%|▎ | 17050/569592 [7:43:29<562:07:42, 3.66s/it]
3%|▎ | 17050/569592 [7:43:29<562:07:42, 3.66s/it]
3%|▎ | 17051/569592 [7:43:33<576:11:50, 3.75s/it]
3%|▎ | 17051/569592 [7:43:33<576:11:50, 3.75s/it]
3%|▎ | 17052/569592 [7:43:34<444:20:06, 2.90s/it]
3%|▎ | 17052/569592 [7:43:34<444:20:06, 2.90s/it]
3%|▎ | 17053/569592 [7:43:35<353:29:50, 2.30s/it]
3%|▎ | 17053/569592 [7:43:35<353:29:50, 2.30s/it]
3%|▎ | 17054/569592 [7:43:39<428:00:31, 2.79s/it]
3%|▎ | 17054/569592 [7:43:39<428:00:31, 2.79s/it]
3%|▎ | 17055/569592 [7:43:43<490:56:21, 3.20s/it]
3%|▎ | 17055/569592 [7:43:43<490:56:21, 3.20s/it]
3%|▎ | 17056/569592 [7:43:47<519:35:53, 3.39s/it]
3%|▎ | 17056/569592 [7:43:47<519:35:53, 3.39s/it]
3%|▎ | 17057/569592 [7:43:48<405:56:31, 2.64s/it]
3%|▎ | 17057/569592 [7:43:48<405:56:31, 2.64s/it]
3%|▎ | 17058/569592 [7:43:52<458:31:42, 2.99s/it]
3%|▎ | 17058/569592 [7:43:52<458:31:42, 2.99s/it]
3%|▎ | 17059/569592 [7:43:55<495:21:54, 3.23s/it]
3%|▎ | 17059/569592 [7:43:55<495:21:54, 3.23s/it]
3%|▎ | 17060/569592 [7:43:59<532:44:40, 3.47s/it]
3%|▎ | 17060/569592 [7:43:59<532:44:40, 3.47s/it]
3%|▎ | 17061/569592 [7:44:04<563:45:26, 3.67s/it]
3%|▎ | 17061/569592 [7:44:04<563:45:26, 3.67s/it]
3%|▎ | 17062/569592 [7:44:04<439:18:09, 2.86s/it]
3%|▎ | 17062/569592 [7:44:05<439:18:09, 2.86s/it]
3%|▎ | 17063/569592 [7:44:09<496:53:59, 3.24s/it]
3%|▎ | 17063/569592 [7:44:09<496:53:59, 3.24s/it]
3%|▎ | 17064/569592 [7:44:10<389:24:47, 2.54s/it]
3%|▎ | 17064/569592 [7:44:10<389:24:47, 2.54s/it]
3%|▎ | 17065/569592 [7:44:10<317:31:00, 2.07s/it]
3%|▎ | 17065/569592 [7:44:11<317:31:00, 2.07s/it]
3%|▎ | 17066/569592 [7:44:11<265:30:06, 1.73s/it]
3%|▎ | 17066/569592 [7:44:11<265:30:06, 1.73s/it]
3%|▎ | 17067/569592 [7:44:16<379:35:56, 2.47s/it]
3%|▎ | 17067/569592 [7:44:16<379:35:56, 2.47s/it]
3%|▎ | 17068/569592 [7:44:17<308:05:49, 2.01s/it]
3%|▎ | 17068/569592 [7:44:17<308:05:49, 2.01s/it]
3%|▎ | 17069/569592 [7:44:17<258:06:28, 1.68s/it]
3%|▎ | 17069/569592 [7:44:18<258:06:28, 1.68s/it]
3%|▎ | 17070/569592 [7:44:18<225:16:21, 1.47s/it]
3%|▎ | 17070/569592 [7:44:18<225:16:21, 1.47s/it]
3%|▎ | 17071/569592 [7:44:23<345:09:19, 2.25s/it]
3%|▎ | 17071/569592 [7:44:23<345:09:19, 2.25s/it]
3%|▎ | 17072/569592 [7:44:23<284:10:34, 1.85s/it]
3%|▎ | 17072/569592 [7:44:24<284:10:34, 1.85s/it]
3%|▎ | 17073/569592 [7:44:28<408:51:33, 2.66s/it]
3%|▎ | 17073/569592 [7:44:28<408:51:33, 2.66s/it]
3%|▎ | 17074/569592 [7:44:29<329:51:12, 2.15s/it]
3%|▎ | 17074/569592 [7:44:29<329:51:12, 2.15s/it]
3%|▎ | 17075/569592 [7:44:30<276:02:07, 1.80s/it]
3%|▎ | 17075/569592 [7:44:30<276:02:07, 1.80s/it]
3%|▎ | 17076/569592 [7:44:34<372:20:23, 2.43s/it]
3%|▎ | 17076/569592 [7:44:34<372:20:23, 2.43s/it]
3%|▎ | 17077/569592 [7:44:38<447:17:10, 2.91s/it]
3%|▎ | 17077/569592 [7:44:38<447:17:10, 2.91s/it]
3%|▎ | 17078/569592 [7:44:39<355:56:21, 2.32s/it]
3%|▎ | 17078/569592 [7:44:39<355:56:21, 2.32s/it]
3%|▎ | 17079/569592 [7:44:40<291:19:53, 1.90s/it]
3%|▎ | 17079/569592 [7:44:40<291:19:53, 1.90s/it]
3%|▎ | 17080/569592 [7:44:42<322:09:37, 2.10s/it]
3%|▎ | 17080/569592 [7:44:42<322:09:37, 2.10s/it]
3%|▎ | 17081/569592 [7:44:44<290:16:02, 1.89s/it]
3%|▎ | 17081/569592 [7:44:44<290:16:02, 1.89s/it]
3%|▎ | 17082/569592 [7:44:48<412:09:58, 2.69s/it]
3%|▎ | 17082/569592 [7:44:48<412:09:58, 2.69s/it]
3%|▎ | 17083/569592 [7:44:52<457:54:03, 2.98s/it]
3%|▎ | 17083/569592 [7:44:52<457:54:03, 2.98s/it]
3%|▎ | 17084/569592 [7:44:56<498:17:47, 3.25s/it]
3%|▎ | 17084/569592 [7:44:56<498:17:47, 3.25s/it]
3%|▎ | 17085/569592 [7:44:57<392:31:48, 2.56s/it]
3%|▎ | 17085/569592 [7:44:57<392:31:48, 2.56s/it]
3%|▎ | 17086/569592 [7:45:00<431:58:56, 2.81s/it]
3%|▎ | 17086/569592 [7:45:00<431:58:56, 2.81s/it]
3%|▎ | 17087/569592 [7:45:04<483:39:42, 3.15s/it]
3%|▎ | 17087/569592 [7:45:04<483:39:42, 3.15s/it]
3%|▎ | 17088/569592 [7:45:08<525:44:02, 3.43s/it]
3%|▎ | 17088/569592 [7:45:08<525:44:02, 3.43s/it]
3%|▎ | 17089/569592 [7:45:09<410:21:14, 2.67s/it]
3%|▎ | 17089/569592 [7:45:09<410:21:14, 2.67s/it]
3%|▎ | 17090/569592 [7:45:14<497:10:42, 3.24s/it]
3%|▎ | 17090/569592 [7:45:14<497:10:42, 3.24s/it]
3%|▎ | 17091/569592 [7:45:15<391:59:36, 2.55s/it]
3%|▎ | 17091/569592 [7:45:15<391:59:36, 2.55s/it]
3%|▎ | 17092/569592 [7:45:19<470:14:30, 3.06s/it]
3%|▎ | 17092/569592 [7:45:19<470:14:30, 3.06s/it]
3%|▎ | 17093/569592 [7:45:20<370:58:16, 2.42s/it]
3%|▎ | 17093/569592 [7:45:20<370:58:16, 2.42s/it]
3%|▎ | 17094/569592 [7:45:23<429:06:13, 2.80s/it]
3%|▎ | 17094/569592 [7:45:23<429:06:13, 2.80s/it]
3%|▎ | 17095/569592 [7:45:27<451:46:43, 2.94s/it]
3%|▎ | 17095/569592 [7:45:27<451:46:43, 2.94s/it]
3%|▎ | 17096/569592 [7:45:30<489:17:42, 3.19s/it]
3%|▎ | 17096/569592 [7:45:30<489:17:42, 3.19s/it]
3%|▎ | 17097/569592 [7:45:31<386:31:36, 2.52s/it]
3%|▎ | 17097/569592 [7:45:31<386:31:36, 2.52s/it]
3%|▎ | 17098/569592 [7:45:35<440:04:30, 2.87s/it]
3%|▎ | 17098/569592 [7:45:35<440:04:30, 2.87s/it]
3%|▎ | 17099/569592 [7:45:39<505:16:45, 3.29s/it]
3%|▎ | 17099/569592 [7:45:39<505:16:45, 3.29s/it]
3%|▎ | 17100/569592 [7:45:43<540:38:32, 3.52s/it]
3%|▎ | 17100/569592 [7:45:43<540:38:32, 3.52s/it]
3%|▎ | 17101/569592 [7:45:44<419:56:18, 2.74s/it]
3%|▎ | 17101/569592 [7:45:44<419:56:18, 2.74s/it]
3%|▎ | 17102/569592 [7:45:48<472:48:47, 3.08s/it]
3%|▎ | 17102/569592 [7:45:48<472:48:47, 3.08s/it]
3%|▎ | 17103/569592 [7:45:52<496:19:47, 3.23s/it]
3%|▎ | 17103/569592 [7:45:52<496:19:47, 3.23s/it]
3%|▎ | 17104/569592 [7:45:56<519:04:19, 3.38s/it]
3%|▎ | 17104/569592 [7:45:56<519:04:19, 3.38s/it]
3%|▎ | 17105/569592 [7:45:56<406:04:59, 2.65s/it]
3%|▎ | 17105/569592 [7:45:56<406:04:59, 2.65s/it]
3%|▎ | 17106/569592 [7:46:01<482:23:50, 3.14s/it]
3%|▎ | 17106/569592 [7:46:01<482:23:50, 3.14s/it]
3%|▎ | 17107/569592 [7:46:02<380:30:50, 2.48s/it]
3%|▎ | 17107/569592 [7:46:02<380:30:50, 2.48s/it]
3%|▎ | 17108/569592 [7:46:06<454:45:28, 2.96s/it]
3%|▎ | 17108/569592 [7:46:06<454:45:28, 2.96s/it]
3%|▎ | 17109/569592 [7:46:07<360:51:52, 2.35s/it]
3%|▎ | 17109/569592 [7:46:07<360:51:52, 2.35s/it]
3%|▎ | 17110/569592 [7:46:12<499:52:25, 3.26s/it]
3%|▎ | 17110/569592 [7:46:12<499:52:25, 3.26s/it]
3%|▎ | 17111/569592 [7:46:16<540:25:39, 3.52s/it]
3%|▎ | 17111/569592 [7:46:16<540:25:39, 3.52s/it]
3%|▎ | 17112/569592 [7:46:20<555:39:56, 3.62s/it]
3%|▎ | 17112/569592 [7:46:20<555:39:56, 3.62s/it]
3%|▎ | 17113/569592 [7:46:24<574:33:25, 3.74s/it]
3%|▎ | 17113/569592 [7:46:24<574:33:25, 3.74s/it]
3%|▎ | 17114/569592 [7:46:27<540:40:02, 3.52s/it]
3%|▎ | 17114/569592 [7:46:27<540:40:02, 3.52s/it]
3%|▎ | 17115/569592 [7:46:30<530:43:39, 3.46s/it]
3%|▎ | 17115/569592 [7:46:30<530:43:39, 3.46s/it]
3%|▎ | 17116/569592 [7:46:34<534:11:45, 3.48s/it]
3%|▎ | 17116/569592 [7:46:34<534:11:45, 3.48s/it]
3%|▎ | 17117/569592 [7:46:38<542:00:49, 3.53s/it]
3%|▎ | 17117/569592 [7:46:38<542:00:49, 3.53s/it]
3%|▎ | 17118/569592 [7:46:39<422:01:45, 2.75s/it]
3%|▎ | 17118/569592 [7:46:39<422:01:45, 2.75s/it]
3%|▎ | 17119/569592 [7:46:42<453:46:21, 2.96s/it]
3%|▎ | 17119/569592 [7:46:42<453:46:21, 2.96s/it]
3%|▎ | 17120/569592 [7:46:43<361:43:31, 2.36s/it]
3%|▎ | 17120/569592 [7:46:43<361:43:31, 2.36s/it]
3%|▎ | 17121/569592 [7:46:47<438:53:43, 2.86s/it]
3%|▎ | 17121/569592 [7:46:47<438:53:43, 2.86s/it]
3%|▎ | 17122/569592 [7:46:51<471:23:06, 3.07s/it]
3%|▎ | 17122/569592 [7:46:51<471:23:06, 3.07s/it]
3%|▎ | 17123/569592 [7:46:54<481:40:14, 3.14s/it]
3%|▎ | 17123/569592 [7:46:54<481:40:14, 3.14s/it]
3%|▎ | 17124/569592 [7:46:58<506:31:01, 3.30s/it]
3%|▎ | 17124/569592 [7:46:58<506:31:01, 3.30s/it]
3%|▎ | 17125/569592 [7:46:58<399:08:49, 2.60s/it]
3%|▎ | 17125/569592 [7:46:58<399:08:49, 2.60s/it]
3%|▎ | 17126/569592 [7:47:02<451:53:39, 2.94s/it]
3%|▎ | 17126/569592 [7:47:02<451:53:39, 2.94s/it]
3%|▎ | 17127/569592 [7:47:05<461:22:00, 3.01s/it]
3%|▎ | 17127/569592 [7:47:05<461:22:00, 3.01s/it]
3%|▎ | 17128/569592 [7:47:09<492:16:41, 3.21s/it]
3%|▎ | 17128/569592 [7:47:09<492:16:41, 3.21s/it]
3%|▎ | 17129/569592 [7:47:12<496:42:49, 3.24s/it]
3%|▎ | 17129/569592 [7:47:12<496:42:49, 3.24s/it]
3%|▎ | 17130/569592 [7:47:16<520:02:24, 3.39s/it]
3%|▎ | 17130/569592 [7:47:16<520:02:24, 3.39s/it]
3%|▎ | 17131/569592 [7:47:19<517:47:37, 3.37s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (98783895 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (93413376 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
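These DecompressionBombWarning lines are Pillow's safety check: any image whose pixel count exceeds Image.MAX_IMAGE_PIXELS (89478485 by default, as the message says) triggers the warning, and anything over twice that limit raises DecompressionBombError. If the oversized images in this dataset are trusted, the threshold could be raised in the data-loading code before any image is opened. A minimal sketch, assuming trusted data; the 120-million-pixel value is illustrative and not part of the original job script:

    from PIL import Image

    # Raise Pillow's decompression-bomb threshold so the ~90-105M-pixel
    # images seen in this log load without warning. Setting it to None
    # disables the check entirely; only do that for trusted data.
    Image.MAX_IMAGE_PIXELS = 120_000_000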
[progress: 3%|▎ | 17132→17180/569592, elapsed 7:47:20→7:49:40]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[progress: 3%|▎ | 17181→17207/569592, elapsed 7:49:43→7:50:44]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[progress: 3%|▎ | 17208→17251/569592, elapsed 7:50:45→7:52:51]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (104235390 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[progress: 3%|▎ | 17252→17283/569592, elapsed 7:52:55→7:54:16]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 17284/569592 [7:54:19<478:35:42, 3.12s/it]
[progress: 3%|▎ | 17285→17300/569592, elapsed 7:54:23→7:54:59]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[progress: 3%|▎ | 17301→17316/569592, elapsed 7:55:02→7:55:40]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[progress: 3%|▎ | 17317→17349/569592, elapsed 7:55:44→7:57:23]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (92540748 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[progress: 3%|▎ | 17350→17370/569592, elapsed 7:57:24→7:58:25]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (95042851 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[progress: 3%|▎ | 17371→17387/569592, elapsed 7:58:28→7:59:12]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[progress: 3%|▎ | 17388→17430/569592, elapsed 7:59:16→8:00:59]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (97353185 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 17431/569592 [8:01:02<360:25:23, 2.35s/it]
[progress: 3%|▎ | 17432→17474/569592, elapsed 8:01:05→8:03:09]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[progress: 3%|▎ | 17475→17599/569592, elapsed 8:03:12→8:09:00]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (96000000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[progress: 3%|▎ | 17600→17708/569592, elapsed 8:09:04→8:14:04]
3%|▎ | 17708/569592 [8:14:04<523:05:31, 3.41s/it]
3%|▎ | 17709/569592 [8:14:07<506:27:12, 3.30s/it]
3%|▎ | 17709/569592 [8:14:07<506:27:12, 3.30s/it]
3%|▎ | 17710/569592 [8:14:10<506:52:19, 3.31s/it]
3%|▎ | 17710/569592 [8:14:10<506:52:19, 3.31s/it]
3%|▎ | 17711/569592 [8:14:13<497:12:57, 3.24s/it]
3%|▎ | 17711/569592 [8:14:13<497:12:57, 3.24s/it]
3%|▎ | 17712/569592 [8:14:14<389:45:33, 2.54s/it]
3%|▎ | 17712/569592 [8:14:14<389:45:33, 2.54s/it]
3%|▎ | 17713/569592 [8:14:18<434:02:21, 2.83s/it]
3%|▎ | 17713/569592 [8:14:18<434:02:21, 2.83s/it]
3%|▎ | 17714/569592 [8:14:22<474:41:06, 3.10s/it]
3%|▎ | 17714/569592 [8:14:22<474:41:06, 3.10s/it]
3%|▎ | 17715/569592 [8:14:22<374:59:36, 2.45s/it]
3%|▎ | 17715/569592 [8:14:22<374:59:36, 2.45s/it]
3%|▎ | 17716/569592 [8:14:26<424:22:27, 2.77s/it]
3%|▎ | 17716/569592 [8:14:26<424:22:27, 2.77s/it]
3%|▎ | 17717/569592 [8:14:29<454:48:40, 2.97s/it]
3%|▎ | 17717/569592 [8:14:29<454:48:40, 2.97s/it]
3%|▎ | 17718/569592 [8:14:33<463:04:41, 3.02s/it]
3%|▎ | 17718/569592 [8:14:33<463:04:41, 3.02s/it]
3%|▎ | 17719/569592 [8:14:34<370:20:17, 2.42s/it]
3%|▎ | 17719/569592 [8:14:34<370:20:17, 2.42s/it]
3%|▎ | 17720/569592 [8:14:38<442:36:53, 2.89s/it]
3%|▎ | 17720/569592 [8:14:38<442:36:53, 2.89s/it]
3%|▎ | 17721/569592 [8:14:39<359:17:10, 2.34s/it]
3%|▎ | 17721/569592 [8:14:39<359:17:10, 2.34s/it]
3%|▎ | 17722/569592 [8:14:43<435:28:35, 2.84s/it]
3%|▎ | 17722/569592 [8:14:43<435:28:35, 2.84s/it]
3%|▎ | 17723/569592 [8:14:46<464:44:29, 3.03s/it]
3%|▎ | 17723/569592 [8:14:46<464:44:29, 3.03s/it]
3%|▎ | 17724/569592 [8:14:49<482:35:01, 3.15s/it]
3%|▎ | 17724/569592 [8:14:49<482:35:01, 3.15s/it]
3%|▎ | 17725/569592 [8:14:50<378:47:55, 2.47s/it]
3%|▎ | 17725/569592 [8:14:50<378:47:55, 2.47s/it]
3%|▎ | 17726/569592 [8:14:54<437:24:24, 2.85s/it]
3%|▎ | 17726/569592 [8:14:54<437:24:24, 2.85s/it]
3%|▎ | 17727/569592 [8:14:58<476:45:52, 3.11s/it]
3%|▎ | 17727/569592 [8:14:58<476:45:52, 3.11s/it]
3%|▎ | 17728/569592 [8:15:02<540:58:25, 3.53s/it]
3%|▎ | 17728/569592 [8:15:02<540:58:25, 3.53s/it]
3%|▎ | 17729/569592 [8:15:06<540:44:12, 3.53s/it]
3%|▎ | 17729/569592 [8:15:06<540:44:12, 3.53s/it]
3%|▎ | 17730/569592 [8:15:09<518:49:22, 3.38s/it]
3%|▎ | 17730/569592 [8:15:09<518:49:22, 3.38s/it]
3%|▎ | 17731/569592 [8:15:12<511:55:00, 3.34s/it]
3%|▎ | 17731/569592 [8:15:12<511:55:00, 3.34s/it]
3%|▎ | 17732/569592 [8:15:16<512:48:06, 3.35s/it]
3%|▎ | 17732/569592 [8:15:16<512:48:06, 3.35s/it]
3%|▎ | 17733/569592 [8:15:19<518:45:49, 3.38s/it]
3%|▎ | 17733/569592 [8:15:19<518:45:49, 3.38s/it]
3%|▎ | 17734/569592 [8:15:22<505:22:10, 3.30s/it]
3%|▎ | 17734/569592 [8:15:22<505:22:10, 3.30s/it]
3%|▎ | 17735/569592 [8:15:25<497:14:30, 3.24s/it]
3%|▎ | 17735/569592 [8:15:25<497:14:30, 3.24s/it]
3%|▎ | 17736/569592 [8:15:26<391:40:23, 2.56s/it]
3%|▎ | 17736/569592 [8:15:26<391:40:23, 2.56s/it]
3%|▎ | 17737/569592 [8:15:27<317:09:50, 2.07s/it]
3%|▎ | 17737/569592 [8:15:27<317:09:50, 2.07s/it]
3%|▎ | 17738/569592 [8:15:31<390:52:40, 2.55s/it]
3%|▎ | 17738/569592 [8:15:31<390:52:40, 2.55s/it]
3%|▎ | 17739/569592 [8:15:32<317:41:03, 2.07s/it]
3%|▎ | 17739/569592 [8:15:32<317:41:03, 2.07s/it]
3%|▎ | 17740/569592 [8:15:36<430:53:02, 2.81s/it]
3%|▎ | 17740/569592 [8:15:36<430:53:02, 2.81s/it]
3%|▎ | 17741/569592 [8:15:40<459:18:19, 3.00s/it]
3%|▎ | 17741/569592 [8:15:40<459:18:19, 3.00s/it]
3%|▎ | 17742/569592 [8:15:44<504:52:57, 3.29s/it]
3%|▎ | 17742/569592 [8:15:44<504:52:57, 3.29s/it]
3%|▎ | 17743/569592 [8:15:48<532:54:36, 3.48s/it]
3%|▎ | 17743/569592 [8:15:48<532:54:36, 3.48s/it]
3%|▎ | 17744/569592 [8:15:51<522:32:36, 3.41s/it]
3%|▎ | 17744/569592 [8:15:51<522:32:36, 3.41s/it]
3%|▎ | 17745/569592 [8:15:52<407:39:17, 2.66s/it]
3%|▎ | 17745/569592 [8:15:52<407:39:17, 2.66s/it]
3%|▎ | 17746/569592 [8:15:53<329:33:07, 2.15s/it]
3%|▎ | 17746/569592 [8:15:53<329:33:07, 2.15s/it]
3%|▎ | 17747/569592 [8:15:57<414:18:06, 2.70s/it]
3%|▎ | 17747/569592 [8:15:57<414:18:06, 2.70s/it]
3%|▎ | 17748/569592 [8:15:58<332:56:23, 2.17s/it]
3%|▎ | 17748/569592 [8:15:58<332:56:23, 2.17s/it]
3%|▎ | 17749/569592 [8:16:01<402:39:42, 2.63s/it]
3%|▎ | 17749/569592 [8:16:01<402:39:42, 2.63s/it]
3%|▎ | 17750/569592 [8:16:02<325:17:53, 2.12s/it]
3%|▎ | 17750/569592 [8:16:02<325:17:53, 2.12s/it]
3%|▎ | 17751/569592 [8:16:06<402:23:59, 2.63s/it]
3%|▎ | 17751/569592 [8:16:06<402:23:59, 2.63s/it]
3%|▎ | 17752/569592 [8:16:09<436:43:35, 2.85s/it]
3%|▎ | 17752/569592 [8:16:09<436:43:35, 2.85s/it]
3%|▎ | 17753/569592 [8:16:13<465:54:44, 3.04s/it]
3%|▎ | 17753/569592 [8:16:13<465:54:44, 3.04s/it]
3%|▎ | 17754/569592 [8:16:18<561:59:25, 3.67s/it]
3%|▎ | 17754/569592 [8:16:18<561:59:25, 3.67s/it]
3%|▎ | 17755/569592 [8:16:19<441:01:31, 2.88s/it]
3%|▎ | 17755/569592 [8:16:19<441:01:31, 2.88s/it]
3%|▎ | 17756/569592 [8:16:20<353:56:29, 2.31s/it]
3%|▎ | 17756/569592 [8:16:20<353:56:29, 2.31s/it]
3%|▎ | 17757/569592 [8:16:24<425:10:25, 2.77s/it]
3%|▎ | 17757/569592 [8:16:24<425:10:25, 2.77s/it]
3%|▎ | 17758/569592 [8:16:25<340:33:24, 2.22s/it]
3%|▎ | 17758/569592 [8:16:25<340:33:24, 2.22s/it]
3%|▎ | 17759/569592 [8:16:26<281:42:43, 1.84s/it]
3%|▎ | 17759/569592 [8:16:26<281:42:43, 1.84s/it]
3%|▎ | 17760/569592 [8:16:30<373:42:11, 2.44s/it]
3%|▎ | 17760/569592 [8:16:30<373:42:11, 2.44s/it]
3%|▎ | 17761/569592 [8:16:33<413:45:47, 2.70s/it]
3%|▎ | 17761/569592 [8:16:33<413:45:47, 2.70s/it]
3%|▎ | 17762/569592 [8:16:34<332:14:00, 2.17s/it]
3%|▎ | 17762/569592 [8:16:34<332:14:00, 2.17s/it]
3%|▎ | 17763/569592 [8:16:35<288:01:47, 1.88s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (101412842 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
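
The two lines above are Pillow's decompression-bomb guard, not a crash: Image.open() warns whenever a file's pixel count exceeds Image.MAX_IMAGE_PIXELS (89478485 by default, about 256 MB of raw data for a 24-bit RGB image) and only raises DecompressionBombError past twice that limit. A minimal sketch of the mechanism, assuming the oversized training images are trusted; the file name is hypothetical:

    # Minimal sketch of Pillow's decompression-bomb guard.
    from PIL import Image

    # Default limit: images above this pixel count trigger
    # DecompressionBombWarning; above 2x they raise DecompressionBombError.
    print(Image.MAX_IMAGE_PIXELS)  # 89478485

    # For trusted data the limit can be raised (or disabled with None)
    # before the dataloader opens anything:
    Image.MAX_IMAGE_PIXELS = 200_000_000  # covers the ~101 MP image above

    with Image.open("large_training_image.png") as im:  # hypothetical file
        im.load()  # no warning now that the count is under the new limit
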
3%|▎ | 17764/569592 [8:16:36<258:58:51, 1.69s/it]
...
3%|▎ | 17986/569592 [8:27:07<305:08:12, 1.99s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (93160704 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 17987/569592 [8:27:11<391:32:01, 2.56s/it]
3%|▎ | 17988/569592 [8:27:15<444:23:14, 2.90s/it]
3%|▎ | 17989/569592 [8:27:16<355:21:25, 2.32s/it]
3%|▎ | 17990/569592 [8:27:17<293:06:40, 1.91s/it]
3%|▎ | 17991/569592 [8:27:18<248:14:20, 1.62s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90750000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 17992/569592 [8:27:21<333:33:29, 2.18s/it]
3%|▎ | 17993/569592 [8:27:25<398:56:19, 2.60s/it]
3%|▎ | 17994/569592 [8:27:29<462:32:34, 3.02s/it]
3%|▎ | 17995/569592 [8:27:32<483:33:28, 3.16s/it]
3%|▎ | 17996/569592 [8:27:33<386:19:10, 2.52s/it]
3%|▎ | 17997/569592 [8:27:34<314:41:54, 2.05s/it]
3%|▎ | 17998/569592 [8:27:38<369:44:32, 2.41s/it]
3%|▎ | 17999/569592 [8:27:41<408:49:15, 2.67s/it]
3%|▎ | 18000/569592 [8:27:42<329:58:30, 2.15s/it]
Saving model checkpoint to /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-18000
Configuration saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-18000/config.json
Configuration saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-18000/generation_config.json
The model is bigger than the maximum size per checkpoint (5GB) and is going to be split into 6 checkpoint shards. You can find where each parameter has been saved in the index located at /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-18000/model.safetensors.index.json.
tokenizer config file saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-18000/tokenizer_config.json
Special tokens file saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-18000/special_tokens_map.json
Deleting older checkpoint [/fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-17000] due to args.save_total_limit
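
For reference, a sharded checkpoint like checkpoint-18000 reloads transparently: from_pretrained() follows model.safetensors.index.json to pull in each of the 6 shards in turn. A minimal sketch; the Auto class and the save_steps/save_total_limit values are assumptions read off the messages above, not confirmed from the training script:

    from transformers import AutoModelForCausalLM, TrainingArguments

    ckpt = "/fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-18000"

    # from_pretrained() reads model.safetensors.index.json and loads the
    # shards one by one, so a >5GB model reloads exactly like a
    # single-file checkpoint.
    model = AutoModelForCausalLM.from_pretrained(ckpt)  # class is an assumption

    # The "Deleting older checkpoint ... due to args.save_total_limit" line
    # implies Trainer settings along these lines (exact values are assumptions):
    args = TrainingArguments(
        output_dir="/fsx_0/user/zhaojiang/models/qwen-vl-gen",
        save_strategy="steps",
        save_steps=1000,       # checkpoint-17000, then checkpoint-18000
        save_total_limit=1,    # keep only the newest checkpoint on disk
    )
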
3%|▎ | 18001/569592 [8:30:00<6571:42:10, 42.89s/it]
3%|▎ | 18002/569592 [8:30:04<4798:20:24, 31.32s/it]
3%|▎ | 18003/569592 [8:30:08<3524:24:00, 23.00s/it]
...
3%|▎ | 18060/569592 [8:32:55<596:41:44, 3.89s/it]
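
The 42.89 s/it spike at step 18001 above is the checkpoint write being folded into tqdm's smoothed rate, not a training slowdown; the estimate decays back toward ~3 s/it over the following steps. A quick sanity check on the numbers from the log:

    # Step 18000 finished at 8:27:42 elapsed; step 18001 printed at 8:30:00.
    save_gap_s = (8*3600 + 30*60 + 0) - (8*3600 + 27*60 + 42)
    print(save_gap_s)  # 138 seconds spent saving (and pruning) the checkpoint

    # tqdm reports an exponential moving average of s/it, so one 138 s
    # "iteration" drags the displayed rate to 42.89 s/it and the ETA to
    # 6571:42:10 (42.89 * (569592 - 18001) iterations is roughly 6572 hours),
    # after which the average decays back: 31.32, 23.00, ... s/it.
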
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (103829067 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 18061/569592 [8:33:00<649:46:14, 4.24s/it]
...
3%|▎ | 18109/569592 [8:35:25<349:26:17, 2.28s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 18110/569592 [8:35:30<447:05:40, 2.92s/it]
...
3%|▎ | 18125/569592 [8:36:12<365:32:49, 2.39s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90750000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 18126/569592 [8:36:13<299:48:58, 1.96s/it]
3%|▎ | 18127/569592 [8:36:14<253:10:50, 1.65s/it]
3%|▎ | 18128/569592 [8:36:15<221:41:30, 1.45s/it]
3%|▎ | 18129/569592 [8:36:16<205:29:31, 1.34s/it]
3%|▎ | 18130/569592 [8:36:20<338:18:38, 2.21s/it]
3%|▎ | 18131/569592 [8:36:21<296:42:33, 1.94s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 18132/569592 [8:36:22<249:46:51, 1.63s/it]
3%|▎ | 18133/569592 [8:36:26<350:37:04, 2.29s/it]
3%|▎ | 18134/569592 [8:36:27<291:59:40, 1.91s/it]
3%|▎ | 18135/569592 [8:36:31<395:12:32, 2.58s/it]
3%|▎ | 18136/569592 [8:36:35<437:17:05, 2.85s/it]
3%|▎ | 18137/569592 [8:36:38<462:33:56, 3.02s/it]
3%|▎ | 18138/569592 [8:36:42<476:06:58, 3.11s/it]
3%|▎ | 18139/569592 [8:36:46<521:05:08, 3.40s/it]
3%|▎ | 18140/569592 [8:36:50<559:28:19, 3.65s/it]
3%|▎ | 18141/569592 [8:36:53<536:33:15, 3.50s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 18142/569592 [8:36:57<555:43:01, 3.63s/it]
...
3%|▎ | 18163/569592 [8:37:54<458:12:56, 2.99s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 18164/569592 [8:37:58<508:16:11, 3.32s/it]
...
3%|▎ | 18248/569592 [8:41:43<207:44:33, 1.36s/it]
3%|▎ | 18249/569592 [8:41:43<187:52:41, 1.23s/it]
3%|▎ | 18249/569592 [8:41:43<187:52:41, 1.23s/it]
3%|▎ | 18250/569592 [8:41:47<301:28:53, 1.97s/it]
3%|▎ | 18250/569592 [8:41:47<301:28:53, 1.97s/it]
3%|▎ | 18251/569592 [8:41:48<259:19:16, 1.69s/it]
3%|▎ | 18251/569592 [8:41:48<259:19:16, 1.69s/it]
3%|▎ | 18252/569592 [8:41:51<331:57:05, 2.17s/it]
3%|▎ | 18252/569592 [8:41:51<331:57:05, 2.17s/it]
3%|▎ | 18253/569592 [8:41:53<299:48:16, 1.96s/it]
3%|▎ | 18253/569592 [8:41:53<299:48:16, 1.96s/it]
3%|▎ | 18254/569592 [8:41:56<365:32:40, 2.39s/it]
3%|▎ | 18254/569592 [8:41:56<365:32:40, 2.39s/it]
3%|▎ | 18255/569592 [8:42:00<423:16:04, 2.76s/it]
3%|▎ | 18255/569592 [8:42:00<423:16:04, 2.76s/it]
3%|▎ | 18256/569592 [8:42:03<448:42:01, 2.93s/it]
3%|▎ | 18256/569592 [8:42:03<448:42:01, 2.93s/it]
3%|▎ | 18257/569592 [8:42:08<534:23:36, 3.49s/it]
3%|▎ | 18257/569592 [8:42:08<534:23:36, 3.49s/it]
3%|▎ | 18258/569592 [8:42:12<552:18:05, 3.61s/it]
3%|▎ | 18258/569592 [8:42:12<552:18:05, 3.61s/it]
3%|▎ | 18259/569592 [8:42:13<431:01:49, 2.81s/it]
3%|▎ | 18259/569592 [8:42:13<431:01:49, 2.81s/it]
3%|▎ | 18260/569592 [8:42:17<484:13:03, 3.16s/it]
3%|▎ | 18260/569592 [8:42:17<484:13:03, 3.16s/it]
3%|▎ | 18261/569592 [8:42:21<507:02:59, 3.31s/it]
3%|▎ | 18261/569592 [8:42:21<507:02:59, 3.31s/it]
3%|▎ | 18262/569592 [8:42:24<501:34:14, 3.28s/it]
3%|▎ | 18262/569592 [8:42:24<501:34:14, 3.28s/it]
3%|▎ | 18263/569592 [8:42:28<537:50:27, 3.51s/it]
3%|▎ | 18263/569592 [8:42:28<537:50:27, 3.51s/it]
3%|▎ | 18264/569592 [8:42:31<535:52:10, 3.50s/it]
3%|▎ | 18264/569592 [8:42:31<535:52:10, 3.50s/it]
3%|▎ | 18265/569592 [8:42:35<523:21:16, 3.42s/it]
3%|▎ | 18265/569592 [8:42:35<523:21:16, 3.42s/it]
3%|▎ | 18266/569592 [8:42:38<521:32:14, 3.41s/it]
3%|▎ | 18266/569592 [8:42:38<521:32:14, 3.41s/it]
3%|▎ | 18267/569592 [8:42:41<524:51:56, 3.43s/it]
3%|▎ | 18267/569592 [8:42:41<524:51:56, 3.43s/it]
3%|▎ | 18268/569592 [8:42:45<551:38:24, 3.60s/it]
3%|▎ | 18268/569592 [8:42:45<551:38:24, 3.60s/it]
3%|▎ | 18269/569592 [8:42:49<549:33:17, 3.59s/it]
3%|▎ | 18269/569592 [8:42:49<549:33:17, 3.59s/it]
3%|▎ | 18270/569592 [8:42:53<552:26:26, 3.61s/it]
3%|▎ | 18270/569592 [8:42:53<552:26:26, 3.61s/it]
3%|▎ | 18271/569592 [8:42:56<537:26:03, 3.51s/it]
3%|▎ | 18271/569592 [8:42:56<537:26:03, 3.51s/it]
3%|▎ | 18272/569592 [8:42:57<418:40:29, 2.73s/it]
3%|▎ | 18272/569592 [8:42:57<418:40:29, 2.73s/it]
3%|▎ | 18273/569592 [8:42:58<336:51:40, 2.20s/it]
3%|▎ | 18273/569592 [8:42:58<336:51:40, 2.20s/it]
3%|▎ | 18274/569592 [8:43:02<428:24:32, 2.80s/it]
3%|▎ | 18274/569592 [8:43:02<428:24:32, 2.80s/it]
3%|▎ | 18275/569592 [8:43:06<471:49:38, 3.08s/it]
3%|▎ | 18275/569592 [8:43:06<471:49:38, 3.08s/it]
3%|▎ | 18276/569592 [8:43:09<476:27:24, 3.11s/it]
3%|▎ | 18276/569592 [8:43:09<476:27:24, 3.11s/it]
3%|▎ | 18277/569592 [8:43:12<487:56:13, 3.19s/it]
3%|▎ | 18277/569592 [8:43:12<487:56:13, 3.19s/it]
3%|▎ | 18278/569592 [8:43:16<496:44:36, 3.24s/it]
3%|▎ | 18278/569592 [8:43:16<496:44:36, 3.24s/it]
3%|▎ | 18279/569592 [8:43:19<497:18:15, 3.25s/it]
3%|▎ | 18279/569592 [8:43:19<497:18:15, 3.25s/it]
3%|▎ | 18280/569592 [8:43:20<389:59:48, 2.55s/it]
3%|▎ | 18280/569592 [8:43:20<389:59:48, 2.55s/it]
3%|▎ | 18281/569592 [8:43:21<315:48:40, 2.06s/it]
3%|▎ | 18281/569592 [8:43:21<315:48:40, 2.06s/it]
3%|▎ | 18282/569592 [8:43:26<447:03:38, 2.92s/it]
3%|▎ | 18282/569592 [8:43:26<447:03:38, 2.92s/it]
3%|▎ | 18283/569592 [8:43:30<494:21:27, 3.23s/it]
3%|▎ | 18283/569592 [8:43:30<494:21:27, 3.23s/it]
3%|▎ | 18284/569592 [8:43:33<504:07:28, 3.29s/it]
3%|▎ | 18284/569592 [8:43:33<504:07:28, 3.29s/it]
3%|▎ | 18285/569592 [8:43:36<513:07:52, 3.35s/it]
3%|▎ | 18285/569592 [8:43:37<513:07:52, 3.35s/it]
3%|▎ | 18286/569592 [8:43:40<507:30:21, 3.31s/it]
3%|▎ | 18286/569592 [8:43:40<507:30:21, 3.31s/it]
3%|▎ | 18287/569592 [8:43:44<561:12:37, 3.66s/it]
3%|▎ | 18287/569592 [8:43:44<561:12:37, 3.66s/it]
3%|▎ | 18288/569592 [8:43:47<527:26:03, 3.44s/it]
3%|▎ | 18288/569592 [8:43:47<527:26:03, 3.44s/it]
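Note: every step above appeared twice in the raw stream, which usually means more than one process created its own tqdm bar (torchrun starts one worker per GPU, and each writes to the captured log). A minimal sketch of the common fix, showing the bar only on global rank 0 — the helper name and the training-loop usage are illustrative, not taken from this job's script:

    import torch.distributed as dist
    from tqdm import tqdm

    def rank0_progress(iterable, **kwargs):
        # Show the bar only on global rank 0; other ranks iterate silently.
        on_rank0 = (not dist.is_initialized()) or dist.get_rank() == 0
        return tqdm(iterable, disable=not on_rank0, **kwargs)

    # Hypothetical usage inside the training loop:
    # for batch in rank0_progress(train_dataloader):
    #     ...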
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90481664 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 18289/569592 [8:43:50<514:55:34, 3.36s/it]
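Note: the DecompressionBombWarning above is PIL's built-in guard: decoding any image whose pixel count exceeds Image.MAX_IMAGE_PIXELS (default 89,478,485, matching the limit in the message) emits this warning, and past roughly twice that limit PIL raises DecompressionBombError instead, so the sample would fail to load. If the oversized images in this corpus are trusted, the usual handling is to raise the limit, or silence the warning, once before decoding; a minimal sketch, assuming trusted data:

    import warnings
    from PIL import Image

    # Raise the threshold above the largest expected image
    # (or set to None to disable the check entirely); default is 89_478_485.
    Image.MAX_IMAGE_PIXELS = 200_000_000

    # Alternatively, keep the limit but suppress the warning for trusted inputs:
    warnings.filterwarnings("ignore", category=Image.DecompressionBombWarning)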
3%|▎ | 18290/569592 [8:43:54<553:33:12, 3.61s/it]
[... per-step tqdm updates for steps 18291–18320 trimmed ...]
3%|▎ | 18321/569592 [8:45:22<455:05:06, 2.97s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (95942730 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 18322/569592 [8:45:23<375:11:19, 2.45s/it]
[... per-step tqdm updates for steps 18323–18445 trimmed ...]
3%|▎ | 18446/569592 [8:51:12<440:29:54, 2.88s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 18447/569592 [8:51:13<353:41:07, 2.31s/it]
3%|▎ | 18448/569592 [8:51:16<407:46:24, 2.66s/it]
[... per-step tqdm updates for steps 18449–18472 trimmed ...]
3%|▎ | 18473/569592 [8:52:21<355:13:55, 2.32s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100663296 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 18474/569592 [8:52:24<408:33:09, 2.67s/it]
3%|▎ | 18475/569592 [8:52:25<331:16:07, 2.16s/it]
3%|▎ | 18476/569592 [8:52:29<378:01:24, 2.47s/it]
3%|▎ | 18477/569592 [8:52:30<308:24:57, 2.01s/it]
3%|▎ | 18478/569592 [8:52:33<371:15:36, 2.43s/it]
3%|▎ | 18479/569592 [8:52:34<303:15:03, 1.98s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (115022592 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 18480/569592 [8:52:38<381:29:36, 2.49s/it]
[... per-step tqdm updates for steps 18481–18489 trimmed ...]
3%|▎ | 18490/569592 [8:53:04<554:07:46, 3.62s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (92055411 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 18491/569592 [8:53:07<538:58:07, 3.52s/it]
[... per-step tqdm updates for steps 18492–18521 trimmed ...]
3%|▎ | 18522/569592 [8:54:48<414:45:34, 2.71s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (95476746 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 18523/569592 [8:54:52<450:26:08, 2.94s/it]
3%|▎ | 18524/569592 [8:54:55<471:25:08, 3.08s/it]
[... per-step tqdm updates for steps 18525–18543 trimmed ...]
3%|▎ | 18544/569592 [8:55:57<516:10:20, 3.37s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 18545/569592 [8:55:58<403:45:41, 2.64s/it]
3%|▎ | 18546/569592 [8:56:01<430:01:48, 2.81s/it]
[... per-step tqdm updates for steps 18547–18576 trimmed ...]
3%|▎ | 18577/569592 [8:57:30<369:14:05, 2.41s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90963360 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 18578/569592 [8:57:34<423:34:33, 2.77s/it]
[... per-step tqdm updates for steps 18579–18701 trimmed ...]
3%|▎ | 18702/569592 [9:03:11<322:22:20, 2.11s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90750000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 18703/569592 [9:03:12<267:48:51, 1.75s/it]
3%|▎ | 18704/569592 [9:03:16<370:34:26, 2.42s/it]
[... per-step tqdm updates for steps 18705–18795 trimmed ...]
3%|▎ | 18796/569592 [9:07:37<484:36:50, 3.17s/it]
3%|▎ | 18796/569592 [9:07:37<484:36:50, 3.17s/it]
3%|▎ | 18797/569592 [9:07:41<494:18:55, 3.23s/it]
3%|▎ | 18797/569592 [9:07:41<494:18:55, 3.23s/it]
3%|▎ | 18798/569592 [9:07:42<387:54:08, 2.54s/it]
3%|▎ | 18798/569592 [9:07:42<387:54:08, 2.54s/it]
3%|▎ | 18799/569592 [9:07:43<315:38:34, 2.06s/it]
3%|▎ | 18799/569592 [9:07:43<315:38:34, 2.06s/it]
3%|▎ | 18800/569592 [9:07:46<389:06:25, 2.54s/it]
3%|▎ | 18800/569592 [9:07:46<389:06:25, 2.54s/it]
3%|▎ | 18801/569592 [9:07:47<317:51:36, 2.08s/it]
3%|▎ | 18801/569592 [9:07:47<317:51:36, 2.08s/it]
3%|▎ | 18802/569592 [9:07:48<265:58:21, 1.74s/it]
3%|▎ | 18802/569592 [9:07:48<265:58:21, 1.74s/it]
3%|▎ | 18803/569592 [9:07:49<230:08:31, 1.50s/it]
3%|▎ | 18803/569592 [9:07:49<230:08:31, 1.50s/it]
3%|▎ | 18804/569592 [9:07:53<322:18:19, 2.11s/it]
3%|▎ | 18804/569592 [9:07:53<322:18:19, 2.11s/it]
3%|▎ | 18805/569592 [9:07:54<269:52:57, 1.76s/it]
3%|▎ | 18805/569592 [9:07:54<269:52:57, 1.76s/it]
3%|▎ | 18806/569592 [9:07:55<232:31:37, 1.52s/it]
3%|▎ | 18806/569592 [9:07:55<232:31:37, 1.52s/it]
3%|▎ | 18807/569592 [9:07:58<343:27:08, 2.24s/it]
3%|▎ | 18807/569592 [9:07:59<343:27:08, 2.24s/it]
3%|▎ | 18808/569592 [9:07:59<284:44:20, 1.86s/it]
3%|▎ | 18808/569592 [9:07:59<284:44:20, 1.86s/it]
3%|▎ | 18809/569592 [9:08:03<355:11:14, 2.32s/it]
3%|▎ | 18809/569592 [9:08:03<355:11:14, 2.32s/it]
3%|▎ | 18810/569592 [9:08:07<454:40:09, 2.97s/it]
3%|▎ | 18810/569592 [9:08:07<454:40:09, 2.97s/it]
3%|▎ | 18811/569592 [9:08:11<500:44:48, 3.27s/it]
3%|▎ | 18811/569592 [9:08:11<500:44:48, 3.27s/it]
3%|▎ | 18812/569592 [9:08:12<394:24:21, 2.58s/it]
3%|▎ | 18812/569592 [9:08:12<394:24:21, 2.58s/it]
3%|▎ | 18813/569592 [9:08:16<450:43:44, 2.95s/it]
3%|▎ | 18813/569592 [9:08:16<450:43:44, 2.95s/it]
3%|▎ | 18814/569592 [9:08:17<359:27:06, 2.35s/it]
3%|▎ | 18814/569592 [9:08:17<359:27:06, 2.35s/it]
3%|▎ | 18815/569592 [9:08:18<295:36:42, 1.93s/it]
3%|▎ | 18815/569592 [9:08:18<295:36:42, 1.93s/it]
3%|▎ | 18816/569592 [9:08:19<251:19:02, 1.64s/it]
3%|▎ | 18816/569592 [9:08:19<251:19:02, 1.64s/it]
3%|▎ | 18817/569592 [9:08:23<352:40:15, 2.31s/it]
3%|▎ | 18817/569592 [9:08:23<352:40:15, 2.31s/it]
3%|▎ | 18818/569592 [9:08:27<423:45:29, 2.77s/it]
3%|▎ | 18818/569592 [9:08:27<423:45:29, 2.77s/it]
3%|▎ | 18819/569592 [9:08:28<341:26:44, 2.23s/it]
3%|▎ | 18819/569592 [9:08:28<341:26:44, 2.23s/it]
3%|▎ | 18820/569592 [9:08:29<284:08:01, 1.86s/it]
3%|▎ | 18820/569592 [9:08:29<284:08:01, 1.86s/it]
3%|▎ | 18821/569592 [9:08:32<372:16:17, 2.43s/it]
3%|▎ | 18821/569592 [9:08:32<372:16:17, 2.43s/it]
3%|▎ | 18822/569592 [9:08:36<429:39:23, 2.81s/it]
3%|▎ | 18822/569592 [9:08:36<429:39:23, 2.81s/it]
3%|▎ | 18823/569592 [9:08:40<468:24:25, 3.06s/it]
3%|▎ | 18823/569592 [9:08:40<468:24:25, 3.06s/it]
3%|▎ | 18824/569592 [9:08:41<374:33:08, 2.45s/it]
3%|▎ | 18824/569592 [9:08:41<374:33:08, 2.45s/it]
3%|▎ | 18825/569592 [9:08:43<378:53:48, 2.48s/it]
3%|▎ | 18825/569592 [9:08:43<378:53:48, 2.48s/it]
3%|▎ | 18826/569592 [9:08:47<436:49:45, 2.86s/it]
3%|▎ | 18826/569592 [9:08:47<436:49:45, 2.86s/it]
3%|▎ | 18827/569592 [9:08:48<350:44:50, 2.29s/it]
3%|▎ | 18827/569592 [9:08:48<350:44:50, 2.29s/it]
3%|▎ | 18828/569592 [9:08:52<410:34:35, 2./home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (89910450 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
68s/it]
3%|▎ | 18828/569592 [9:08:52<410:34:35, 2.68s/it]
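
The DecompressionBombWarning entries scattered through this log come from Pillow's safeguard against decompression-bomb images: the default ceiling is 89478485 pixels, anything above it warns, and anything above twice that raises DecompressionBombError. If the oversized training images are trusted, the ceiling can be raised or the warning silenced. A minimal sketch, assuming the dataset is trusted (this configuration is an assumption, not part of the original training script):

# Hedged sketch, not from the original script: relax Pillow's
# decompression-bomb ceiling for a trusted dataset.
import warnings
from PIL import Image

# Default is 89478485 pixels; the largest image reported in this log is
# ~100.9 MP, so a 128 MP ceiling (hypothetical value) covers all of them.
Image.MAX_IMAGE_PIXELS = 128_000_000

# Alternatively, keep the ceiling but silence the warning:
warnings.filterwarnings("ignore", category=Image.DecompressionBombWarning)
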
[… steps 18829–18870 elided …]
3%|▎ | 18871/569592 [9:10:58<459:13:22, 3.00s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (94163472 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[… steps 18872–18881 elided …]
3%|▎ | 18882/569592 [9:11:26<331:06:25, 2.16s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[… steps 18883–18974 elided …]
3%|▎ | 18975/569592 [9:15:41<487:00:05, 3.18s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90595392 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[… steps 18976–18999 elided …]
3%|▎ | 19000/569592 [9:16:53<484:54:23, 3.17s/it]
Saving model checkpoint to /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-19000
Configuration saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-19000/config.json
Configuration saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-19000/generation_config.json
The model is bigger than the maximum size per checkpoint (5GB) and is going to be split into 6 checkpoint shards. You can find where each parameter has been saved in the index located at /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-19000/model.safetensors.index.json.
tokenizer config file saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-19000/tokenizer_config.json
Special tokens file saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-19000/special_tokens_map.json
Deleting older checkpoint [/fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-18000] due to args.save_total_limit
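
The save/delete pattern above is consistent with a Trainer that checkpoints every 1000 steps and keeps only the most recent checkpoint; the exact values are inferred from the log, not confirmed by the source. A sketch of the implied arguments:

# Hedged reconstruction of the checkpointing setup implied by the log;
# save_steps and save_total_limit are inferred (checkpoint-19000 follows
# checkpoint-18000, and 18000 is deleted right after 19000 is written).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="/fsx_0/user/zhaojiang/models/qwen-vl-gen",
    save_strategy="steps",
    save_steps=1000,       # checkpoint-18000, checkpoint-19000, ...
    save_total_limit=1,    # older checkpoints removed after each save
)

# The "split into 6 checkpoint shards" message is Transformers sharding the
# safetensors weights at the default 5GB limit; when saving manually it
# corresponds to model.save_pretrained(output_dir, max_shard_size="5GB").
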
3%|▎ | 19001/569592 [9:19:11<6706:38:33, 43.85s/it]
[… steps 19002–19008 elided; the smoothed rate decays from ~32 s/it back toward normal as the post-checkpoint stall clears …]
3%|▎ | 19009/569592 [9:19:38<872:45:59, 5.71s/it]
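
Step 19001 alone cost about 2 minutes 18 seconds of wall clock (9:16:53 to 9:19:11) because the checkpoint write and the on_save upload run synchronously on the training process, which also briefly inflates the smoothed ETA to ~6700 hours. One way to take the upload off the critical path is huggingface_hub's background-future mode; a sketch under the assumption that the callback controls the upload call (the repo id is hypothetical, and run_as_future requires a reasonably recent huggingface_hub):

# Hedged sketch: run the checkpoint upload in a background thread so the
# training loop is not blocked. run_as_future is an HfApi option in recent
# huggingface_hub releases; the repo id below is hypothetical.
from huggingface_hub import HfApi

api = HfApi()
future = api.upload_folder(
    repo_id="zhaojiang/qwen-vl-gen",        # hypothetical repo
    folder_path="/fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-19000",
    path_in_repo="checkpoint-19000",
    run_as_future=True,                     # returns concurrent.futures.Future
)
# ... training continues; call future.result() later to surface upload errors.
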
[… steps 19010–19048 elided …]
3%|▎ | 19049/569592 [9:21:29<438:06:09, 2.86s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[… steps 19050–19065 elided …]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90481664 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
3%|▎ | 19066/569592 [9:22:19<463:25:06, 3.03s/it]
[… steps 19067–19130 elided …]
3%|▎ | 19131/569592 [9:25:46<283:53:47, 1.86s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[… steps 19132–19217 elided …]
3%|▎ | 19218/569592 [9:29:41<479:54:37, 3.14s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[… steps 19219–19266 elided …]
3%|▎ | 19267/569592 [9:31:56<459:34:53, 3.01s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90662021 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
[… steps 19268–19272 elided …]
3%|▎ | 19273/569592 [9:32:12
Traceback (most recent call last):
  File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/train/train_mem.py", line 4, in <module>
    train(attn_implementation="flash_attention_2")
  File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/train/train.py", line 1631, in train
    trainer.train(resume_from_checkpoint=True)
  File "/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py", line 2241, in train
    return inner_training_loop(
  File "/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py", line 2612, in _inner_training_loop
    self._maybe_log_save_evaluate(
  File "/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py", line 3093, in _maybe_log_save_evaluate
    self.control = self.callback_handler.on_save(self.args, self.state, self.control)
  File "/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer_callback.py", line 546, in on_save
    return self.call_event("on_save", args, state, control)
  File "/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer_callback.py", line 557, in call_event
    result = getattr(callback, event)(
  File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/train/train.py", line 1608, in on_save
    upload_folder(
  File "/home/zhaojiang/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/home/zhaojiang/.local/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 1524, in _inner
    return fn(self, *args, **kwargs)
  File "/home/zhaojiang/.local/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 4677, in upload_folder
    commit_info = self.create_commit(
  File "/home/zhaojiang/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/home/zhaojiang/.local/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 1524, in _inner
    return fn(self, *args, **kwargs)
  File "/home/zhaojiang/.local/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 3961, in create_commit
    self.preupload_lfs_files(
  File "/home/zhaojiang/.local/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 4215, in preupload_lfs_files
    _upload_lfs_files(
  File "/home/zhaojiang/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/home/zhaojiang/.local/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 452, in _upload_lfs_files
    thread_map(
  File "/usr/local/lib/python3.10/dist-packages/tqdm/contrib/concurrent.py", line 69, in thread_map
    return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/tqdm/contrib/concurrent.py", line 51, in _executor_map
    return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs))
  File "/usr/local/lib/python3.10/dist-packages/tqdm/std.py", line 1181, in __iter__
    for obj in iterable:
  File "/usr/lib/python3.10/concurrent/futures/_base.py", line 621, in result_iterator
    yield _result_or_cancel(fs.pop())
  File "/usr/lib/python3.10/concurrent/futures/_base.py", line 319, in _result_or_cancel
    return fut.result(timeout)
  File "/usr/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/zhaojiang/.local/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 439, in _wrapped_lfs_upload
    raise RuntimeError(f"Error while uploading '{operation.path_in_repo}' to the Hub.") from exc
RuntimeError: Error while uploading 'checkpoint-30000/rng_state_101.pth' to the Hub.
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/zhaojiang/.local/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 406, in hf_raise_for_status
[rank0]: response.raise_for_status()
[rank0]: File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1024, in raise_for_status
[rank0]: raise HTTPError(http_error_msg, response=self)
[rank0]: requests.exceptions.HTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/complete_multipart?uploadId=7Kmb1Pc9aaAEQVAPW5WowD35RLZSwmHasMnyMVnAxQ1kC7POBPjOojl2diFOQb3TaSYujtO6667Xpn5rI4HALaTz8BVAfAMFmsbR7Bz5Oi82KA_gFGLsZtElN8LPu7VQ&bucket=hf-hub-lfs-us-east-1&prefix=repos%2Fe0%2F93%2Fe09307d4e6846d783ee679d0240cc38c52f308be58f71c108a7978856a27c6ad&expiration=Wed%2C+19+Feb+2025+14%3A58%3A44+GMT&signature=a8119b282ccf29579b2529f4bb9bb63120f50b7fad42ccd024a6035b4a738206
[rank0]: The above exception was the direct cause of the following exception:
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/zhaojiang/.local/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 437, in _wrapped_lfs_upload
[rank0]: lfs_upload(operation=operation, lfs_batch_action=batch_action, headers=headers, endpoint=endpoint)
[rank0]: File "/home/zhaojiang/.local/lib/python3.10/site-packages/huggingface_hub/lfs.py", line 246, in lfs_upload
[rank0]: _upload_multi_part(operation=operation, header=header, chunk_size=chunk_size, upload_url=upload_url)
[rank0]: File "/home/zhaojiang/.local/lib/python3.10/site-packages/huggingface_hub/lfs.py", line 355, in _upload_multi_part
[rank0]: hf_raise_for_status(completion_res)
[rank0]: File "/home/zhaojiang/.local/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 477, in hf_raise_for_status
[rank0]: raise _format(HfHubHTTPError, str(e), response) from e
[rank0]: huggingface_hub.errors.HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/complete_multipart?uploadId=7Kmb1Pc9aaAEQVAPW5WowD35RLZSwmHasMnyMVnAxQ1kC7POBPjOojl2diFOQb3TaSYujtO6667Xpn5rI4HALaTz8BVAfAMFmsbR7Bz5Oi82KA_gFGLsZtElN8LPu7VQ&bucket=hf-hub-lfs-us-east-1&prefix=repos%2Fe0%2F93%2Fe09307d4e6846d783ee679d0240cc38c52f308be58f71c108a7978856a27c6ad&expiration=Wed%2C+19+Feb+2025+14%3A58%3A44+GMT&signature=a8119b282ccf29579b2529f4bb9bb63120f50b7fad42ccd024a6035b4a738206
[rank0]: The above exception was the direct cause of the following exception:
[rank0]: Traceback (most recent call last):
[rank0]: File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/train/train_mem.py", line 4, in
[rank0]: train(attn_implementation="flash_attention_2")
[rank0]: File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/train/train.py", line 1631, in train
[rank0]: trainer.train(resume_from_checkpoint=True)
[rank0]: File "/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py", line 2241, in train
[rank0]: return inner_training_loop(
[rank0]: File "/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py", line 2612, in _inner_training_loop
[rank0]: self._maybe_log_save_evaluate(
[rank0]: File "/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py", line 3093, in _maybe_log_save_evaluate
[rank0]: self.control = self.callback_handler.on_save(self.args, self.state, self.control)
[rank0]: File "/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer_callback.py", line 546, in on_save
[rank0]: return self.call_event("on_save", args, state, control)
[rank0]: File "/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer_callback.py", line 557, in call_event
[rank0]: result = getattr(callback, event)(
[rank0]: File "/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/train/train.py", line 1608, in on_save
[rank0]: upload_folder(
[rank0]: File "/home/zhaojiang/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
[rank0]: return fn(*args, **kwargs)
[rank0]: File "/home/zhaojiang/.local/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 1524, in _inner
[rank0]: return fn(self, *args, **kwargs)
[rank0]: File "/home/zhaojiang/.local/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 4677, in upload_folder
[rank0]: commit_info = self.create_commit(
[rank0]: File "/home/zhaojiang/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
[rank0]: return fn(*args, **kwargs)
[rank0]: File "/home/zhaojiang/.local/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 1524, in _inner
[rank0]: return fn(self, *args, **kwargs)
[rank0]: File "/home/zhaojiang/.local/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 3961, in create_commit
[rank0]: self.preupload_lfs_files(
[rank0]: File "/home/zhaojiang/.local/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 4215, in preupload_lfs_files
[rank0]: _upload_lfs_files(
[rank0]: File "/home/zhaojiang/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
[rank0]: return fn(*args, **kwargs)
[rank0]: File "/home/zhaojiang/.local/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 452, in _upload_lfs_files
[rank0]: thread_map(
[rank0]: File "/usr/local/lib/python3.10/dist-packages/tqdm/contrib/concurrent.py", line 69, in thread_map
[rank0]: return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/tqdm/contrib/concurrent.py", line 51, in _executor_map
[rank0]: return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs))
[rank0]: File "/usr/local/lib/python3.10/dist-packages/tqdm/std.py", line 1181, in __iter__
[rank0]: for obj in iterable:
[rank0]: File "/usr/lib/python3.10/concurrent/futures/_base.py", line 621, in result_iterator
[rank0]: yield _result_or_cancel(fs.pop())
[rank0]: File "/usr/lib/python3.10/concurrent/futures/_base.py", line 319, in _result_or_cancel
[rank0]: return fut.result(timeout)
[rank0]: File "/usr/lib/python3.10/concurrent/futures/_base.py", line 451, in result
[rank0]: return self.__get_result()
[rank0]: File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
[rank0]: raise self._exception
[rank0]: File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
[rank0]: result = self.fn(*self.args, **self.kwargs)
[rank0]: File "/home/zhaojiang/.local/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 439, in _wrapped_lfs_upload
[rank0]: raise RuntimeError(f"Error while uploading '{operation.path_in_repo}' to the Hub.") from exc
[rank0]: RuntimeError: Error while uploading 'checkpoint-30000/rng_state_101.pth' to the Hub.
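The root failure in the traceback above is a transient 504 Gateway Time-out from the Hub's complete_multipart endpoint while LFS-uploading checkpoint-30000/rng_state_101.pth; because upload_folder runs synchronously inside the trainer's on_save callback, the unhandled RuntimeError takes the whole run down. A minimal retry sketch (MAX_RETRIES and push_checkpoint are illustrative names, not from train.py):

    import time
    from huggingface_hub import upload_folder
    from huggingface_hub.errors import HfHubHTTPError

    MAX_RETRIES = 3  # illustrative

    def push_checkpoint(folder_path, repo_id, path_in_repo=None):
        # Retry transient Hub errors (such as the 504 seen here) with backoff
        # instead of letting them propagate out of on_save.
        for attempt in range(1, MAX_RETRIES + 1):
            try:
                return upload_folder(folder_path=folder_path, repo_id=repo_id,
                                     path_in_repo=path_in_repo)
            except (HfHubHTTPError, RuntimeError):
                if attempt == MAX_RETRIES:
                    raise
                time.sleep(30 * 2 ** (attempt - 1))  # 30s, 60s, ...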
[rank0]:[W218 15:00:19.506684404 ProcessGroupNCCL.cpp:1250] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present, but this warning has only been added since PyTorch 2.4 (function operator())
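The warning above is PyTorch 2.4+ asking the application to tear down the process group explicitly on exit. A minimal sketch of the cleanup it requests, assuming the usual torch.distributed setup:

    import torch.distributed as dist

    # After the last collective (end of training, or in a finally block):
    if dist.is_initialized():
        dist.destroy_process_group()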
[rank13]:[E218 15:08:11.511786445 ProcessGroupNCCL.cpp:616] [Rank 13] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600041 milliseconds before timing out.
[rank13]:[E218 15:08:11.532294490 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 13] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank28]:[E218 15:08:11.148231267 ProcessGroupNCCL.cpp:616] [Rank 28] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600036 milliseconds before timing out.
[rank45]:[E218 15:08:11.082260003 ProcessGroupNCCL.cpp:616] [Rank 45] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600035 milliseconds before timing out.
[rank28]:[E218 15:08:11.163320437 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 28] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank45]:[E218 15:08:11.094462969 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 45] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank50]:[E218 15:08:11.210762659 ProcessGroupNCCL.cpp:616] [Rank 50] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600010 milliseconds before timing out.
[rank12]:[E218 15:08:11.798960516 ProcessGroupNCCL.cpp:616] [Rank 12] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600015 milliseconds before timing out.
[rank89]:[E218 15:08:11.808220410 ProcessGroupNCCL.cpp:616] [Rank 89] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600038 milliseconds before timing out.
[rank79]:[E218 15:08:12.760853759 ProcessGroupNCCL.cpp:616] [Rank 79] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600031 milliseconds before timing out.
[rank40]:[E218 15:08:11.284080024 ProcessGroupNCCL.cpp:616] [Rank 40] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600007 milliseconds before timing out.
[rank79]:[E218 15:08:12.833471351 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 79] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank50]:[E218 15:08:12.510866545 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 50] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank78]:[E218 15:08:12.784192241 ProcessGroupNCCL.cpp:616] [Rank 78] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600093 milliseconds before timing out.
[rank78]:[E218 15:08:12.851187036 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 78] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank124]:[E218 15:08:11.357746138 ProcessGroupNCCL.cpp:616] [Rank 124] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600027 milliseconds before timing out.
[rank112]:[E218 15:08:12.782257805 ProcessGroupNCCL.cpp:616] [Rank 112] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600019 milliseconds before timing out.
[rank47]:[E218 15:08:11.143274930 ProcessGroupNCCL.cpp:616] [Rank 47] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600076 milliseconds before timing out.
[rank44]:[E218 15:08:11.254865946 ProcessGroupNCCL.cpp:616] [Rank 44] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600010 milliseconds before timing out.
[rank14]:[E218 15:08:11.794412876 ProcessGroupNCCL.cpp:616] [Rank 14] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600023 milliseconds before timing out.
[rank77]:[E218 15:08:11.679452190 ProcessGroupNCCL.cpp:616] [Rank 77] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600053 milliseconds before timing out.
[rank84]:[E218 15:08:11.514791443 ProcessGroupNCCL.cpp:616] [Rank 84] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600039 milliseconds before timing out.
[rank95]:[E218 15:08:12.894518066 ProcessGroupNCCL.cpp:616] [Rank 95] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600090 milliseconds before timing out.
[rank32]:[E218 15:08:12.270628394 ProcessGroupNCCL.cpp:616] [Rank 32] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600046 milliseconds before timing out.
[rank33]:[E218 15:08:11.166797970 ProcessGroupNCCL.cpp:616] [Rank 33] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600008 milliseconds before timing out.
[rank86]:[E218 15:08:11.515976848 ProcessGroupNCCL.cpp:616] [Rank 86] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600050 milliseconds before timing out.
[rank118]:[E218 15:08:12.801188487 ProcessGroupNCCL.cpp:616] [Rank 118] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600025 milliseconds before timing out.
[rank64]:[E218 15:08:11.515249683 ProcessGroupNCCL.cpp:616] [Rank 64] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600028 milliseconds before timing out.
[rank93]:[E218 15:08:11.808944114 ProcessGroupNCCL.cpp:616] [Rank 93] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600055 milliseconds before timing out.
[rank88]:[E218 15:08:11.803106691 ProcessGroupNCCL.cpp:616] [Rank 88] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600043 milliseconds before timing out.
[rank80]:[E218 15:08:11.529899222 ProcessGroupNCCL.cpp:616] [Rank 80] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600010 milliseconds before timing out.
[rank70]:[E218 15:08:11.519639275 ProcessGroupNCCL.cpp:616] [Rank 70] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600003 milliseconds before timing out.
[rank67]:[E218 15:08:11.524045098 ProcessGroupNCCL.cpp:616] [Rank 67] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600029 milliseconds before timing out.
[rank105]:[E218 15:08:11.609619742 ProcessGroupNCCL.cpp:616] [Rank 105] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600019 milliseconds before timing out.
[rank65]:[E218 15:08:11.527042194 ProcessGroupNCCL.cpp:616] [Rank 65] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600040 milliseconds before timing out.
[rank100]:[E218 15:08:11.702252229 ProcessGroupNCCL.cpp:616] [Rank 100] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600036 milliseconds before timing out.
[rank34]:[E218 15:08:12.231774913 ProcessGroupNCCL.cpp:616] [Rank 34] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600023 milliseconds before timing out.
[rank89]:[E218 15:08:12.024034481 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 89] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank51]:[E218 15:08:11.440715397 ProcessGroupNCCL.cpp:616] [Rank 51] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600068 milliseconds before timing out.
[rank71]:[E218 15:08:11.533883602 ProcessGroupNCCL.cpp:616] [Rank 71] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600034 milliseconds before timing out.
[rank61]:[E218 15:08:11.589879011 ProcessGroupNCCL.cpp:616] [Rank 61] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600072 milliseconds before timing out.
[rank113]:[E218 15:08:11.750788629 ProcessGroupNCCL.cpp:616] [Rank 113] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600028 milliseconds before timing out.
[rank24]:[E218 15:08:12.459261646 ProcessGroupNCCL.cpp:616] [Rank 24] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600092 milliseconds before timing out.
[rank68]:[E218 15:08:11.539524148 ProcessGroupNCCL.cpp:616] [Rank 68] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600034 milliseconds before timing out.
[rank53]:[E218 15:08:11.450319652 ProcessGroupNCCL.cpp:616] [Rank 53] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600053 milliseconds before timing out.
[rank69]:[E218 15:08:11.541042477 ProcessGroupNCCL.cpp:616] [Rank 69] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600032 milliseconds before timing out.
[rank18]:[E218 15:08:12.605340919 ProcessGroupNCCL.cpp:616] [Rank 18] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600063 milliseconds before timing out.
[rank27]:[E218 15:08:11.367213203 ProcessGroupNCCL.cpp:616] [Rank 27] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600002 milliseconds before timing out.
[rank96]:[E218 15:08:12.824465618 ProcessGroupNCCL.cpp:616] [Rank 96] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600081 milliseconds before timing out.
[rank122]:[E218 15:08:12.465299508 ProcessGroupNCCL.cpp:616] [Rank 122] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600072 milliseconds before timing out.
[rank37]:[E218 15:08:11.169304223 ProcessGroupNCCL.cpp:616] [Rank 37] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600009 milliseconds before timing out.
[rank52]:[E218 15:08:12.463056854 ProcessGroupNCCL.cpp:616] [Rank 52] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600035 milliseconds before timing out.
[rank120]:[E218 15:08:11.384319683 ProcessGroupNCCL.cpp:616] [Rank 120] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600023 milliseconds before timing out.
[rank102]:[E218 15:08:12.736968523 ProcessGroupNCCL.cpp:616] [Rank 102] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600003 milliseconds before timing out.
[rank26]:[E218 15:08:12.379641492 ProcessGroupNCCL.cpp:616] [Rank 26] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600002 milliseconds before timing out.
[rank35]:[E218 15:08:11.166793130 ProcessGroupNCCL.cpp:616] [Rank 35] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600030 milliseconds before timing out.
[rank30]:[E218 15:08:12.418535492 ProcessGroupNCCL.cpp:616] [Rank 30] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600086 milliseconds before timing out.
[rank47]:[E218 15:08:12.489623993 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 47] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank107]:[E218 15:08:11.600658646 ProcessGroupNCCL.cpp:616] [Rank 107] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600041 milliseconds before timing out.
[rank43]:[E218 15:08:11.300975924 ProcessGroupNCCL.cpp:616] [Rank 43] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600069 milliseconds before timing out.
[rank39]:[E218 15:08:12.254576917 ProcessGroupNCCL.cpp:616] [Rank 39] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600023 milliseconds before timing out.
[rank117]:[E218 15:08:12.790096068 ProcessGroupNCCL.cpp:616] [Rank 117] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600065 milliseconds before timing out.
[rank87]:[E218 15:08:12.598000652 ProcessGroupNCCL.cpp:616] [Rank 87] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600036 milliseconds before timing out.
[rank114]:[E218 15:08:11.739264552 ProcessGroupNCCL.cpp:616] [Rank 114] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600069 milliseconds before timing out.
[rank57]:[E218 15:08:12.630331688 ProcessGroupNCCL.cpp:616] [Rank 57] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600048 milliseconds before timing out.
[rank19]:[E218 15:08:12.616380708 ProcessGroupNCCL.cpp:616] [Rank 19] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600052 milliseconds before timing out.
[rank101]:[E218 15:08:11.702242579 ProcessGroupNCCL.cpp:616] [Rank 101] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600004 milliseconds before timing out.
[rank111]:[E218 15:08:11.597471430 ProcessGroupNCCL.cpp:616] [Rank 111] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600052 milliseconds before timing out.
[rank12]:[E218 15:08:12.087887824 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 12] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank58]:[E218 15:08:12.635482956 ProcessGroupNCCL.cpp:616] [Rank 58] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600030 milliseconds before timing out.
[rank115]:[E218 15:08:12.805657173 ProcessGroupNCCL.cpp:616] [Rank 115] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600051 milliseconds before timing out.
[rank85]:[E218 15:08:12.611838044 ProcessGroupNCCL.cpp:616] [Rank 85] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600030 milliseconds before timing out.
[rank94]:[E218 15:08:12.888854726 ProcessGroupNCCL.cpp:616] [Rank 94] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600025 milliseconds before timing out.
[rank110]:[E218 15:08:11.607988292 ProcessGroupNCCL.cpp:616] [Rank 110] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600039 milliseconds before timing out.
[rank56]:[E218 15:08:11.596110276 ProcessGroupNCCL.cpp:616] [Rank 56] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600092 milliseconds before timing out.
[rank15]:[E218 15:08:11.809146954 ProcessGroupNCCL.cpp:616] [Rank 15] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600012 milliseconds before timing out.
[rank90]:[E218 15:08:12.902790047 ProcessGroupNCCL.cpp:616] [Rank 90] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600097 milliseconds before timing out.
[rank14]:[E218 15:08:12.075226896 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 14] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank32]:[E218 15:08:12.487916049 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 32] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank109]:[E218 15:08:11.617420772 ProcessGroupNCCL.cpp:616] [Rank 109] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600013 milliseconds before timing out.
[rank44]:[E218 15:08:12.534335659 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 44] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank121]:[E218 15:08:11.416099884 ProcessGroupNCCL.cpp:616] [Rank 121] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600051 milliseconds before timing out.
[rank76]:[E218 15:08:11.737962346 ProcessGroupNCCL.cpp:616] [Rank 76] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600030 milliseconds before timing out.
[rank123]:[E218 15:08:12.507390406 ProcessGroupNCCL.cpp:616] [Rank 123] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600078 milliseconds before timing out.
[rank62]:[E218 15:08:12.679291675 ProcessGroupNCCL.cpp:616] [Rank 62] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600096 milliseconds before timing out.
[rank55]:[E218 15:08:11.438343471 ProcessGroupNCCL.cpp:616] [Rank 55] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600044 milliseconds before timing out.
[rank54]:[E218 15:08:11.442853819 ProcessGroupNCCL.cpp:616] [Rank 54] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600016 milliseconds before timing out.
[rank66]:[E218 15:08:12.632210157 ProcessGroupNCCL.cpp:616] [Rank 66] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600007 milliseconds before timing out.
[rank124]:[E218 15:08:12.663631118 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 124] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank84]:[E218 15:08:12.802354290 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 84] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank104]:[E218 15:08:12.710649104 ProcessGroupNCCL.cpp:616] [Rank 104] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600045 milliseconds before timing out.
[rank40]:[E218 15:08:12.567706033 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 40] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank88]:[E218 15:08:12.086546986 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 88] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank126]:[E218 15:08:12.505141086 ProcessGroupNCCL.cpp:616] [Rank 126] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600056 milliseconds before timing out.
[rank118]:[E218 15:08:12.036556017 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 118] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank106]:[E218 15:08:12.721921103 ProcessGroupNCCL.cpp:616] [Rank 106] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600099 milliseconds before timing out.
[rank91]:[E218 15:08:12.867316908 ProcessGroupNCCL.cpp:616] [Rank 91] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600084 milliseconds before timing out.
[rank103]:[E218 15:08:11.726260437 ProcessGroupNCCL.cpp:616] [Rank 103] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600044 milliseconds before timing out.
[rank127]:[E218 15:08:11.426092194 ProcessGroupNCCL.cpp:616] [Rank 127] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600035 milliseconds before timing out.
[rank38]:[E218 15:08:12.243116328 ProcessGroupNCCL.cpp:616] [Rank 38] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600017 milliseconds before timing out.
[rank125]:[E218 15:08:12.508899709 ProcessGroupNCCL.cpp:616] [Rank 125] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600086 milliseconds before timing out.
[rank82]:[E218 15:08:12.588273636 ProcessGroupNCCL.cpp:616] [Rank 82] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600039 milliseconds before timing out.
[rank42]:[E218 15:08:12.342887035 ProcessGroupNCCL.cpp:616] [Rank 42] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600096 milliseconds before timing out.
[rank74]:[E218 15:08:12.848068658 ProcessGroupNCCL.cpp:616] [Rank 74] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600069 milliseconds before timing out.
[rank86]:[E218 15:08:12.827832849 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 86] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank60]:[E218 15:08:12.709198449 ProcessGroupNCCL.cpp:616] [Rank 60] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600088 milliseconds before timing out.
[rank112]:[E218 15:08:12.055295456 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 112] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank41]:[E218 15:08:12.304197816 ProcessGroupNCCL.cpp:616] [Rank 41] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600065 milliseconds before timing out.
[rank33]:[E218 15:08:12.497016616 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 33] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank80]:[E218 15:08:12.837530304 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 80] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank48]:[E218 15:08:12.573803144 ProcessGroupNCCL.cpp:616] [Rank 48] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600077 milliseconds before timing out.
[rank99]:[E218 15:08:12.831141240 ProcessGroupNCCL.cpp:616] [Rank 99] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600058 milliseconds before timing out.
[rank34]:[E218 15:08:12.506640773 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 34] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank25]:[E218 15:08:12.466844858 ProcessGroupNCCL.cpp:616] [Rank 25] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600040 milliseconds before timing out.
[rank75]:[E218 15:08:12.771668816 ProcessGroupNCCL.cpp:616] [Rank 75] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600082 milliseconds before timing out.
[rank120]:[E218 15:08:12.706345290 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 120] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank93]:[E218 15:08:12.125228927 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 93] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank95]:[E218 15:08:12.159109315 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 95] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank70]:[E218 15:08:12.828148142 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 70] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank67]:[E218 15:08:12.828161492 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 67] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank64]:[E218 15:08:12.828155982 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 64] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank49]:[E218 15:08:12.568251380 ProcessGroupNCCL.cpp:616] [Rank 49] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600090 milliseconds before timing out.
[rank11]:[E218 15:08:12.888312208 ProcessGroupNCCL.cpp:616] [Rank 11] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600051 milliseconds before timing out.
[rank108]:[E218 15:08:12.687519378 ProcessGroupNCCL.cpp:616] [Rank 108] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600071 milliseconds before timing out.
[rank122]:[E218 15:08:12.714412918 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 122] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank114]:[E218 15:08:12.046868573 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 114] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank65]:[E218 15:08:12.833859369 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 65] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank8]:[E218 15:08:12.886718682 ProcessGroupNCCL.cpp:616] [Rank 8] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600039 milliseconds before timing out.
[rank35]:[E218 15:08:12.514891145 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 35] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank21]:[E218 15:08:12.712203639 ProcessGroupNCCL.cpp:616] [Rank 21] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600063 milliseconds before timing out.
[rank81]:[E218 15:08:12.611043997 ProcessGroupNCCL.cpp:616] [Rank 81] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600066 milliseconds before timing out.
[rank29]:[E218 15:08:12.478582518 ProcessGroupNCCL.cpp:616] [Rank 29] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600094 milliseconds before timing out.
[rank101]:[E218 15:08:12.020974360 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 101] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank119]:[E218 15:08:12.811305870 ProcessGroupNCCL.cpp:616] [Rank 119] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600022 milliseconds before timing out.
[rank63]:[E218 15:08:12.633639450 ProcessGroupNCCL.cpp:616] [Rank 63] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600064 milliseconds before timing out.
[rank9]:[E218 15:08:12.891455956 ProcessGroupNCCL.cpp:616] [Rank 9] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600074 milliseconds before timing out.
[rank51]:[E218 15:08:12.747826338 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 51] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank10]:[E218 15:08:12.897764493 ProcessGroupNCCL.cpp:616] [Rank 10] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600042 milliseconds before timing out.
[rank76]:[E218 15:08:12.042557058 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 76] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank105]:[E218 15:08:12.940819489 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 105] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank36]:[E218 15:08:12.267064922 ProcessGroupNCCL.cpp:616] [Rank 36] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600062 milliseconds before timing out.
[rank94]:[E218 15:08:12.159335149 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 94] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank100]:[E218 15:08:12.020995901 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 100] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank83]:[E218 15:08:12.716103008 ProcessGroupNCCL.cpp:616] [Rank 83] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600088 milliseconds before timing out.
[rank23]:[E218 15:08:12.719929480 ProcessGroupNCCL.cpp:616] [Rank 23] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600061 milliseconds before timing out.
[rank111]:[E218 15:08:12.946865893 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 111] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank71]:[E218 15:08:12.851872547 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 71] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank113]:[E218 15:08:12.066744701 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 113] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank117]:[E218 15:08:12.100449076 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 117] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank53]:[E218 15:08:12.761063008 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 53] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank37]:[E218 15:08:12.514459038 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 37] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank72]:[E218 15:08:12.871283252 ProcessGroupNCCL.cpp:616] [Rank 72] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600056 milliseconds before timing out.
[rank92]:[E218 15:08:12.900515062 ProcessGroupNCCL.cpp:616] [Rank 92] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600042 milliseconds before timing out.
[rank68]:[E218 15:08:12.859018561 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 68] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank107]:[E218 15:08:12.940822989 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 107] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank98]:[E218 15:08:12.801757444 ProcessGroupNCCL.cpp:616] [Rank 98] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600065 milliseconds before timing out.
[rank56]:[E218 15:08:12.913525710 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 56] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank69]:[E218 15:08:12.863024066 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 69] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank61]:[E218 15:08:12.913525670 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 61] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank38]:[E218 15:08:12.545390623 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 38] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank102]:[E218 15:08:12.039213831 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 102] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank27]:[E218 15:08:12.688503978 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 27] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank110]:[E218 15:08:12.961792201 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 110] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank30]:[E218 15:08:12.692294804 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 30] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank46]:[E218 15:08:12.427045212 ProcessGroupNCCL.cpp:616] [Rank 46] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600060 milliseconds before timing out.
[rank59]:[E218 15:08:12.646547451 ProcessGroupNCCL.cpp:616] [Rank 59] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600093 milliseconds before timing out.
[rank77]:[E218 15:08:12.070485872 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 77] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank66]:[E218 15:08:12.877092060 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 66] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank116]:[E218 15:08:12.905629226 ProcessGroupNCCL.cpp:616] [Rank 116] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600070 milliseconds before timing out.
[rank55]:[E218 15:08:12.782283798 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 55] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank16]:[E218 15:08:12.785303195 ProcessGroupNCCL.cpp:616] [Rank 16] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600063 milliseconds before timing out.
[rank15]:[E218 15:08:12.167823424 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 15] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank26]:[E218 15:08:12.710258659 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 26] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank39]:[E218 15:08:12.570728950 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 39] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank108]:[E218 15:08:12.989199286 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 108] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank43]:[E218 15:08:12.640513058 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 43] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank87]:[E218 15:08:12.910265911 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 87] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank82]:[E218 15:08:12.883214239 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 82] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank91]:[E218 15:08:12.194525223 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 91] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank121]:[E218 15:08:12.757744783 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 121] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank54]:[E218 15:08:12.780936118 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 54] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank57]:[E218 15:08:12.946511291 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 57] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank8]:[E218 15:08:12.201334291 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 8] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank24]:[E218 15:08:12.754354376 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 24] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank127]:[E218 15:08:12.779915061 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 127] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank109]:[E218 15:08:12.973216587 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 109] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank58]:[E218 15:08:12.953413512 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 58] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank62]:[E218 15:08:12.954845508 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 62] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank42]:[E218 15:08:12.641139919 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 42] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank22]:[E218 15:08:12.799246860 ProcessGroupNCCL.cpp:616] [Rank 22] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600063 milliseconds before timing out.
[rank115]:[E218 15:08:12.117273432 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 115] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank97]:[E218 15:08:12.830087981 ProcessGroupNCCL.cpp:616] [Rank 97] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600086 milliseconds before timing out.
[rank9]:[E218 15:08:12.209504703 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 9] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank126]:[E218 15:08:12.788533381 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 126] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank96]:[E218 15:08:12.116983043 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 96] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank60]:[E218 15:08:12.959152825 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 60] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank17]:[E218 15:08:12.705705729 ProcessGroupNCCL.cpp:616] [Rank 17] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600059 milliseconds before timing out.
[rank85]:[E218 15:08:12.929967097 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 85] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank106]:[E218 15:08:12.042848330 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 106] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank41]:[E218 15:08:12.664969930 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 41] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank52]:[E218 15:08:12.785314024 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 52] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank103]:[E218 15:08:12.072959104 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 103] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank48]:[E218 15:08:12.820941281 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 48] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank18]:[E218 15:08:12.962083809 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 18] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank74]:[E218 15:08:12.116996105 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 74] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank123]:[E218 15:08:12.801825996 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 123] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank125]:[E218 15:08:12.799435353 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 125] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank75]:[E218 15:08:12.119984775 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 75] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank90]:[E218 15:08:12.223283851 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 90] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank104]:[E218 15:08:12.029667057 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 104] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank20]:[E218 15:08:12.726368936 ProcessGroupNCCL.cpp:616] [Rank 20] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600058 milliseconds before timing out.
[rank119]:[E218 15:08:12.150444927 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 119] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank19]:[E218 15:08:12.926739665 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 19] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank11]:[E218 15:08:12.224567154 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 11] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank16]:[E218 15:08:12.015802290 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 16] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank63]:[E218 15:08:12.993091479 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 63] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank83]:[E218 15:08:12.962109887 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 83] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank72]:[E218 15:08:12.154839878 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 72] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank92]:[E218 15:08:12.254855414 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 92] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank98]:[E218 15:08:12.132091058 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 98] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank10]:[E218 15:08:12.242866229 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 10] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank81]:[E218 15:08:12.956331525 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 81] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank116]:[E218 15:08:12.186173206 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 116] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank46]:[E218 15:08:12.724128447 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 46] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank99]:[E218 15:08:12.144806999 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 99] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank36]:[E218 15:08:12.626523295 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 36] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank29]:[E218 15:08:12.788132953 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 29] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank49]:[E218 15:08:12.875875239 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 49] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank59]:[E218 15:08:12.024597393 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 59] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank25]:[E218 15:08:12.800393903 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 25] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank21]:[E218 15:08:12.027302616 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 21] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank23]:[E218 15:08:12.039356433 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 23] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank20]:[E218 15:08:12.042083008 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 20] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank22]:[E218 15:08:12.045295991 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 22] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank97]:[E218 15:08:12.177547603 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 97] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank17]:[E218 15:08:12.048589619 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 17] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank31]:[E218 15:08:14.942412459 ProcessGroupNCCL.cpp:616] [Rank 31] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600041 milliseconds before timing out.
[rank31]:[E218 15:08:14.254680649 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 31] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank73]:[E218 15:08:15.181598415 ProcessGroupNCCL.cpp:616] [Rank 73] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600033 milliseconds before timing out.
[rank73]:[E218 15:08:15.496723006 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 73] Exception (either an error or timeout) detected by watchdog at work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
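Every rank reporting above is timing out on the same collective: allreduce SeqNum=159205 (495,229,180 elements, roughly 1.8 GiB at fp32) hit the Timeout(ms)=600000 limit, i.e. the 10-minute default NCCL process-group timeout, while work 159208 was already enqueued behind it. If collectives of this size can legitimately run that long (stragglers, checkpoint stalls, slow interconnect), one mitigation is to raise the timeout when the process group is created. A minimal sketch, assuming the training script calls init_process_group itself:

```python
# Minimal sketch, not the job's actual code: raise the NCCL collective
# timeout at process-group creation. The Timeout(ms)=600000 in the log
# matches torch.distributed's default of 10 minutes.
from datetime import timedelta

import torch.distributed as dist

dist.init_process_group(
    backend="nccl",
    timeout=timedelta(minutes=30),  # illustrative value; size to the slowest expected collective
)
```

Note that a larger timeout only delays the failure if a rank is genuinely stuck; it helps only when the collective would eventually complete.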
[rank63]:[E218 15:08:20.118882385 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 63] Timeout at NCCL work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank61]:[E218 15:08:20.118882895 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 61] Timeout at NCCL work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank63]:[E218 15:08:20.118910986 ProcessGroupNCCL.cpp:630] [Rank 63] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank61]:[E218 15:08:20.118914136 ProcessGroupNCCL.cpp:630] [Rank 61] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank63]:[E218 15:08:20.118919656 ProcessGroupNCCL.cpp:636] [Rank 63] To avoid data inconsistency, we are taking the entire process down.
[rank61]:[E218 15:08:20.118922956 ProcessGroupNCCL.cpp:636] [Rank 61] To avoid data inconsistency, we are taking the entire process down.
[rank125]:[E218 15:08:20.969243403 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 125] Timeout at NCCL work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank127]:[E218 15:08:20.969252804 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 127] Timeout at NCCL work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank125]:[E218 15:08:20.969277284 ProcessGroupNCCL.cpp:630] [Rank 125] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank125]:[E218 15:08:20.969285454 ProcessGroupNCCL.cpp:636] [Rank 125] To avoid data inconsistency, we are taking the entire process down.
[rank127]:[E218 15:08:20.969283664 ProcessGroupNCCL.cpp:630] [Rank 127] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank127]:[E218 15:08:20.969291904 ProcessGroupNCCL.cpp:636] [Rank 127] To avoid data inconsistency, we are taking the entire process down.
[rank57]:[E218 15:08:20.162113575 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 57] Timeout at NCCL work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank57]:[E218 15:08:20.162143086 ProcessGroupNCCL.cpp:630] [Rank 57] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank57]:[E218 15:08:20.162149486 ProcessGroupNCCL.cpp:636] [Rank 57] To avoid data inconsistency, we are taking the entire process down.
[rank121]:[E218 15:08:20.008674579 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 121] Timeout at NCCL work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank121]:[E218 15:08:20.008715860 ProcessGroupNCCL.cpp:630] [Rank 121] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank121]:[E218 15:08:20.008723860 ProcessGroupNCCL.cpp:636] [Rank 121] To avoid data inconsistency, we are taking the entire process down.
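The three-line blocks above (ProcessGroupNCCL.cpp:1834, :630, :636) are the watchdog escalating from detecting the timed-out work item to tearing the whole process down, so that no rank keeps computing on a desynchronized state. To get more diagnostic output than this on a future occurrence, recent PyTorch builds expose TORCH_NCCL_* debug switches that must be set before the process group is created; a hedged sketch (names are from recent PyTorch releases, and availability and defaults vary with the installed torch version):

```python
# Hypothetical debug setup; set these before init_process_group. All three
# variables are from the TORCH_NCCL_* family in recent PyTorch releases.
import os

os.environ["TORCH_NCCL_DESYNC_DEBUG"] = "1"          # on timeout, report which ranks desynchronized
os.environ["TORCH_NCCL_TRACE_BUFFER_SIZE"] = "2000"  # keep a ring buffer of recent collectives
os.environ["TORCH_NCCL_DUMP_ON_TIMEOUT"] = "1"       # dump that buffer when the watchdog fires
```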
[rank63]:[E218 15:08:20.186092538 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 63] Process group watchdog thread terminated with exception: [Rank 63] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600064 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x74bf19db9446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x74becf02a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x74becf031bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x74becf03361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x74bf1a8665c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x74bf1e694ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x74bf1e726850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank61]:[E218 15:08:20.186088618 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 61] Process group watchdog thread terminated with exception: [Rank 61] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600072 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x712adc793446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x712a91a2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x712a91a31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x712a91a3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x712adc8ee5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x712ae1094ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x712ae1126850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank57]:[E218 15:08:20.186093498 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 57] Process group watchdog thread terminated with exception: [Rank 57] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600048 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x791c65350446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x791c1a62a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x791c1a631bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x791c1a63361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x791c654ab5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x791c69c94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x791c69d26850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
terminate called after throwing an instance of 'c10::DistBackendError'
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 61] Process group watchdog thread terminated with exception: [Rank 61] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600072 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x712adc793446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x712a91a2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x712a91a31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x712a91a3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x712adc8ee5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x712ae1094ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x712ae1126850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x712adc793446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x712a916a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x712adc8ee5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x712ae1094ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x712ae1126850 in /lib/x86_64-linux-gnu/libc.so.6)
what():  [PG ID 1 PG GUID 1 Rank 57] Process group watchdog thread terminated with exception: [Rank 57] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600048 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x791c65350446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x791c1a62a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x791c1a631bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x791c1a63361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x791c654ab5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x791c69c94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x791c69d26850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x791c65350446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x791c1a2a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x791c654ab5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x791c69c94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x791c69d26850 in /lib/x86_64-linux-gnu/libc.so.6)
what():  [PG ID 1 PG GUID 1 Rank 63] Process group watchdog thread terminated with exception: [Rank 63] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600064 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x74bf19db9446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x74becf02a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x74becf031bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x74becf03361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x74bf1a8665c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x74bf1e694ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x74bf1e726850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x74bf19db9446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x74bececa071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x74bf1a8665c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x74bf1e694ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x74bf1e726850 in /lib/x86_64-linux-gnu/libc.so.6)
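The repeated `terminate called after throwing an instance of 'c10::DistBackendError'` lines are the direct consequence of the teardown above: the error is thrown on the C++ watchdog thread, where no Python except clause can catch it, so std::terminate aborts the worker and SLURM sees a failed step. Recovery therefore happens at relaunch time, not inside the dying process. A resume-from-checkpoint helper is one common pattern; the sketch below is illustrative only (the checkpoint directory and naming scheme are assumptions, not taken from this job):

```python
# Illustrative resume-on-restart helper, not from the original job script.
# Assumes periodic checkpoints written as checkpoints/step_<N>.pt.
import glob
import os

def latest_checkpoint(ckpt_dir: str = "checkpoints"):
    """Return the most recently written checkpoint path, or None."""
    ckpts = glob.glob(os.path.join(ckpt_dir, "step_*.pt"))
    return max(ckpts, default=None, key=os.path.getmtime)
```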
[rank127]:[E218 15:08:20.028067730 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 127] Process group watchdog thread terminated with exception: [Rank 127] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600035 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7c569c0e7446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7c565142a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7c5651431bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7c565143361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7c569c8735c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7c56a0a94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7c56a0b26850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank121]:[E218 15:08:20.028078670 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 121] Process group watchdog thread terminated with exception: [Rank 121] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600051 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7c2cb6f6c446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7c2c6c62a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7c2c6c631bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7c2c6c63361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7c2cb7c5c5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7c2cbbc94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7c2cbbd26850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank10]:[E218 15:08:20.449732576 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 10] Timeout at NCCL work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank8]:[E218 15:08:20.449737997 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 8] Timeout at NCCL work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank8]:[E218 15:08:20.449759268 ProcessGroupNCCL.cpp:630] [Rank 8] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank10]:[E218 15:08:20.449759658 ProcessGroupNCCL.cpp:630] [Rank 10] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank8]:[E218 15:08:20.449766259 ProcessGroupNCCL.cpp:636] [Rank 8] To avoid data inconsistency, we are taking the entire process down.
[rank10]:[E218 15:08:20.449766339 ProcessGroupNCCL.cpp:636] [Rank 10] To avoid data inconsistency, we are taking the entire process down.
terminate called after throwing an instance of 'c10::DistBackendError'
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 127] Process group watchdog thread terminated with exception: [Rank 127] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600035 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7c569c0e7446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7c565142a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7c5651431bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7c565143361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7c569c8735c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7c56a0a94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7c56a0b26850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7c569c0e7446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x7c56510a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7c569c8735c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x7c56a0a94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x7c56a0b26850 in /lib/x86_64-linux-gnu/libc.so.6)
what(): [PG ID 1 PG GUID 1 Rank 121] Process group watchdog thread terminated with exception: [Rank 121] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600051 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7c2cb6f6c446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7c2c6c62a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7c2c6c631bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7c2c6c63361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7c2cb7c5c5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7c2cbbc94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7c2cbbd26850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7c2cb6f6c446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x7c2c6c2a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7c2cb7c5c5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x7c2cbbc94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x7c2cbbd26850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank125]:[E218 15:08:20.041683082 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 125] Process group watchdog thread terminated with exception: [Rank 125] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600086 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7f52ef2db446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7f52a462a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7f52a4631bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7f52a463361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7f52efe585c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7f52f3c94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7f52f3d26850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 125] Process group watchdog thread terminated with exception: [Rank 125] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600086 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7f52ef2db446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7f52a462a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7f52a4631bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7f52a463361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7f52efe585c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7f52f3c94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7f52f3d26850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7f52ef2db446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x7f52a42a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7f52efe585c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x7f52f3c94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x7f52f3d26850 in /lib/x86_64-linux-gnu/libc.so.6)
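One triage heuristic for this failure mode: a hung collective times out on every healthy rank, so the rank that is actually wedged is often the one that never prints a watchdog timeout at all. A small script can diff the ranks seen in this log against the expected world size; the log file name and the world size of 128 below are assumptions, not facts stated in the log itself:

```python
# Hypothetical triage helper: list ranks that never reported the timeout,
# which are frequently the ones stuck inside the collective.
import re

def silent_ranks(log_path: str, world_size: int = 128) -> list[int]:
    pat = re.compile(r"\[Rank (\d+)\] Watchdog caught collective operation timeout")
    with open(log_path) as f:
        seen = {int(m.group(1)) for line in f if (m := pat.search(line))}
    return sorted(set(range(world_size)) - seen)

print(silent_ranks("slurm-336337.out"))  # file name is illustrative
```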
[rank14]:[E218 15:08:20.466306177 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 14] Timeout at NCCL work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank14]:[E218 15:08:20.466336449 ProcessGroupNCCL.cpp:630] [Rank 14] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank14]:[E218 15:08:20.466344670 ProcessGroupNCCL.cpp:636] [Rank 14] To avoid data inconsistency, we are taking the entire process down.
[rank12]:[E218 15:08:20.495050579 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 12] Timeout at NCCL work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank12]:[E218 15:08:20.495072461 ProcessGroupNCCL.cpp:630] [Rank 12] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank12]:[E218 15:08:20.495079331 ProcessGroupNCCL.cpp:636] [Rank 12] To avoid data inconsistency, we are taking the entire process down.
[rank93]:[E218 15:08:20.495168856 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 93] Timeout at NCCL work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank93]:[E218 15:08:20.495203367 ProcessGroupNCCL.cpp:630] [Rank 93] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank93]:[E218 15:08:20.495211727 ProcessGroupNCCL.cpp:636] [Rank 93] To avoid data inconsistency, we are taking the entire process down.
[rank123]:[E218 15:08:20.079890701 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 123] Timeout at NCCL work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank123]:[E218 15:08:20.079918831 ProcessGroupNCCL.cpp:630] [Rank 123] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank123]:[E218 15:08:20.079925931 ProcessGroupNCCL.cpp:636] [Rank 123] To avoid data inconsistency, we are taking the entire process down.
[rank123]:[E218 15:08:20.081858084 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 123] Process group watchdog thread terminated with exception: [Rank 123] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600078 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7230c9576446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x72307e82a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x72307e831bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x72307e83361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7230c9c585c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7230cde94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7230cdf26850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 123] Process group watchdog thread terminated with exception: [Rank 123] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600078 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7230c9576446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x72307e82a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x72307e831bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x72307e83361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7230c9c585c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7230cde94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7230cdf26850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7230c9576446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x72307e4a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7230c9c585c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x7230cde94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x7230cdf26850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank10]:[E218 15:08:20.505607573 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 10] Process group watchdog thread terminated with exception: [Rank 10] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600042 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7936e4793446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x793699a2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x793699a31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x793699a3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7936e48ee5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7936e9094ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7936e9126850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank8]:[E218 15:08:20.505608994 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 8] Process group watchdog thread terminated with exception: [Rank 8] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600039 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7137ebadb446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7137a0e2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7137a0e31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7137a0e3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7137ec6525c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7137f0494ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7137f0526850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 10] Process group watchdog thread terminated with exception: [Rank 10] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600042 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7936e4793446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x793699a2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x793699a31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x793699a3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7936e48ee5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7936e9094ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7936e9126850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7936e4793446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x7936996a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7936e48ee5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x7936e9094ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x7936e9126850 in /lib/x86_64-linux-gnu/libc.so.6)
what(): [PG ID 1 PG GUID 1 Rank 8] Process group watchdog thread terminated with exception: [Rank 8] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600039 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7137ebadb446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7137a0e2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7137a0e31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7137a0e3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7137ec6525c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7137f0494ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7137f0526850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7137ebadb446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x7137a0aa071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7137ec6525c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x7137f0494ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x7137f0526850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank122]:[E218 15:08:20.088578223 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 122] Timeout at NCCL work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank122]:[E218 15:08:20.088602444 ProcessGroupNCCL.cpp:630] [Rank 122] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank122]:[E218 15:08:20.088609564 ProcessGroupNCCL.cpp:636] [Rank 122] To avoid data inconsistency, we are taking the entire process down.
[rank120]:[E218 15:08:20.088822179 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 120] Timeout at NCCL work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank120]:[E218 15:08:20.088849080 ProcessGroupNCCL.cpp:630] [Rank 120] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank120]:[E218 15:08:20.088855189 ProcessGroupNCCL.cpp:636] [Rank 120] To avoid data inconsistency, we are taking the entire process down.
[rank120]:[E218 15:08:20.090840134 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 120] Process group watchdog thread terminated with exception: [Rank 120] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600023 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7271138e7446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7270c8c2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7270c8c31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7270c8c3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7271144585c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x727118294ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x727118326850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 120] Process group watchdog thread terminated with exception: [Rank 120] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600023 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7271138e7446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7270c8c2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7270c8c31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7270c8c3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7271144585c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x727118294ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x727118326850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7271138e7446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x7270c88a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7271144585c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x727118294ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x727118326850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank14]:[E218 15:08:20.518744135 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 14] Process group watchdog thread terminated with exception: [Rank 14] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600023 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7b807152a446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7b802682a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7b8026831bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7b802683361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7b8071c735c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7b8075e94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7b8075f26850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 14] Process group watchdog thread terminated with exception: [Rank 14] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600023 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7b807152a446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7b802682a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7b8026831bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7b802683361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7b8071c735c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7b8075e94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7b8075f26850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7b807152a446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x7b80264a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7b8071c735c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x7b8075e94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x7b8075f26850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank122]:[E218 15:08:20.099983647 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 122] Process group watchdog thread terminated with exception: [Rank 122] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600072 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x71cb9df2a446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x71cb5322a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x71cb53231bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x71cb5323361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x71cb9e6735c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x71cba2894ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x71cba2926850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 122] Process group watchdog thread terminated with exception: [Rank 122] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600072 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x71cb9df2a446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x71cb5322a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x71cb53231bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x71cb5323361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x71cb9e6735c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x71cba2894ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x71cba2926850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x71cb9df2a446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x71cb52ea071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x71cb9e6735c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x71cba2894ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x71cba2926850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank12]:[E218 15:08:20.545640083 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 12] Process group watchdog thread terminated with exception: [Rank 12] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600015 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7f45e26bc446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7f4597a2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7f4597a31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7f4597a3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7f45e28175c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7f45e7094ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7f45e7126850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 12] Process group watchdog thread terminated with exception: [Rank 12] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600015 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7f45e26bc446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7f4597a2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7f4597a31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7f4597a3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7f45e28175c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7f45e7094ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7f45e7126850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7f45e26bc446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x7f45976a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7f45e28175c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x7f45e7094ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x7f45e7126850 in /lib/x86_64-linux-gnu/libc.so.6)
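Every trace above reports the same stuck collective: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180) hitting the default 10-minute watchdog limit (Timeout(ms)=600000). If the allreduce is merely slow (e.g. a straggler node) rather than truly hung, one common mitigation is to raise the process-group timeout at initialization. A minimal sketch, assuming a standard torch.distributed NCCL setup; the 30-minute value is illustrative, not taken from this job:

    # Sketch: raise the NCCL watchdog timeout above the default
    # 600000 ms seen in this log. Must run before any collectives.
    from datetime import timedelta
    import torch.distributed as dist

    dist.init_process_group(
        backend="nccl",
        timeout=timedelta(minutes=30),  # watchdog fires after 30 min instead of 10
    )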
[rank91]:[E218 15:08:20.552192072 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 91] Timeout at NCCL work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank91]:[E218 15:08:20.552219702 ProcessGroupNCCL.cpp:630] [Rank 91] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank91]:[E218 15:08:20.552224612 ProcessGroupNCCL.cpp:636] [Rank 91] To avoid data inconsistency, we are taking the entire process down.
[rank93]:[E218 15:08:20.552673641 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 93] Process group watchdog thread terminated with exception: [Rank 93] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600055 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7d20cdb50446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7d2082e2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7d2082e31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7d2082e3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7d20cdcab5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7d20d2494ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7d20d2526850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
[rank91]:[E218 15:08:20.554086578 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 91] Process group watchdog thread terminated with exception: [Rank 91] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600084 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x77f66ab50446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x77f61fe2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x77f61fe31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x77f61fe3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x77f66acab5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x77f66f494ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x77f66f526850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 93] Process group watchdog thread terminated with exception: [Rank 93] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600055 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7d20cdb50446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7d2082e2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7d2082e31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7d2082e3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7d20cdcab5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7d20d2494ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7d20d2526850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7d20cdb50446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x7d2082aa071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7d20cdcab5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x7d20d2494ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x7d20d2526850 in /lib/x86_64-linux-gnu/libc.so.6)
what(): [PG ID 1 PG GUID 1 Rank 91] Process group watchdog thread terminated with exception: [Rank 91] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600084 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x77f66ab50446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x77f61fe2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x77f61fe31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x77f61fe3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x77f66acab5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x77f66f494ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x77f66f526850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x77f66ab50446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x77f61faa071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x77f66acab5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x77f66f494ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x77f66f526850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank59]:[E218 15:08:20.333368142 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 59] Timeout at NCCL work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank59]:[E218 15:08:20.333390972 ProcessGroupNCCL.cpp:630] [Rank 59] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank59]:[E218 15:08:20.333398052 ProcessGroupNCCL.cpp:636] [Rank 59] To avoid data inconsistency, we are taking the entire process down.
[rank59]:[E218 15:08:20.335343631 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 59] Process group watchdog thread terminated with exception: [Rank 59] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600093 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7a5d056e7446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7a5cbaa2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7a5cbaa31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7a5cbaa3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7a5d062585c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7a5d0a094ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7a5d0a126850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 59] Process group watchdog thread terminated with exception: [Rank 59] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600093 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7a5d056e7446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7a5cbaa2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7a5cbaa31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7a5cbaa3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7a5d062585c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7a5d0a094ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7a5d0a126850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7a5d056e7446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x7a5cba6a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7a5d062585c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x7a5d0a094ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x7a5d0a126850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank56]:[E218 15:08:20.336582621 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 56] Timeout at NCCL work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank56]:[E218 15:08:20.336606722 ProcessGroupNCCL.cpp:630] [Rank 56] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank56]:[E218 15:08:20.336613802 ProcessGroupNCCL.cpp:636] [Rank 56] To avoid data inconsistency, we are taking the entire process down.
[rank56]:[E218 15:08:20.338443657 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 56] Process group watchdog thread terminated with exception: [Rank 56] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600092 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7cfe2156c446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7cfdd6c2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7cfdd6c31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7cfdd6c3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7cfe2225c5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7cfe26294ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7cfe26326850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 56] Process group watchdog thread terminated with exception: [Rank 56] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600092 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7cfe2156c446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7cfdd6c2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7cfdd6c31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7cfdd6c3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7cfe2225c5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7cfe26294ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7cfe26326850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7cfe2156c446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x7cfdd68a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7cfe2225c5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x7cfe26294ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x7cfe26326850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank58]:[E218 15:08:20.339092633 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 58] Timeout at NCCL work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank58]:[E218 15:08:20.339122684 ProcessGroupNCCL.cpp:630] [Rank 58] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank58]:[E218 15:08:20.339129844 ProcessGroupNCCL.cpp:636] [Rank 58] To avoid data inconsistency, we are taking the entire process down.
[rank60]:[E218 15:08:20.339306939 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 60] Timeout at NCCL work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank60]:[E218 15:08:20.339323729 ProcessGroupNCCL.cpp:630] [Rank 60] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank60]:[E218 15:08:20.339328099 ProcessGroupNCCL.cpp:636] [Rank 60] To avoid data inconsistency, we are taking the entire process down.
[rank95]:[E218 15:08:20.589379642 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 95] Timeout at NCCL work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank95]:[E218 15:08:20.589411523 ProcessGroupNCCL.cpp:630] [Rank 95] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank95]:[E218 15:08:20.589431563 ProcessGroupNCCL.cpp:636] [Rank 95] To avoid data inconsistency, we are taking the entire process down.
[rank62]:[E218 15:08:20.339865412 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 62] Timeout at NCCL work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank62]:[E218 15:08:20.339883503 ProcessGroupNCCL.cpp:630] [Rank 62] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank62]:[E218 15:08:20.339889543 ProcessGroupNCCL.cpp:636] [Rank 62] To avoid data inconsistency, we are taking the entire process down.
[rank58]:[E218 15:08:20.341168265 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 58] Process group watchdog thread terminated with exception: [Rank 58] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600030 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7e1de9d6c446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7e1d9f42a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7e1d9f431bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7e1d9f43361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7e1dea15e5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7e1dee894ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7e1dee926850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank60]:[E218 15:08:20.341167675 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 60] Process group watchdog thread terminated with exception: [Rank 60] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600088 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x70a9aed2a446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x70a96402a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x70a964031bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x70a96403361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x70a9af46d5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x70a9b3694ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x70a9b3726850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 58] Process group watchdog thread terminated with exception: [Rank 58] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600030 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7e1de9d6c446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7e1d9f42a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7e1d9f431bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7e1d9f43361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7e1dea15e5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7e1dee894ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7e1dee926850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7e1de9d6c446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x7e1d9f0a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7e1dea15e5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x7e1dee894ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x7e1dee926850 in /lib/x86_64-linux-gnu/libc.so.6)
what(): [PG ID 1 PG GUID 1 Rank 60] Process group watchdog thread terminated with exception: [Rank 60] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600088 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x70a9aed2a446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x70a96402a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x70a964031bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x70a96403361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x70a9af46d5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x70a9b3694ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x70a9b3726850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x70a9aed2a446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x70a963ca071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x70a9af46d5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x70a9b3694ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x70a9b3726850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank62]:[E218 15:08:20.352223108 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 62] Process group watchdog thread terminated with exception: [Rank 62] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600096 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x788ae76bc446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x788a9ca2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x788a9ca31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x788a9ca3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x788ae78175c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x788aec094ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x788aec126850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 62] Process group watchdog thread terminated with exception: [Rank 62] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600096 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x788ae76bc446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x788a9ca2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x788a9ca31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x788a9ca3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x788ae78175c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x788aec094ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x788aec126850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x788ae76bc446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x788a9c6a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x788ae78175c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x788aec094ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x788aec126850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank126]:[E218 15:08:20.190461656 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 126] Timeout at NCCL work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank126]:[E218 15:08:20.190487067 ProcessGroupNCCL.cpp:630] [Rank 126] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank126]:[E218 15:08:20.190492707 ProcessGroupNCCL.cpp:636] [Rank 126] To avoid data inconsistency, we are taking the entire process down.
[rank124]:[E218 15:08:20.191642843 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 124] Timeout at NCCL work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank124]:[E218 15:08:20.191671513 ProcessGroupNCCL.cpp:630] [Rank 124] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank124]:[E218 15:08:20.191678893 ProcessGroupNCCL.cpp:636] [Rank 124] To avoid data inconsistency, we are taking the entire process down.
[rank124]:[E218 15:08:20.193621706 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 124] Process group watchdog thread terminated with exception: [Rank 124] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600027 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7610a9593446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x76105e82a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x76105e831bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x76105e83361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7610a96ee5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7610ade94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7610adf26850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 124] Process group watchdog thread terminated with exception: [Rank 124] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600027 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7610a9593446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x76105e82a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x76105e831bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x76105e83361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7610a96ee5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7610ade94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7610adf26850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7610a9593446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x76105e4a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7610a96ee5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x7610ade94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x7610adf26850 in /lib/x86_64-linux-gnu/libc.so.6)
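Each rank also logs "last enqueued NCCL work: 159208, last completed NCCL work: 159204", i.e. every rank shown here is waiting on the same allreduce (159205). That pattern usually points at a rank that never entered the collective (crash, OOM, or a network fault) rather than a uniformly slow operation. A hedged sketch of diagnostics one could enable before init_process_group; these are standard NCCL/PyTorch environment variables, and the values are illustrative, not taken from this job:

    # Sketch: extra visibility into a hang like this one.
    # Must be set before init_process_group; values are illustrative.
    import os
    os.environ["NCCL_DEBUG"] = "INFO"                 # NCCL logs communicator setup and transport errors
    os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL"  # torch checks for mismatched collectives across ranks
    os.environ["TORCH_NCCL_BLOCKING_WAIT"] = "1"      # the stuck call itself raises, easing attribution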
[rank89]:[E218 15:08:20.618077719 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 89] Timeout at NCCL work: 159205, last enqueued NCCL work: 159208, last completed NCCL work: 159204.
[rank89]:[E218 15:08:20.618105979 ProcessGroupNCCL.cpp:630] [Rank 89] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank89]:[E218 15:08:20.618110909 ProcessGroupNCCL.cpp:636] [Rank 89] To avoid data inconsistency, we are taking the entire process down.
[rank126]:[E218 15:08:20.203521566 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 126] Process group watchdog thread terminated with exception: [Rank 126] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=159205, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600056 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7cda807b9446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7cda35a2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7cda35a31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7cda35a3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: