2024-09-05 22:43:35.221687: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-09-05 22:43:35.239699: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-09-05 22:43:35.261393: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-09-05 22:43:35.267947: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-09-05 22:43:35.283519: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-09-05 22:43:36.568969: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
/usr/local/lib/python3.10/dist-packages/transformers/training_args.py:1525: FutureWarning: `evaluation_strategy` is deprecated and will be removed in version 4.46 of 🤗 Transformers. Use `eval_strategy` instead
  warnings.warn(
09/05/2024 22:43:38 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: True, 16-bits training: False
09/05/2024 22:43:38 - INFO - __main__ - Training/evaluation parameters TrainingArguments(
_n_gpu=1,
accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False},
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
auto_find_batch_size=False,
batch_eval_metrics=False,
bf16=False,
bf16_full_eval=False,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_persistent_workers=False,
dataloader_pin_memory=True,
dataloader_prefetch_factor=None,
ddp_backend=None,
ddp_broadcast_buffers=None,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=None,
ddp_timeout=1800,
debug=[],
deepspeed=None,
disable_tqdm=False,
dispatch_batches=None,
do_eval=True,
do_predict=True,
do_train=True,
eval_accumulation_steps=None,
eval_delay=0,
eval_do_concat_batches=True,
eval_on_start=False,
eval_steps=None,
eval_strategy=epoch,
eval_use_gather_object=False,
evaluation_strategy=epoch,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
fsdp=[],
fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False},
fsdp_min_num_params=0,
fsdp_transformer_layer_cls_to_wrap=None,
full_determinism=False,
gradient_accumulation_steps=2,
gradient_checkpointing=False,
gradient_checkpointing_kwargs=None,
greater_is_better=True,
group_by_length=False,
half_precision_backend=auto,
hub_always_push=False,
hub_model_id=None,
hub_private_repo=False,
hub_strategy=every_save,
hub_token=,
ignore_data_skip=False,
include_inputs_for_metrics=False,
include_num_input_tokens_seen=False,
include_tokens_per_second=False,
jit_mode_eval=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=5e-05,
length_column_name=length,
load_best_model_at_end=True,
local_rank=0,
log_level=passive,
log_level_replica=warning,
log_on_each_node=True,
logging_dir=/content/dissertation/scripts/ner/output/tb,
logging_first_step=False,
logging_nan_inf_filter=True,
logging_steps=500,
logging_strategy=steps,
lr_scheduler_kwargs={},
lr_scheduler_type=linear,
max_grad_norm=1.0,
max_steps=-1,
metric_for_best_model=f1,
mp_parameters=,
neftune_noise_alpha=None,
no_cuda=False,
num_train_epochs=10.0,
optim=adamw_torch,
optim_args=None,
optim_target_modules=None,
output_dir=/content/dissertation/scripts/ner/output,
overwrite_output_dir=True,
past_index=-1,
per_device_eval_batch_size=8,
per_device_train_batch_size=32,
prediction_loss_only=False,
push_to_hub=True,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=,
ray_scope=last,
remove_unused_columns=True,
report_to=['tensorboard'],
restore_callback_states_from_checkpoint=False,
resume_from_checkpoint=None,
run_name=/content/dissertation/scripts/ner/output,
save_on_each_node=False,
save_only_model=False,
save_safetensors=True,
save_steps=500,
save_strategy=epoch,
save_total_limit=None,
seed=42,
skip_memory_metrics=True,
split_batches=None,
tf32=None,
torch_compile=False,
torch_compile_backend=None,
torch_compile_mode=None,
torch_empty_cache_steps=None,
torchdynamo=None,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_cpu=False,
use_ipex=False,
use_legacy_prediction_loop=False,
use_mps_device=False,
warmup_ratio=0.0,
warmup_steps=0,
weight_decay=0.0,
)
Downloading builder script: 0%| | 0.00/3.54k
loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--PlanTL-GOB-ES--bsc-bio-ehr-es/snapshots/1e543adb2d21f19d85a89305eebdbd64ab656b99/config.json
[INFO|configuration_utils.py:800] 2024-09-05 22:43:57,310 >> Model config RobertaConfig {
  "_name_or_path": "PlanTL-GOB-ES/bsc-bio-ehr-es",
  "architectures": [
    "RobertaForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "classifier_dropout": null,
  "eos_token_id": 2,
  "finetuning_task": "ner",
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "id2label": {
    "0": "O",
    "1": "B-ENFERMEDAD",
    "2": "I-ENFERMEDAD"
  },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {
    "B-ENFERMEDAD": 1,
    "I-ENFERMEDAD": 2,
    "O": 0
  },
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "roberta",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "transformers_version": "4.44.2",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 50262
}
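For reference, the non-default values in the TrainingArguments dump above can be reconstructed in a few lines. This is a minimal sketch, assuming transformers 4.44.x and using the newer `eval_strategy` name that the FutureWarning at the top of the log asks for; run_ner.py builds the same object from command-line flags rather than in code.

    from transformers import TrainingArguments

    # Minimal sketch of the non-default arguments visible in the dump above.
    training_args = TrainingArguments(
        output_dir="/content/dissertation/scripts/ner/output",
        overwrite_output_dir=True,
        do_train=True,
        do_eval=True,
        do_predict=True,
        num_train_epochs=10,
        learning_rate=5e-5,
        per_device_train_batch_size=32,
        per_device_eval_batch_size=8,
        gradient_accumulation_steps=2,
        eval_strategy="epoch",            # replaces the deprecated `evaluation_strategy`
        save_strategy="epoch",
        load_best_model_at_end=True,
        metric_for_best_model="f1",
        greater_is_better=True,
        logging_dir="/content/dissertation/scripts/ner/output/tb",
        report_to=["tensorboard"],
        push_to_hub=True,
        seed=42,
    )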
"transformers_version": "4.44.2", "type_vocab_size": 1, "use_cache": true, "vocab_size": 50262 } [INFO|tokenization_utils_base.py:2269] 2024-09-05 22:44:00,154 >> loading file vocab.json from cache at /root/.cache/huggingface/hub/models--PlanTL-GOB-ES--bsc-bio-ehr-es/snapshots/1e543adb2d21f19d85a89305eebdbd64ab656b99/vocab.json [INFO|tokenization_utils_base.py:2269] 2024-09-05 22:44:00,154 >> loading file merges.txt from cache at /root/.cache/huggingface/hub/models--PlanTL-GOB-ES--bsc-bio-ehr-es/snapshots/1e543adb2d21f19d85a89305eebdbd64ab656b99/merges.txt [INFO|tokenization_utils_base.py:2269] 2024-09-05 22:44:00,154 >> loading file tokenizer.json from cache at None [INFO|tokenization_utils_base.py:2269] 2024-09-05 22:44:00,154 >> loading file added_tokens.json from cache at None [INFO|tokenization_utils_base.py:2269] 2024-09-05 22:44:00,154 >> loading file special_tokens_map.json from cache at /root/.cache/huggingface/hub/models--PlanTL-GOB-ES--bsc-bio-ehr-es/snapshots/1e543adb2d21f19d85a89305eebdbd64ab656b99/special_tokens_map.json [INFO|tokenization_utils_base.py:2269] 2024-09-05 22:44:00,154 >> loading file tokenizer_config.json from cache at /root/.cache/huggingface/hub/models--PlanTL-GOB-ES--bsc-bio-ehr-es/snapshots/1e543adb2d21f19d85a89305eebdbd64ab656b99/tokenizer_config.json [INFO|configuration_utils.py:733] 2024-09-05 22:44:00,155 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--PlanTL-GOB-ES--bsc-bio-ehr-es/snapshots/1e543adb2d21f19d85a89305eebdbd64ab656b99/config.json [INFO|configuration_utils.py:800] 2024-09-05 22:44:00,156 >> Model config RobertaConfig { "_name_or_path": "PlanTL-GOB-ES/bsc-bio-ehr-es", "architectures": [ "RobertaForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "classifier_dropout": null, "eos_token_id": 2, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-05, "max_position_embeddings": 514, "model_type": "roberta", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 1, "position_embedding_type": "absolute", "transformers_version": "4.44.2", "type_vocab_size": 1, "use_cache": true, "vocab_size": 50262 } /usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. 
[INFO|configuration_utils.py:733] 2024-09-05 22:44:00,243 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--PlanTL-GOB-ES--bsc-bio-ehr-es/snapshots/1e543adb2d21f19d85a89305eebdbd64ab656b99/config.json
[INFO|configuration_utils.py:800] 2024-09-05 22:44:00,244 >> Model config RobertaConfig { ... } (identical to the previous config)
[INFO|modeling_utils.py:3678] 2024-09-05 22:44:31,232 >> loading weights file pytorch_model.bin from cache at /root/.cache/huggingface/hub/models--PlanTL-GOB-ES--bsc-bio-ehr-es/snapshots/1e543adb2d21f19d85a89305eebdbd64ab656b99/pytorch_model.bin
[INFO|modeling_utils.py:4497] 2024-09-05 22:44:31,384 >> Some weights of the model checkpoint at PlanTL-GOB-ES/bsc-bio-ehr-es were not used when initializing RobertaForTokenClassification: ['lm_head.bias', 'lm_head.decoder.bias', 'lm_head.decoder.weight', 'lm_head.dense.bias', 'lm_head.dense.weight', 'lm_head.layer_norm.bias', 'lm_head.layer_norm.weight']
- This IS expected if you are initializing RobertaForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[WARNING|modeling_utils.py:4509] 2024-09-05 22:44:31,384 >> Some weights of RobertaForTokenClassification were not initialized from the model checkpoint at PlanTL-GOB-ES/bsc-bio-ehr-es and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Map: 0%| | 0/31947
The following columns in the training set don't have a corresponding argument in `RobertaForTokenClassification.forward` and have been ignored: id, ner_tags, tokens. If id, ner_tags, tokens are not expected by `RobertaForTokenClassification.forward`, you can safely ignore this message.
[INFO|trainer.py:2134] 2024-09-05 22:44:39,141 >> ***** Running training *****
[INFO|trainer.py:2135] 2024-09-05 22:44:39,141 >> Num examples = 31,947
[INFO|trainer.py:2136] 2024-09-05 22:44:39,141 >> Num Epochs = 10
[INFO|trainer.py:2137] 2024-09-05 22:44:39,141 >> Instantaneous batch size per device = 32
[INFO|trainer.py:2140] 2024-09-05 22:44:39,141 >> Total train batch size (w. parallel, distributed & accumulation) = 64
[INFO|trainer.py:2141] 2024-09-05 22:44:39,141 >> Gradient Accumulation steps = 2
[INFO|trainer.py:2142] 2024-09-05 22:44:39,141 >> Total optimization steps = 4,990
[INFO|trainer.py:2143] 2024-09-05 22:44:39,142 >> Number of trainable parameters = 124,055,043
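The figures in the training header are consistent with the batch settings from the arguments dump. A small sanity check, assuming the usual Trainer bookkeeping (batches per epoch = ceil(examples / per-device batch size), update steps per epoch = batches per epoch // gradient accumulation steps):

    import math

    num_examples = 31_947                 # "Num examples" above
    per_device_train_batch_size = 32
    gradient_accumulation_steps = 2
    num_train_epochs = 10

    batches_per_epoch = math.ceil(num_examples / per_device_train_batch_size)   # 999
    update_steps_per_epoch = batches_per_epoch // gradient_accumulation_steps   # 499
    total_optimization_steps = update_steps_per_epoch * num_train_epochs        # 4990

    print(batches_per_epoch, update_steps_per_epoch, total_optimization_steps)  # 999 499 4990
    # 499 update steps per epoch is also why the first epoch-end checkpoint below is "checkpoint-499".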
0%| | 0/4990
The following columns in the evaluation set don't have a corresponding argument in `RobertaForTokenClassification.forward` and have been ignored: id, ner_tags, tokens. If id, ner_tags, tokens are not expected by `RobertaForTokenClassification.forward`, you can safely ignore this message.
[INFO|trainer.py:3819] 2024-09-05 22:46:40,953 >> ***** Running Evaluation *****
[INFO|trainer.py:3821] 2024-09-05 22:46:40,953 >> Num examples = 6810
[INFO|trainer.py:3824] 2024-09-05 22:46:40,953 >> Batch size = 8
0%| | 0/852
Saving model checkpoint to /content/dissertation/scripts/ner/output/checkpoint-499
[INFO|configuration_utils.py:472] 2024-09-05 22:46:55,160 >> Configuration saved in /content/dissertation/scripts/ner/output/checkpoint-499/config.json
[INFO|modeling_utils.py:2799] 2024-09-05 22:46:56,182 >> Model weights saved in /content/dissertation/scripts/ner/output/checkpoint-499/model.safetensors
[INFO|tokenization_utils_base.py:2684] 2024-09-05 22:46:56,183 >> tokenizer config file saved in /content/dissertation/scripts/ner/output/checkpoint-499/tokenizer_config.json
[INFO|tokenization_utils_base.py:2693] 2024-09-05 22:46:56,183 >> Special tokens file saved in /content/dissertation/scripts/ner/output/checkpoint-499/special_tokens_map.json
[INFO|tokenization_utils_base.py:2684] 2024-09-05 22:46:58,215 >> tokenizer config file saved in /content/dissertation/scripts/ner/output/tokenizer_config.json
[INFO|tokenization_utils_base.py:2693] 2024-09-05 22:46:58,215 >> Special tokens file saved in /content/dissertation/scripts/ner/output/special_tokens_map.json
10%|█ | 500/4990 [02:19<6:45:16, 5.42s/it]
(per-step tqdm progress lines for steps 501-628 condensed: throughput recovers to roughly 2.5-5.4 it/s after the evaluation and checkpoint pause, and the captured log ends at 13%, step 628/4990, about 02:50 elapsed)
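Once a checkpoint directory such as checkpoint-499 has been written, it can be loaded for quick spot-checking. A minimal sketch, assuming the checkpoint path from the log; the aggregation strategy and the example sentence are illustrative only:

    from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

    ckpt = "/content/dissertation/scripts/ner/output/checkpoint-499"  # path from the log above
    model = AutoModelForTokenClassification.from_pretrained(ckpt)
    tokenizer = AutoTokenizer.from_pretrained(ckpt)

    ner = pipeline(
        "token-classification",
        model=model,
        tokenizer=tokenizer,
        aggregation_strategy="simple",  # merge B-/I-ENFERMEDAD pieces into entity spans
    )
    print(ner("Paciente con diabetes mellitus tipo 2 y cefalea."))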
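The captured log ends mid-epoch at step 628 of 4,990. If a run like this is interrupted, it does not have to restart from scratch: the newest checkpoint-* directory under output_dir can be located and handed back to Trainer.train. A hedged sketch using the output directory from the arguments above:

    from transformers.trainer_utils import get_last_checkpoint

    output_dir = "/content/dissertation/scripts/ner/output"   # output_dir from the arguments above
    last_checkpoint = get_last_checkpoint(output_dir)          # e.g. ".../checkpoint-499", or None
    print(last_checkpoint)
    # run_ner.py passes a value like this (or resume_from_checkpoint=True) to trainer.train(...)
    # so an interrupted run continues from the newest saved checkpoint instead of starting over.

Progress can also be followed live by pointing TensorBoard at the logging_dir from the arguments (/content/dissertation/scripts/ner/output/tb), since report_to=['tensorboard'].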