|
Model parameters: d_model 224 ffw_size 896 kv_size 32 n_heads 7 n_layers 4 |
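
For reference, these hyperparameters imply the 14,147,392 total (2,420,544 non-embedding) parameters reported later in this log. A minimal sketch of the count, assuming tied input/output embeddings, learned absolute position embeddings, biased linear layers, and the padded vocabulary of 50304 reported below:

    # Parameter count implied by the configuration above (verifiable against
    # TOTAL_PARAMS=14147392 and "without embeddings: 0.002420544" below).
    d_model, ffw, n_layers, seq_len, vocab = 224, 896, 4, 2048, 50304

    attn = d_model * 3 * d_model + 3 * d_model           # fused QKV weight + bias
    attn += d_model * d_model + d_model                  # attention output proj + bias
    mlp = d_model * ffw + ffw + ffw * d_model + d_model  # h->4h and 4h->h linears
    norms = 2 * 2 * d_model                              # two LayerNorms (gain + bias)
    per_layer = attn + mlp + norms                       # 605,024

    embeddings = vocab * d_model + seq_len * d_model     # token + position tables
    non_embedding = n_layers * per_layer + 2 * d_model   # plus the final LayerNorm

    print(non_embedding)               # 2420544
    print(non_embedding + embeddings)  # 14147392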
|
Megatron-DeepSpeed/pretrain_gpt.py \
    --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 \
    --num-layers 4 --hidden-size 224 --num-attention-heads 7 --kv-channels 32 \
    --ffn-hidden-size 896 --seq-length 2048 --max-position-embeddings 2048 \
    --micro-batch-size 32 --global-batch-size 256 --train-samples 390_625 \
    --vocab-file gpt2/vocab.json --merge-file gpt2/merges.txt \
    --loss-scale 12 --clip-grad 1.0 --kill-switch-path kill-switch-14m800m100m \
    --bf16 --checkpoint-activations \
    --optimizer adam --adam-beta1 0.9 --adam-beta2 0.999 --adam-eps 1e-8 \
    --lr 2e-4 --min-lr 2e-5 --lr-decay-style cosine \
    --lr-decay-samples 390_625 --lr-warmup-samples 3906 \
    --weight-decay 1e-1 \
    --log-interval 10 --save-interval 1000 --eval-interval 1000 --eval-iters 1 \
    --tensorboard-dir tensorboard_14m800m100m --tensorboard-queue-size 5 \
    --log-timers-to-tensorboard --log-batch-size-to-tensorboard \
    --log-validation-ppl-to-tensorboard \
    --save checkpoints_14m800m100m --load checkpoints_14m800m100m \
    --train-weighted-split-paths-path train100m.txt \
    --valid-weighted-split-paths-path val.txt \
    --data-impl mmap \
    --deepspeed --deepspeed_config ds_configs/3423814.json --zero-stage 0
|
START 3423814: Thu 27 Apr 2023 04:05:43 PM EEST |
|
0: |
|
0: |
|
0: ======================= ROCm System Management Interface ======================= |
|
0: ================================= Concise Info ================================= |
|
0: GPU Temp AvgPwr SCLK MCLK Fan Perf PwrCap VRAM% GPU% |
|
0: 0 48.0c 90.0W 800Mhz 1600Mhz 0% auto 560.0W 0% 0% |
|
0: 1 48.0c N/A 800Mhz 1600Mhz 0% auto 0.0W 0% 0% |
|
0: 2 40.0c 92.0W 800Mhz 1600Mhz 0% auto 560.0W 0% 0% |
|
0: 3 49.0c N/A 800Mhz 1600Mhz 0% auto 0.0W 0% 0% |
|
0: 4 50.0c 91.0W 800Mhz 1600Mhz 0% auto 560.0W 0% 0% |
|
0: 5 45.0c N/A 800Mhz 1600Mhz 0% auto 0.0W 0% 0% |
|
0: 6 44.0c 88.0W 800Mhz 1600Mhz 0% auto 560.0W 0% 0% |
|
0: 7 46.0c N/A 800Mhz 1600Mhz 0% auto 0.0W 0% 0% |
|
0: ================================================================================ |
|
0: ============================= End of ROCm SMI Log ============================== |
|
0: Launching on nid005141 (0/1), master nid005141 port 9999, GPUs 8, CUDA: True |
|
0: using world size: 8, data-parallel-size: 8, tensor-model-parallel size: 1, pipeline-model-parallel size: 1 |
|
0: accumulate and all-reduce gradients in fp32 for bfloat16 data type. |
|
0: using torch.bfloat16 for parameters ... |
|
0: ------------------------ arguments ------------------------ |
|
0: abort_on_unmet_fused_kernel_constraints ......... False |
|
0: accumulate_allreduce_grads_in_fp32 .............. True |
|
0: adam_beta1 ...................................... 0.9 |
|
0: adam_beta2 ...................................... 0.999 |
|
0: adam_eps ........................................ 1e-08 |
|
0: adlr_autoresume ................................. False |
|
0: adlr_autoresume_interval ........................ 1000 |
|
0: apply_query_key_layer_scaling ................... True |
|
0: apply_residual_connection_post_layernorm ........ False |
|
0: attention_dropout ............................... 0.1 |
|
0: attention_softmax_in_fp32 ....................... False |
|
0: bert_binary_head ................................ True |
|
0: bert_load ....................................... None |
|
0: bf16 ............................................ True |
|
0: bias_dropout_fusion ............................. True |
|
0: bias_gelu_fusion ................................ True |
|
0: biencoder_projection_dim ........................ 0 |
|
0: biencoder_shared_query_context_model ............ False |
|
0: block_data_path ................................. None |
|
0: checkpoint_activations .......................... True |
|
0: checkpoint_in_cpu ............................... False |
|
0: checkpoint_num_layers ........................... 1 |
|
0: clip_grad ....................................... 1.0 |
|
0: codecarbon_dir .................................. None |
|
0: consumed_train_samples .......................... 0 |
|
0: consumed_train_tokens ........................... 0 |
|
0: consumed_valid_samples .......................... 0 |
|
0: contigious_checkpointing ........................ False |
|
0: cpu_optimizer ................................... False |
|
0: cpu_torch_adam .................................. False |
|
0: curriculum_learning ............................. False |
|
0: data_impl ....................................... mmap |
|
0: data_parallel_size .............................. 8 |
|
0: data_path ....................................... None |
|
0: dataloader_type ................................. single |
|
0: DDP_impl ........................................ local |
|
0: decoder_seq_length .............................. None |
|
0: deepscale ....................................... False |
|
0: deepscale_config ................................ None |
|
0: deepspeed ....................................... True |
|
0: deepspeed_activation_checkpointing .............. False |
|
0: deepspeed_config ................................ ds_configs/3423814.json |
|
0: deepspeed_mpi ................................... False |
|
0: distribute_checkpointed_activations ............. False |
|
0: distributed_backend ............................. nccl |
|
0: embed_layernorm ................................. False |
|
0: embedding_path .................................. None |
|
0: encoder_seq_length .............................. 2048 |
|
0: eod_mask_loss ................................... False |
|
0: eval_interval ................................... 1000 |
|
0: eval_iters ...................................... 1 |
|
0: eval_only ....................................... None |
|
0: evidence_data_path .............................. None |
|
0: exit_duration_in_mins ........................... None |
|
0: exit_interval ................................... None |
|
0: ffn_hidden_size ................................. 896 |
|
0: finetune ........................................ False |
|
0: fp16 ............................................ False |
|
0: fp16_lm_cross_entropy ........................... False |
|
0: fp32_residual_connection ........................ False |
|
0: gigaflos_no_embeds .............................. 0 |
|
0: global_batch_size ............................... 256 |
|
0: glu_activation .................................. None |
|
0: hidden_dropout .................................. 0.1 |
|
0: hidden_size ..................................... 224 |
|
0: hysteresis ...................................... 2 |
|
0: ict_head_size ................................... None |
|
0: ict_load ........................................ None |
|
0: img_dim ......................................... 224 |
|
0: indexer_batch_size .............................. 128 |
|
0: indexer_log_interval ............................ 1000 |
|
0: inference ....................................... False |
|
0: init_method_std ................................. 0.02 |
|
0: init_method_xavier_uniform ...................... False |
|
0: initial_loss_scale .............................. 4294967296 |
|
0: kill_switch_path ................................ kill-switch-14m800m100m |
|
0: kv_channels ..................................... 32 |
|
0: layer_norm_fusion ............................... True |
|
0: layernorm_epsilon ............................... 1e-05 |
|
0: lazy_mpu_init ................................... None |
|
0: load ............................................ checkpoints_14m800m100m |
|
0: local_rank ...................................... None |
|
0: log_batch_size_to_tensorboard ................... True |
|
0: log_interval .................................... 10 |
|
0: log_learning_rate_to_tensorboard ................ True |
|
0: log_level ....................................... None |
|
0: log_level_replica ............................... None |
|
0: log_loss_scale_to_tensorboard ................... True |
|
0: log_num_zeros_in_grad ........................... False |
|
0: log_params_norm ................................. False |
|
0: log_path ........................................ None |
|
0: log_timers_to_tensorboard ....................... True |
|
0: log_validation_ppl_to_tensorboard ............... True |
|
0: loss_on_targets_only ............................ False |
|
0: loss_scale ...................................... 12.0 |
|
0: loss_scale_window ............................... 1000 |
|
0: lr .............................................. 0.0002 |
|
0: lr_decay_iters .................................. None |
|
0: lr_decay_samples ................................ 390625 |
|
0: lr_decay_style .................................. cosine |
|
0: lr_decay_tokens ................................. None |
|
0: lr_warmup_fraction .............................. None |
|
0: lr_warmup_iters ................................. 0 |
|
0: lr_warmup_samples ............................... 3906 |
|
0: make_vocab_size_divisible_by .................... 128 |
|
0: mask_prob ....................................... 0.15 |
|
0: masked_softmax_fusion ........................... True |
|
0: max_position_embeddings ......................... 2048 |
|
0: mean_noise_span_length .......................... None |
|
0: memory_centric_tiled_linear ..................... False |
|
0: merge_file ...................................... gpt2/merges.txt |
|
0: micro_batch_size ................................ 32 |
|
0: min_loss_scale .................................. 1.0 |
|
0: min_lr .......................................... 2e-05 |
|
0: mmap_warmup ..................................... False |
|
0: no_load_optim ................................... None |
|
0: no_load_rng ..................................... None |
|
0: no_save_optim ................................... None |
|
0: no_save_rng ..................................... None |
|
0: noise_density ................................... None |
|
0: num_attention_heads ............................. 7 |
|
0: num_channels .................................... 3 |
|
0: num_classes ..................................... 1000 |
|
0: num_layers ...................................... 4 |
|
0: num_layers_per_virtual_pipeline_stage ........... None |
|
0: num_workers ..................................... 2 |
|
0: onnx_safe ....................................... None |
|
0: openai_gelu ..................................... False |
|
0: optimizer ....................................... adam |
|
0: optimizer_fusion ................................ True |
|
0: override_lr_scheduler ........................... False |
|
0: pad_vocab_size_to ............................... None |
|
0: params_dtype .................................... torch.bfloat16 |
|
0: partition_activations ........................... False |
|
0: patch_dim ....................................... 16 |
|
0: pipeline_model_parallel_size .................... 1 |
|
0: position_embedding_type ......................... PositionEmbeddingType.absolute |
|
0: pp_partition_method ............................. None |
|
0: profile_backward ................................ False |
|
0: query_in_block_prob ............................. 0.1 |
|
0: rampup_batch_size ............................... None |
|
0: rank ............................................ 0 |
|
0: remote_device ................................... none |
|
0: reset_attention_mask ............................ False |
|
0: reset_position_ids .............................. False |
|
0: reset_progress .................................. None |
|
0: retriever_report_topk_accuracies ................ [] |
|
0: retriever_score_scaling ......................... False |
|
0: retriever_seq_length ............................ 256 |
|
0: reweight_loss_based_on_position_frequency ....... False |
|
0: sample_rate ..................................... 1.0 |
|
0: save ............................................ checkpoints_14m800m100m |
|
0: save_interval ................................... 1000 |
|
0: scatter_gather_tensors_in_pipeline .............. True |
|
0: scattered_embeddings ............................ False |
|
0: seed ............................................ 1234 |
|
0: seq_length ...................................... 2048 |
|
0: sgd_momentum .................................... 0.9 |
|
0: short_seq_prob .................................. 0.1 |
|
0: skip_train_iteration_range ...................... None |
|
0: split ........................................... None |
|
0: split_transformers .............................. False |
|
0: sync_tp_duplicated_parameters ................... False |
|
0: synchronize_each_layer .......................... False |
|
0: tensor_model_parallel_size ...................... 1 |
|
0: tensorboard_dir ................................. tensorboard_14m800m100m |
|
0: tensorboard_log_interval ........................ 1 |
|
0: tensorboard_queue_size .......................... 5 |
|
0: test_weighted_split_paths ....................... None |
|
0: test_weighted_split_paths_path .................. None |
|
0: tile_factor ..................................... 1 |
|
0: titles_data_path ................................ None |
|
0: tokenizer_name_or_path .......................... None |
|
0: tokenizer_type .................................. GPT2BPETokenizer |
|
0: train_iters ..................................... None |
|
0: train_samples ................................... 390625 |
|
0: train_tokens .................................... None |
|
0: train_weighted_split_names ...................... ['train'] |
|
0: train_weighted_split_paths ...................... [['/scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_100M_text_document']] |
|
0: train_weighted_split_paths_path ................. None |
|
0: train_weighted_split_splits ..................... [['0:1']] |
|
0: train_weighted_split_weights .................... [['1.0']] |
|
0: universal_checkpoint ............................ False |
|
0: use_bnb_optimizer ............................... False |
|
0: use_checkpoint_lr_scheduler ..................... False |
|
0: use_contiguous_buffers_in_ddp ................... True |
|
0: use_cpu_initialization .......................... None |
|
0: use_one_sent_docs ............................... False |
|
0: use_pin_memory .................................. False |
|
0: valid_num_workers ............................... 2 |
|
0: valid_weighted_split_names ...................... ['validation'] |
|
0: valid_weighted_split_paths ...................... [['/scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document']] |
|
0: valid_weighted_split_paths_path ................. None |
|
0: valid_weighted_split_splits ..................... [['0:1']] |
|
0: valid_weighted_split_weights .................... [['1.0']] |
|
0: virtual_pipeline_model_parallel_size ............ None |
|
0: vocab_extra_ids ................................. 0 |
|
0: vocab_file ...................................... gpt2/vocab.json |
|
0: weight_decay .................................... 0.1 |
|
0: world_size ...................................... 8 |
|
0: zero_allgather_bucket_size ...................... 0.0 |
|
0: zero_contigious_gradients ....................... False |
|
0: zero_reduce_bucket_size ......................... 0.0 |
|
0: zero_reduce_scatter ............................. False |
|
0: zero_stage ...................................... 0 |
|
0: -------------------- end of arguments --------------------- |
|
0: setting number of micro-batches to constant 1 |
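
A single micro-batch per step follows directly from the batch-size arguments; a quick check of that arithmetic (illustrative, not Megatron's actual code):

    # micro-batches per optimizer step = global batch / (micro batch x DP ranks)
    global_batch, micro_batch, dp_size = 256, 32, 8
    print(global_batch // (micro_batch * dp_size))  # 1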
|
0: > building GPT2BPETokenizer tokenizer ... |
|
0: > padded vocab (size: 50257) with 47 dummy tokens (new size: 50304) |
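
The padding rounds GPT-2's 50,257-token vocabulary up to the next multiple of make_vocab_size_divisible_by (128) times the tensor-parallel size (1). A minimal sketch of that rounding:

    import math
    # Round the vocab up to a multiple of 128 * tensor_model_parallel_size.
    vocab, divisor = 50257, 128 * 1
    padded = math.ceil(vocab / divisor) * divisor
    print(padded, padded - vocab)  # 50304 47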
|
0: DeepSpeed general environment info: |
|
0: torch install path ............... ['/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/lib/python3.9/site-packages/torch'] |
|
0: torch version .................... 1.13.0+rocm5.2 |
|
0: torch cuda version ............... None |
|
0: torch hip version ................ 5.2.21151-afdc89f8 |
|
0: nvcc version ..................... None |
|
0: deepspeed install path ........... ['/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/lib/python3.9/site-packages/deepspeed'] |
|
0: deepspeed info ................... 0.7.5, unknown, unknown |
|
0: deepspeed wheel compiled w. ...... torch 1.13, hip 5.1 |
|
0: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** |
|
0: > initializing torch distributed ... |
|
0: [2023-04-27 16:08:15,848] [INFO] [comm.py:633:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl |
|
0: > setting tensorboard ... |
|
0: > initializing tensor model parallel with size 1 |
|
0: > initializing pipeline model parallel with size 1 |
|
0: > setting random seeds to 1234 ... |
|
0: > initializing model parallel cuda seeds on global rank 0, model parallel rank 0, and data parallel rank 0 with model parallel seed: 3952 and data parallel seed: 1234 |
|
0: > compiling dataset index builder ... |
|
0: make: Entering directory '/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/data' |
|
0: make: Nothing to be done for 'default'. |
|
0: make: Leaving directory '/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/data' |
|
0: >>> done with dataset index builder. Compilation time: 0.110 seconds |
|
0: > compiling and loading fused kernels ... |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.cpp -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.cpp [skipped, already hipified] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.h [skipped, already hipified] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_cuda.cu -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.hip [skipped, already hipified] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.h [skipped, already hipified] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.h [skipped, already hipified] |
|
0: Total number of unsupported CUDA function calls: 0 |
|
0: |
|
0: |
|
0: Total number of replaced kernel launches: 87 |
|
0: ninja: no work to do. |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.cpp -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.cpp [skipped, already hipified] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_cuda.cu -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.hip [skipped, already hipified] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.h [skipped, already hipified] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.h [skipped, already hipified] |
|
0: Total number of unsupported CUDA function calls: 0 |
|
0: |
|
0: |
|
0: Total number of replaced kernel launches: 63 |
|
0: ninja: no work to do. |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_cuda.cpp -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_cuda.cpp [skipped, no changes] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_cuda_kernel.cu -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_hip_kernel.hip [skipped, already hipified] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.h [skipped, already hipified] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.h [skipped, already hipified] |
|
0: Total number of unsupported CUDA function calls: 0 |
|
0: |
|
0: |
|
0: Total number of replaced kernel launches: 67 |
|
0: ninja: no work to do. |
|
0: >>> done with compiling and loading fused kernels. Compilation time: 10.803 seconds |
|
0: time to initialize megatron (seconds): 75.248 |
|
0: [after megatron is initialized] datetime: 2023-04-27 16:08:27 |
|
0: building GPT model ... |
|
0: [2023-04-27 16:08:27,375] [INFO] [utils.py:827:see_memory_usage] Before Building Model |
|
0: [2023-04-27 16:08:27,376] [INFO] [utils.py:828:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB |
|
0: [2023-04-27 16:08:27,376] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 37.64 GB, percent = 7.5% |
|
0: SEED_LAYERS=False BASE_SEED=1234 SEED_FN=None |
|
0: Using topology: {ProcessCoord(pipe=0, data=0, model=0): 0, ProcessCoord(pipe=0, data=1, model=0): 1, ProcessCoord(pipe=0, data=2, model=0): 2, ProcessCoord(pipe=0, data=3, model=0): 3, ProcessCoord(pipe=0, data=4, model=0): 4, ProcessCoord(pipe=0, data=5, model=0): 5, ProcessCoord(pipe=0, data=6, model=0): 6, ProcessCoord(pipe=0, data=7, model=0): 7} |
|
0: [2023-04-27 16:08:27,617] [INFO] [module.py:366:_partition_layers] Partitioning pipeline stages with method type:transformer |
|
0: stage=0 layers=11 |
|
0: 0: _to_float16 |
|
0: 1: EmbeddingPipe |
|
0: 2: <lambda> |
|
0: 3: ParallelTransformerLayerPipe |
|
0: 4: ParallelTransformerLayerPipe |
|
0: 5: ParallelTransformerLayerPipe |
|
0: 6: ParallelTransformerLayerPipe |
|
0: 7: undo |
|
0: 8: MixedFusedLayerNorm |
|
0: 9: EmbeddingPipe |
|
0: 10: float16_to_fp32 |
|
0: loss: CrossEntropy |
|
0: [2023-04-27 16:08:27,906] [INFO] [utils.py:827:see_memory_usage] After Building Model |
|
0: [2023-04-27 16:08:27,907] [INFO] [utils.py:828:see_memory_usage] MA 0.03 GB Max_MA 0.03 GB CA 0.05 GB Max_CA 0 GB |
|
0: [2023-04-27 16:08:27,907] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 37.67 GB, percent = 7.5% |
|
0: setting training iterations to 1525 |
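
The iteration count is the sample budget divided by the global batch size (integer division; no batch-size ramp-up is configured):

    train_samples, global_batch = 390_625, 256
    print(train_samples // global_batch)  # 1525 iterations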
|
0: > learning rate decay style: cosine |
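
The schedule is linear warmup over 3,906 samples, then cosine decay to min_lr over the 390,625-sample budget. The sketch below approximates megatron.learning_rates.AnnealingLR (it is not the exact implementation) and reproduces the 1.311E-04 printed at iteration 10 (2,560 consumed samples) further down:

    import math

    def lr_at(samples, max_lr=2e-4, min_lr=2e-5, warmup=3906, decay=390_625):
        # Linear warmup to max_lr, then cosine decay to min_lr.
        if samples < warmup:
            return max_lr * samples / warmup
        progress = min((samples - warmup) / (decay - warmup), 1.0)
        return min_lr + (max_lr - min_lr) * 0.5 * (1 + math.cos(math.pi * progress))

    print(f"{lr_at(2560):.3E}")  # 1.311E-04, as at iteration 10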
|
0: DeepSpeed is enabled. |
|
0: [2023-04-27 16:08:27,908] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed info: version=0.7.5, git-hash=unknown, git-branch=unknown |
|
0: [2023-04-27 16:08:32,445] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False |
|
0: [2023-04-27 16:08:32,446] [INFO] [logging.py:68:log_dist] [Rank 0] Removing param_group that has no 'params' in the client Optimizer |
|
0: [2023-04-27 16:08:32,446] [INFO] [logging.py:68:log_dist] [Rank 0] Using client Optimizer as basic optimizer |
|
0: [2023-04-27 16:08:32,446] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Basic Optimizer = FusedAdam |
|
0: [2023-04-27 16:08:32,446] [INFO] [logging.py:68:log_dist] [Rank 0] Creating BF16 optimizer |
|
0: [2023-04-27 16:08:32,562] [INFO] [utils.py:827:see_memory_usage] begin bf16_optimizer |
|
0: [2023-04-27 16:08:32,563] [INFO] [utils.py:828:see_memory_usage] MA 0.03 GB Max_MA 0.03 GB CA 0.05 GB Max_CA 0 GB |
|
0: [2023-04-27 16:08:32,563] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 39.95 GB, percent = 7.9% |
|
0: ninja: no work to do. |
|
0: Time to load utils op: 0.5685186386108398 seconds |
|
0: Time to load utils op: 0.5682590007781982 seconds

0: Time to load utils op: 0.45250701904296875 seconds
|
0: Time to load utils op: 0.5687985420227051 seconds |
|
0: Time to load utils op: 0.5690939426422119 seconds |
|
0: Time to load utils op: 0.5684058666229248 seconds

0: Time to load utils op: 0.5682809352874756 seconds
|
0: Time to load utils op: 0.5694985389709473 seconds |
|
0: [2023-04-27 16:08:33,127] [INFO] [utils.py:827:see_memory_usage] before initializing group 0 |
|
0: [2023-04-27 16:08:33,128] [INFO] [utils.py:828:see_memory_usage] MA 0.03 GB Max_MA 0.03 GB CA 0.05 GB Max_CA 0 GB |
|
0: [2023-04-27 16:08:33,128] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 39.4 GB, percent = 7.8% |
|
0: Time to load utils op: 0.0025482177734375 seconds

0: Time to load utils op: 0.0024967193603515625 seconds

0: Time to load utils op: 0.0026845932006835938 seconds
|
0: Time to load utils op: 0.002675771713256836 seconds |
|
0: Time to load utils op: 0.0027112960815429688 seconds |
|
0: Time to load utils op: 0.002604246139526367 seconds |
|
0: Time to load utils op: 0.002574443817138672 seconds |
|
0: [2023-04-27 16:08:33,652] [INFO] [utils.py:827:see_memory_usage] after initializing group 0 |
|
0: [2023-04-27 16:08:33,653] [INFO] [utils.py:828:see_memory_usage] MA 0.08 GB Max_MA 0.08 GB CA 0.12 GB Max_CA 0 GB |
|
0: [2023-04-27 16:08:33,653] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 38.69 GB, percent = 7.7% |
|
0: [2023-04-27 16:08:33,761] [INFO] [utils.py:827:see_memory_usage] before initializing group 1 |
|
0: [2023-04-27 16:08:33,762] [INFO] [utils.py:828:see_memory_usage] MA 0.08 GB Max_MA 0.08 GB CA 0.12 GB Max_CA 0 GB |
|
0: [2023-04-27 16:08:33,762] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 38.71 GB, percent = 7.7% |
|
0: [2023-04-27 16:08:33,865] [INFO] [utils.py:827:see_memory_usage] after initializing group 1 |
|
0: [2023-04-27 16:08:33,866] [INFO] [utils.py:828:see_memory_usage] MA 0.09 GB Max_MA 0.09 GB CA 0.12 GB Max_CA 0 GB |
|
0: [2023-04-27 16:08:33,866] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 38.5 GB, percent = 7.7% |
|
0: [2023-04-27 16:08:33,967] [INFO] [utils.py:827:see_memory_usage] before initializing group 2 |
|
0: [2023-04-27 16:08:33,968] [INFO] [utils.py:828:see_memory_usage] MA 0.09 GB Max_MA 0.09 GB CA 0.12 GB Max_CA 0 GB |
|
0: [2023-04-27 16:08:33,968] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 38.28 GB, percent = 7.6% |
|
0: [2023-04-27 16:08:34,070] [INFO] [utils.py:827:see_memory_usage] after initializing group 2 |
|
0: [2023-04-27 16:08:34,071] [INFO] [utils.py:828:see_memory_usage] MA 0.09 GB Max_MA 0.09 GB CA 0.12 GB Max_CA 0 GB |
|
0: [2023-04-27 16:08:34,071] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 38.04 GB, percent = 7.6% |
|
0: [2023-04-27 16:08:34,171] [INFO] [utils.py:827:see_memory_usage] before initialize_optimizer |
|
0: [2023-04-27 16:08:34,172] [INFO] [utils.py:828:see_memory_usage] MA 0.09 GB Max_MA 0.09 GB CA 0.12 GB Max_CA 0 GB |
|
0: [2023-04-27 16:08:34,172] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 37.89 GB, percent = 7.5% |
|
0: [2023-04-27 16:08:34,279] [INFO] [utils.py:827:see_memory_usage] end initialize_optimizer |
|
0: [2023-04-27 16:08:34,279] [INFO] [utils.py:828:see_memory_usage] MA 0.1 GB Max_MA 0.1 GB CA 0.12 GB Max_CA 0 GB |
|
0: [2023-04-27 16:08:34,280] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 37.89 GB, percent = 7.5% |
|
0: [2023-04-27 16:08:34,379] [INFO] [utils.py:827:see_memory_usage] end bf16_optimizer |
|
0: [2023-04-27 16:08:34,380] [INFO] [utils.py:828:see_memory_usage] MA 0.1 GB Max_MA 0.1 GB CA 0.12 GB Max_CA 0 GB |
|
0: [2023-04-27 16:08:34,380] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 37.89 GB, percent = 7.5% |
|
0: [2023-04-27 16:08:34,380] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Final Optimizer = FusedAdam |
|
0: [2023-04-27 16:08:34,380] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed using client LR scheduler |
|
0: [2023-04-27 16:08:34,381] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed LR Scheduler = <megatron.learning_rates.AnnealingLR object at 0x14d694e65af0> |
|
0: [2023-04-27 16:08:34,381] [INFO] [logging.py:68:log_dist] [Rank 0] step=0, skipped=0, lr=[0.0, 0.0, 0.0], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)] |
|
0: [2023-04-27 16:08:34,381] [INFO] [config.py:1007:print] DeepSpeedEngine configuration: |
|
0: [2023-04-27 16:08:34,381] [INFO] [config.py:1011:print] activation_checkpointing_config { |
|
0: "partition_activations": false, |
|
0: "contiguous_memory_optimization": false, |
|
0: "cpu_checkpointing": false, |
|
0: "number_checkpoints": null, |
|
0: "synchronize_checkpoint_boundary": false, |
|
0: "profile": false |
|
0: } |
|
0: [2023-04-27 16:08:34,381] [INFO] [config.py:1011:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True} |
|
0: [2023-04-27 16:08:34,381] [INFO] [config.py:1011:print] amp_enabled .................. False |
|
0: [2023-04-27 16:08:34,381] [INFO] [config.py:1011:print] amp_params ................... False |
|
0: [2023-04-27 16:08:34,382] [INFO] [config.py:1011:print] autotuning_config ............ { |
|
0: "enabled": false, |
|
0: "start_step": null, |
|
0: "end_step": null, |
|
0: "metric_path": null, |
|
0: "arg_mappings": null, |
|
0: "metric": "throughput", |
|
0: "model_info": null, |
|
0: "results_dir": "/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/autotuning_results", |
|
0: "exps_dir": "/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/autotuning_exps", |
|
0: "overwrite": true, |
|
0: "fast": true, |
|
0: "start_profile_step": 3, |
|
0: "end_profile_step": 5, |
|
0: "tuner_type": "gridsearch", |
|
0: "tuner_early_stopping": 5, |
|
0: "tuner_num_trials": 50, |
|
0: "model_info_path": null, |
|
0: "mp_size": 1, |
|
0: "max_train_batch_size": null, |
|
0: "min_train_batch_size": 1, |
|
0: "max_train_micro_batch_size_per_gpu": 1.024000e+03, |
|
0: "min_train_micro_batch_size_per_gpu": 1, |
|
0: "num_tuning_micro_batch_sizes": 3 |
|
0: } |
|
0: [2023-04-27 16:08:34,382] [INFO] [config.py:1011:print] bfloat16_enabled ............. True |
|
0: [2023-04-27 16:08:34,382] [INFO] [config.py:1011:print] checkpoint_parallel_write_pipeline False |
|
0: [2023-04-27 16:08:34,382] [INFO] [config.py:1011:print] checkpoint_tag_validation_enabled True |
|
0: [2023-04-27 16:08:34,382] [INFO] [config.py:1011:print] checkpoint_tag_validation_fail False |
|
0: [2023-04-27 16:08:34,382] [INFO] [config.py:1011:print] comms_config ................. <deepspeed.comm.config.DeepSpeedCommsConfig object at 0x14d694e658b0> |
|
0: [2023-04-27 16:08:34,382] [INFO] [config.py:1011:print] communication_data_type ...... None |
|
0: [2023-04-27 16:08:34,382] [INFO] [config.py:1011:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}}
|
0: [2023-04-27 16:08:34,382] [INFO] [config.py:1011:print] curriculum_enabled ........... False |
|
0: [2023-04-27 16:08:34,382] [INFO] [config.py:1011:print] curriculum_params ............ False |
|
0: [2023-04-27 16:08:34,382] [INFO] [config.py:1011:print] dataloader_drop_last ......... False |
|
0: [2023-04-27 16:08:34,382] [INFO] [config.py:1011:print] disable_allgather ............ False |
|
0: [2023-04-27 16:08:34,382] [INFO] [config.py:1011:print] dump_state ................... False |
|
0: [2023-04-27 16:08:34,382] [INFO] [config.py:1011:print] dynamic_loss_scale_args ...... None |
|
0: [2023-04-27 16:08:34,382] [INFO] [config.py:1011:print] eigenvalue_enabled ........... False |
|
0: [2023-04-27 16:08:34,382] [INFO] [config.py:1011:print] eigenvalue_gas_boundary_resolution 1 |
|
0: [2023-04-27 16:08:34,382] [INFO] [config.py:1011:print] eigenvalue_layer_name ........ bert.encoder.layer |
|
0: [2023-04-27 16:08:34,382] [INFO] [config.py:1011:print] eigenvalue_layer_num ......... 0 |
|
0: [2023-04-27 16:08:34,382] [INFO] [config.py:1011:print] eigenvalue_max_iter .......... 100 |
|
0: [2023-04-27 16:08:34,383] [INFO] [config.py:1011:print] eigenvalue_stability ......... 1e-06 |
|
0: [2023-04-27 16:08:34,383] [INFO] [config.py:1011:print] eigenvalue_tol ............... 0.01 |
|
0: [2023-04-27 16:08:34,383] [INFO] [config.py:1011:print] eigenvalue_verbose ........... False |
|
0: [2023-04-27 16:08:34,383] [INFO] [config.py:1011:print] elasticity_enabled ........... False |
|
0: [2023-04-27 16:08:34,383] [INFO] [config.py:1011:print] flops_profiler_config ........ { |
|
0: "enabled": false, |
|
0: "profile_step": 1, |
|
0: "module_depth": -1, |
|
0: "top_modules": 1, |
|
0: "detailed": true, |
|
0: "output_file": null |
|
0: } |
|
0: [2023-04-27 16:08:34,383] [INFO] [config.py:1011:print] fp16_auto_cast ............... None |
|
0: [2023-04-27 16:08:34,383] [INFO] [config.py:1011:print] fp16_enabled ................. False |
|
0: [2023-04-27 16:08:34,383] [INFO] [config.py:1011:print] fp16_master_weights_and_gradients False |
|
0: [2023-04-27 16:08:34,383] [INFO] [config.py:1011:print] global_rank .................. 0 |
|
0: [2023-04-27 16:08:34,383] [INFO] [config.py:1011:print] gradient_accumulation_steps .. 1 |
|
0: [2023-04-27 16:08:34,383] [INFO] [config.py:1011:print] gradient_clipping ............ 1.0 |
|
0: [2023-04-27 16:08:34,383] [INFO] [config.py:1011:print] gradient_predivide_factor .... 1.0 |
|
0: [2023-04-27 16:08:34,383] [INFO] [config.py:1011:print] initial_dynamic_scale ........ 1 |
|
0: [2023-04-27 16:08:34,383] [INFO] [config.py:1011:print] load_universal_checkpoint .... False |
|
0: [2023-04-27 16:08:34,383] [INFO] [config.py:1011:print] loss_scale ................... 1.0 |
|
0: [2023-04-27 16:08:34,383] [INFO] [config.py:1011:print] memory_breakdown ............. False |
|
0: [2023-04-27 16:08:34,383] [INFO] [config.py:1011:print] monitor_config ............... <deepspeed.monitor.config.DeepSpeedMonitorConfig object at 0x14d694e65820> |
|
0: [2023-04-27 16:08:34,383] [INFO] [config.py:1011:print] nebula_config ................ { |
|
0: "enabled": false, |
|
0: "persistent_storage_path": null, |
|
0: "persistent_time_interval": 100, |
|
0: "num_of_version_in_retention": 2, |
|
0: "enable_nebula_load": true, |
|
0: "load_path": null |
|
0: } |
|
0: [2023-04-27 16:08:34,383] [INFO] [config.py:1011:print] optimizer_legacy_fusion ...... False |
|
0: [2023-04-27 16:08:34,383] [INFO] [config.py:1011:print] optimizer_name ............... None |
|
0: [2023-04-27 16:08:34,384] [INFO] [config.py:1011:print] optimizer_params ............. None |
|
0: [2023-04-27 16:08:34,384] [INFO] [config.py:1011:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0} |
|
0: [2023-04-27 16:08:34,384] [INFO] [config.py:1011:print] pld_enabled .................. False |
|
0: [2023-04-27 16:08:34,384] [INFO] [config.py:1011:print] pld_params ................... False |
|
0: [2023-04-27 16:08:34,384] [INFO] [config.py:1011:print] prescale_gradients ........... False |
|
0: [2023-04-27 16:08:34,384] [INFO] [config.py:1011:print] scheduler_name ............... None |
|
0: [2023-04-27 16:08:34,384] [INFO] [config.py:1011:print] scheduler_params ............. None |
|
0: [2023-04-27 16:08:34,384] [INFO] [config.py:1011:print] sparse_attention ............. None |
|
0: [2023-04-27 16:08:34,384] [INFO] [config.py:1011:print] sparse_gradients_enabled ..... False |
|
0: [2023-04-27 16:08:34,384] [INFO] [config.py:1011:print] steps_per_print .............. 2000 |
|
0: [2023-04-27 16:08:34,384] [INFO] [config.py:1011:print] train_batch_size ............. 256 |
|
0: [2023-04-27 16:08:34,384] [INFO] [config.py:1011:print] train_micro_batch_size_per_gpu 32 |
|
0: [2023-04-27 16:08:34,384] [INFO] [config.py:1011:print] use_node_local_storage ....... False |
|
0: [2023-04-27 16:08:34,384] [INFO] [config.py:1011:print] wall_clock_breakdown ......... False |
|
0: [2023-04-27 16:08:34,384] [INFO] [config.py:1011:print] world_size ................... 8 |
|
0: [2023-04-27 16:08:34,384] [INFO] [config.py:1011:print] zero_allow_untested_optimizer False |
|
0: [2023-04-27 16:08:34,384] [INFO] [config.py:1011:print] zero_config .................. stage=0 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=500000000 allgather_partitions=True allgather_bucket_size=500000000 overlap_comm=False load_from_fp32_weights=True elastic_checkpoint=False offload_param=None offload_optimizer=None sub_group_size=1000000000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=50000000 param_persistence_threshold=100000 model_persistence_threshold=9223372036854775807 max_live_parameters=1000000000 max_reuse_distance=1000000000 gather_16bit_weights_on_model_save=False stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False |
|
0: [2023-04-27 16:08:34,384] [INFO] [config.py:1011:print] zero_enabled ................. False |
|
0: [2023-04-27 16:08:34,384] [INFO] [config.py:1011:print] zero_optimization_stage ...... 0 |
|
0: [2023-04-27 16:08:34,385] [INFO] [config.py:996:print_user_config] json = { |
|
0: "train_micro_batch_size_per_gpu": 32, |
|
0: "train_batch_size": 256, |
|
0: "gradient_clipping": 1.0, |
|
0: "zero_optimization": { |
|
0: "stage": 0 |
|
0: }, |
|
0: "bf16": { |
|
0: "enabled": true |
|
0: }, |
|
0: "steps_per_print": 2.000000e+03, |
|
0: "wall_clock_breakdown": false |
|
0: } |
|
0: Time to load utils op: 0.00045561790466308594 seconds |
|
0: [2023-04-27 16:08:34,385] [INFO] [engine.py:87:__init__] CONFIG: micro_batches=1 micro_batch_size=32 |
|
0: [2023-04-27 16:08:34,398] [INFO] [engine.py:145:__init__] RANK=0 STAGE=0 LAYERS=11 [0, 11) STAGE_PARAMS=14147392 (14.147M) TOTAL_PARAMS=14147392 (14.147M) UNIQUE_PARAMS=14147392 (14.147M) |
|
0: [2023-04-27 16:08:34,399] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_14m800m100m/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. |
|
0: WARNING: could not find the metadata file checkpoints_14m800m100m |
|
0: will not load any checkpoints and will start from random |
|
|
0: time (ms) | load-checkpoint: 1.17 |
|
0: estimated model parameters: 0.014147392 |
|
0: estimated model parameters without embeddings: 0.002420544 |
|
0: [after model, optimizer, and learning rate scheduler are built] datetime: 2023-04-27 16:08:35 |
|
0: > building train, validation, and test datasets ... |
|
0: > datasets target sizes (minimum size): |
|
0: train: 390625 |
|
0: validation: 512 |
|
0: test: 256 |
|
0: > building train, validation, and test datasets for GPT ... |
|
0: > building dataset index ... |
|
0: reading sizes... |
|
0: reading pointers... |
|
0: reading document index... |
|
0: creating numpy buffer of mmap... |
|
0: creating memory view of numpy buffer... |
|
0: > finished creating indexed dataset in 0.033962 seconds |
|
0: number of documents: 208931 |
|
0: > dataset split: |
|
0: train: |
|
0: document indices in [0, 208931) total of 208931 documents |
|
0: > loading doc-idx mapping from /scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_100M_text_document_train_indexmap_390625ns_2048sl_1234s_doc_idx.npy |
|
0: > loading sample-idx mapping from /scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_100M_text_document_train_indexmap_390625ns_2048sl_1234s_sample_idx.npy |
|
0: > loading shuffle-idx mapping from /scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_100M_text_document_train_indexmap_390625ns_2048sl_1234s_shuffle_idx.npy |
|
0: loaded indexed file in 0.125 seconds |
|
0: total number of samples: 439244 |
|
0: total number of epochs: 9 |
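
Nine epochs is what the 390,625-sample budget requires of this dataset: the two figures above imply roughly 439,244 / 9 ≈ 48,804 sequences per epoch. A rough check with that inferred epoch size:

    import math
    train_samples = 390_625
    samples_per_epoch = 439_244 // 9   # ~48,804, inferred from the log above
    print(math.ceil(train_samples / samples_per_epoch))  # 9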
|
0: > building dataset index ... |
|
0: reading sizes... |
|
0: reading pointers... |
|
0: reading document index... |
|
0: creating numpy buffer of mmap... |
|
0: creating memory view of numpy buffer... |
|
0: > finished creating indexed dataset in 0.088767 seconds |
|
0: number of documents: 364608 |
|
0: > dataset split: |
|
0: validation: |
|
0: document indices in [0, 364608) total of 364608 documents |
|
0: > loading doc-idx mapping from /scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document_validation_indexmap_512ns_2048sl_1234s_doc_idx.npy |
|
0: > loading sample-idx mapping from /scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document_validation_indexmap_512ns_2048sl_1234s_sample_idx.npy |
|
0: > loading shuffle-idx mapping from /scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document_validation_indexmap_512ns_2048sl_1234s_shuffle_idx.npy |
|
0: loaded indexed file in 0.143 seconds |
|
0: total number of samples: 84978 |
|
0: total number of epochs: 1 |
|
0: > finished creating GPT datasets ... |
|
0: time (ms) | model-and-optimizer-setup: 7742.68 | train/valid/test-data-iterators-setup: 5562.67 |
|
0: [after dataloaders are built] datetime: 2023-04-27 16:08:40 |
|
0: done with setup ... |
|
0: training ... |
|
0: Number of parameters: [tensor rank - pipeline rank] w/ and w/o embeddings: |
|
0: [000-000] 0.0141B / 0.0024B |
|
0: [before the start of training step] datetime: 2023-04-27 16:08:40 |
|
0: [2023-04-27 16:08:41,632] [INFO] [checkpointing.py:553:forward] Activation Checkpointing Information |
|
0: [2023-04-27 16:08:41,632] [INFO] [checkpointing.py:554:forward] ----Partition Activations False, CPU CHECKPOINTING False |
|
0: [2023-04-27 16:08:41,632] [INFO] [checkpointing.py:557:forward] ----contiguous Memory Checkpointing False with None total layers |
|
0: [2023-04-27 16:08:41,632] [INFO] [checkpointing.py:560:forward] ----Synchronization False |
|
0: [2023-04-27 16:08:41,632] [INFO] [checkpointing.py:561:forward] ----Profiling time in checkpointing False |
|
0: [Rank 0] (after 10 iterations) memory (MB) | allocated: 12710.28759765625 | max allocated: 31761.787109375 | reserved: 39838.0 | max reserved: 39838.0 |
|
0: iteration 10/ 1525 | consumed samples: 2560 | consumed tokens: 5242880 | elapsed time per iteration (s): 1.16 | learning rate: 1.311E-04 | global batch size: 256 | lm loss: 1.068973E+01 | grad norm: 1.248 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 219.990 | TFLOPs: 6.55 | |
|
0: iteration 20/ 1525 | consumed samples: 5120 | consumed tokens: 10485760 | elapsed time per iteration (s): 0.47 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 1.020747E+01 | grad norm: 1.249 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 542.375 | TFLOPs: 16.14 | |
|
0: iteration 30/ 1525 | consumed samples: 7680 | consumed tokens: 15728640 | elapsed time per iteration (s): 0.47 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 9.654769E+00 | grad norm: 1.224 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 542.533 | TFLOPs: 16.14 | |
|
0: iteration 40/ 1525 | consumed samples: 10240 | consumed tokens: 20971520 | elapsed time per iteration (s): 0.47 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 9.129231E+00 | grad norm: 1.222 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 540.833 | TFLOPs: 16.09 | |
|
0: iteration 50/ 1525 | consumed samples: 12800 | consumed tokens: 26214400 | elapsed time per iteration (s): 0.47 | learning rate: 1.998E-04 | global batch size: 256 | lm loss: 8.691612E+00 | grad norm: 1.191 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 542.597 | TFLOPs: 16.15 | |
|
0: iteration 60/ 1525 | consumed samples: 15360 | consumed tokens: 31457280 | elapsed time per iteration (s): 0.47 | learning rate: 1.996E-04 | global batch size: 256 | lm loss: 8.330186E+00 | grad norm: 1.147 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 542.053 | TFLOPs: 16.13 | |
|
0: iteration 70/ 1525 | consumed samples: 17920 | consumed tokens: 36700160 | elapsed time per iteration (s): 0.47 | learning rate: 1.994E-04 | global batch size: 256 | lm loss: 8.030439E+00 | grad norm: 1.029 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 541.796 | TFLOPs: 16.12 | |
|
0: iteration 80/ 1525 | consumed samples: 20480 | consumed tokens: 41943040 | elapsed time per iteration (s): 0.47 | learning rate: 1.992E-04 | global batch size: 256 | lm loss: 7.807533E+00 | grad norm: 0.858 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 542.189 | TFLOPs: 16.13 | |
|
0: iteration 90/ 1525 | consumed samples: 23040 | consumed tokens: 47185920 | elapsed time per iteration (s): 0.47 | learning rate: 1.989E-04 | global batch size: 256 | lm loss: 7.630698E+00 | grad norm: 0.894 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 542.137 | TFLOPs: 16.13 | |
|
0: iteration 100/ 1525 | consumed samples: 25600 | consumed tokens: 52428800 | elapsed time per iteration (s): 0.47 | learning rate: 1.986E-04 | global batch size: 256 | lm loss: 7.485824E+00 | grad norm: 0.492 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 541.894 | TFLOPs: 16.12 | |
|
0: iteration 110/ 1525 | consumed samples: 28160 | consumed tokens: 57671680 | elapsed time per iteration (s): 0.47 | learning rate: 1.983E-04 | global batch size: 256 | lm loss: 7.377650E+00 | grad norm: 0.568 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 542.066 | TFLOPs: 16.13 | |
|
0: iteration 120/ 1525 | consumed samples: 30720 | consumed tokens: 62914560 | elapsed time per iteration (s): 0.47 | learning rate: 1.979E-04 | global batch size: 256 | lm loss: 7.290695E+00 | grad norm: 0.559 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 541.200 | TFLOPs: 16.10 | |
|
0: iteration 130/ 1525 | consumed samples: 33280 | consumed tokens: 68157440 | elapsed time per iteration (s): 0.47 | learning rate: 1.974E-04 | global batch size: 256 | lm loss: 7.188296E+00 | grad norm: 0.830 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 540.944 | TFLOPs: 16.10 | |
|
0: iteration 140/ 1525 | consumed samples: 35840 | consumed tokens: 73400320 | elapsed time per iteration (s): 0.47 | learning rate: 1.970E-04 | global batch size: 256 | lm loss: 7.126910E+00 | grad norm: 0.509 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 540.730 | TFLOPs: 16.09 | |
|
0: iteration 150/ 1525 | consumed samples: 38400 | consumed tokens: 78643200 | elapsed time per iteration (s): 0.47 | learning rate: 1.965E-04 | global batch size: 256 | lm loss: 7.056492E+00 | grad norm: 0.593 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 540.638 | TFLOPs: 16.09 | |
|
0: iteration 160/ 1525 | consumed samples: 40960 | consumed tokens: 83886080 | elapsed time per iteration (s): 0.47 | learning rate: 1.960E-04 | global batch size: 256 | lm loss: 6.992710E+00 | grad norm: 0.253 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 540.324 | TFLOPs: 16.08 | |
|
0: iteration 170/ 1525 | consumed samples: 43520 | consumed tokens: 89128960 | elapsed time per iteration (s): 0.47 | learning rate: 1.954E-04 | global batch size: 256 | lm loss: 6.927315E+00 | grad norm: 0.745 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 540.281 | TFLOPs: 16.08 | |
|
0: iteration 180/ 1525 | consumed samples: 46080 | consumed tokens: 94371840 | elapsed time per iteration (s): 0.47 | learning rate: 1.948E-04 | global batch size: 256 | lm loss: 6.889413E+00 | grad norm: 0.317 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.987 | TFLOPs: 16.07 | |
|
0: iteration 190/ 1525 | consumed samples: 48640 | consumed tokens: 99614720 | elapsed time per iteration (s): 0.47 | learning rate: 1.941E-04 | global batch size: 256 | lm loss: 6.835899E+00 | grad norm: 0.392 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.773 | TFLOPs: 16.06 | |
|
0: iteration 200/ 1525 | consumed samples: 51200 | consumed tokens: 104857600 | elapsed time per iteration (s): 0.47 | learning rate: 1.934E-04 | global batch size: 256 | lm loss: 6.775999E+00 | grad norm: 0.309 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.843 | TFLOPs: 16.06 | |
|
0: iteration 210/ 1525 | consumed samples: 53760 | consumed tokens: 110100480 | elapsed time per iteration (s): 0.47 | learning rate: 1.927E-04 | global batch size: 256 | lm loss: 6.757192E+00 | grad norm: 0.463 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.646 | TFLOPs: 16.06 | |
|
0: iteration 220/ 1525 | consumed samples: 56320 | consumed tokens: 115343360 | elapsed time per iteration (s): 0.47 | learning rate: 1.920E-04 | global batch size: 256 | lm loss: 6.728449E+00 | grad norm: 0.215 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.576 | TFLOPs: 16.06 | |
|
0: iteration 230/ 1525 | consumed samples: 58880 | consumed tokens: 120586240 | elapsed time per iteration (s): 0.47 | learning rate: 1.912E-04 | global batch size: 256 | lm loss: 6.686837E+00 | grad norm: 1.230 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.444 | TFLOPs: 16.05 | |
|
0: iteration 240/ 1525 | consumed samples: 61440 | consumed tokens: 125829120 | elapsed time per iteration (s): 0.47 | learning rate: 1.903E-04 | global batch size: 256 | lm loss: 6.656981E+00 | grad norm: 0.446 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.332 | TFLOPs: 16.05 | |
|
0: iteration 250/ 1525 | consumed samples: 64000 | consumed tokens: 131072000 | elapsed time per iteration (s): 0.47 | learning rate: 1.895E-04 | global batch size: 256 | lm loss: 6.646482E+00 | grad norm: 0.320 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.126 | TFLOPs: 16.04 | |
|
0: iteration 260/ 1525 | consumed samples: 66560 | consumed tokens: 136314880 | elapsed time per iteration (s): 0.47 | learning rate: 1.886E-04 | global batch size: 256 | lm loss: 6.599119E+00 | grad norm: 0.480 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.015 | TFLOPs: 16.04 | |
|
0: iteration 270/ 1525 | consumed samples: 69120 | consumed tokens: 141557760 | elapsed time per iteration (s): 0.47 | learning rate: 1.877E-04 | global batch size: 256 | lm loss: 6.582709E+00 | grad norm: 0.608 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.046 | TFLOPs: 16.04 | |
|
0: iteration 280/ 1525 | consumed samples: 71680 | consumed tokens: 146800640 | elapsed time per iteration (s): 0.48 | learning rate: 1.867E-04 | global batch size: 256 | lm loss: 6.564145E+00 | grad norm: 0.327 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.885 | TFLOPs: 16.04 | |
|
0: iteration 290/ 1525 | consumed samples: 74240 | consumed tokens: 152043520 | elapsed time per iteration (s): 0.48 | learning rate: 1.857E-04 | global batch size: 256 | lm loss: 6.541116E+00 | grad norm: 0.250 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.818 | TFLOPs: 16.03 | |
|
0: iteration 300/ 1525 | consumed samples: 76800 | consumed tokens: 157286400 | elapsed time per iteration (s): 0.48 | learning rate: 1.847E-04 | global batch size: 256 | lm loss: 6.525187E+00 | grad norm: 0.372 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.764 | TFLOPs: 16.03 | |
|
0: iteration 310/ 1525 | consumed samples: 79360 | consumed tokens: 162529280 | elapsed time per iteration (s): 0.48 | learning rate: 1.836E-04 | global batch size: 256 | lm loss: 6.522860E+00 | grad norm: 0.382 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.584 | TFLOPs: 16.03 | |
|
0: iteration 320/ 1525 | consumed samples: 81920 | consumed tokens: 167772160 | elapsed time per iteration (s): 0.48 | learning rate: 1.825E-04 | global batch size: 256 | lm loss: 6.494178E+00 | grad norm: 0.312 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.782 | TFLOPs: 16.03 | |
|
0: iteration 330/ 1525 | consumed samples: 84480 | consumed tokens: 173015040 | elapsed time per iteration (s): 0.48 | learning rate: 1.814E-04 | global batch size: 256 | lm loss: 6.469495E+00 | grad norm: 0.740 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.495 | TFLOPs: 16.02 | |
|
0: iteration 340/ 1525 | consumed samples: 87040 | consumed tokens: 178257920 | elapsed time per iteration (s): 0.48 | learning rate: 1.802E-04 | global batch size: 256 | lm loss: 6.460699E+00 | grad norm: 0.379 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.641 | TFLOPs: 16.03 | |
|
0: iteration 350/ 1525 | consumed samples: 89600 | consumed tokens: 183500800 | elapsed time per iteration (s): 0.48 | learning rate: 1.791E-04 | global batch size: 256 | lm loss: 6.438235E+00 | grad norm: 0.235 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.517 | TFLOPs: 16.02 | |
|
0: iteration 360/ 1525 | consumed samples: 92160 | consumed tokens: 188743680 | elapsed time per iteration (s): 0.48 | learning rate: 1.778E-04 | global batch size: 256 | lm loss: 6.421720E+00 | grad norm: 0.454 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.521 | TFLOPs: 16.02 | |
|
0: iteration 370/ 1525 | consumed samples: 94720 | consumed tokens: 193986560 | elapsed time per iteration (s): 0.48 | learning rate: 1.766E-04 | global batch size: 256 | lm loss: 6.416362E+00 | grad norm: 0.493 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.561 | TFLOPs: 16.03 | |
|
0: iteration 380/ 1525 | consumed samples: 97280 | consumed tokens: 199229440 | elapsed time per iteration (s): 0.48 | learning rate: 1.753E-04 | global batch size: 256 | lm loss: 6.396294E+00 | grad norm: 0.687 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.285 | TFLOPs: 16.02 | |
|
0: iteration 390/ 1525 | consumed samples: 99840 | consumed tokens: 204472320 | elapsed time per iteration (s): 0.48 | learning rate: 1.740E-04 | global batch size: 256 | lm loss: 6.374857E+00 | grad norm: 0.299 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.387 | TFLOPs: 16.02 | |
|
0: iteration 400/ 1525 | consumed samples: 102400 | consumed tokens: 209715200 | elapsed time per iteration (s): 0.48 | learning rate: 1.727E-04 | global batch size: 256 | lm loss: 6.374295E+00 | grad norm: 0.479 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.271 | TFLOPs: 16.02 | |
|
0: iteration 410/ 1525 | consumed samples: 104960 | consumed tokens: 214958080 | elapsed time per iteration (s): 0.48 | learning rate: 1.713E-04 | global batch size: 256 | lm loss: 6.361293E+00 | grad norm: 0.453 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.230 | TFLOPs: 16.02 | |
|
0: iteration 420/ 1525 | consumed samples: 107520 | consumed tokens: 220200960 | elapsed time per iteration (s): 0.48 | learning rate: 1.700E-04 | global batch size: 256 | lm loss: 6.342772E+00 | grad norm: 0.254 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.178 | TFLOPs: 16.01 | |
|
0: iteration 430/ 1525 | consumed samples: 110080 | consumed tokens: 225443840 | elapsed time per iteration (s): 0.48 | learning rate: 1.685E-04 | global batch size: 256 | lm loss: 6.337056E+00 | grad norm: 0.607 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.271 | TFLOPs: 16.02 | |
|
0: iteration 440/ 1525 | consumed samples: 112640 | consumed tokens: 230686720 | elapsed time per iteration (s): 0.48 | learning rate: 1.671E-04 | global batch size: 256 | lm loss: 6.327776E+00 | grad norm: 0.440 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.089 | TFLOPs: 16.01 | |
|
0: iteration 450/ 1525 | consumed samples: 115200 | consumed tokens: 235929600 | elapsed time per iteration (s): 0.48 | learning rate: 1.657E-04 | global batch size: 256 | lm loss: 6.329643E+00 | grad norm: 0.392 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.090 | TFLOPs: 16.01 | |
|
0: iteration 460/ 1525 | consumed samples: 117760 | consumed tokens: 241172480 | elapsed time per iteration (s): 0.48 | learning rate: 1.642E-04 | global batch size: 256 | lm loss: 6.316489E+00 | grad norm: 0.411 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.035 | TFLOPs: 16.01 | |
|
0: iteration 470/ 1525 | consumed samples: 120320 | consumed tokens: 246415360 | elapsed time per iteration (s): 0.48 | learning rate: 1.627E-04 | global batch size: 256 | lm loss: 6.295700E+00 | grad norm: 0.348 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.126 | TFLOPs: 16.01 | |
|
0: iteration 480/ 1525 | consumed samples: 122880 | consumed tokens: 251658240 | elapsed time per iteration (s): 0.48 | learning rate: 1.611E-04 | global batch size: 256 | lm loss: 6.282529E+00 | grad norm: 0.357 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.994 | TFLOPs: 16.01 | |
|
0: iteration 490/ 1525 | consumed samples: 125440 | consumed tokens: 256901120 | elapsed time per iteration (s): 0.48 | learning rate: 1.596E-04 | global batch size: 256 | lm loss: 6.286076E+00 | grad norm: 0.412 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.034 | TFLOPs: 16.01 | |
|
0: iteration 500/ 1525 | consumed samples: 128000 | consumed tokens: 262144000 | elapsed time per iteration (s): 0.48 | learning rate: 1.580E-04 | global batch size: 256 | lm loss: 6.274917E+00 | grad norm: 0.301 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.962 | TFLOPs: 16.01 | |
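
The learning-rate column tracks the configured cosine schedule (2e-4 decaying to 2e-5 over 390,625 samples, after 3,906 warmup samples). A short sketch reproduces the value logged at iteration 500, i.e. 128,000 consumed samples; the exact warmup/cosine form below is the standard Megatron-style schedule and is assumed, not read from the log:

```python
import math

max_lr, min_lr = 2e-4, 2e-5
warmup_samples, decay_samples = 3906, 390_625

def cosine_lr(consumed_samples):
    # Linear warmup, then cosine decay from max_lr down to min_lr
    # over the remaining sample budget.
    if consumed_samples < warmup_samples:
        return max_lr * consumed_samples / warmup_samples
    frac = (consumed_samples - warmup_samples) / (decay_samples - warmup_samples)
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * frac))

print(f"{cosine_lr(128_000):.3E}")  # 1.580E-04, matching the log above
```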
|
0: iteration 510/ 1525 | consumed samples: 130560 | consumed tokens: 267386880 | elapsed time per iteration (s): 0.48 | learning rate: 1.564E-04 | global batch size: 256 | lm loss: 6.275820E+00 | grad norm: 0.339 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.999 | TFLOPs: 16.01 | |
|
0: iteration 520/ 1525 | consumed samples: 133120 | consumed tokens: 272629760 | elapsed time per iteration (s): 0.48 | learning rate: 1.548E-04 | global batch size: 256 | lm loss: 6.259384E+00 | grad norm: 0.295 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.712 | TFLOPs: 16.00 | |
|
0: iteration 530/ 1525 | consumed samples: 135680 | consumed tokens: 277872640 | elapsed time per iteration (s): 0.48 | learning rate: 1.532E-04 | global batch size: 256 | lm loss: 6.235233E+00 | grad norm: 0.362 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.967 | TFLOPs: 16.01 | |
|
0: iteration 540/ 1525 | consumed samples: 138240 | consumed tokens: 283115520 | elapsed time per iteration (s): 0.48 | learning rate: 1.515E-04 | global batch size: 256 | lm loss: 6.237875E+00 | grad norm: 0.799 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.606 | TFLOPs: 16.00 | |
|
0: iteration 550/ 1525 | consumed samples: 140800 | consumed tokens: 288358400 | elapsed time per iteration (s): 0.48 | learning rate: 1.499E-04 | global batch size: 256 | lm loss: 6.221644E+00 | grad norm: 0.551 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.767 | TFLOPs: 16.00 | |
|
0: iteration 560/ 1525 | consumed samples: 143360 | consumed tokens: 293601280 | elapsed time per iteration (s): 0.48 | learning rate: 1.482E-04 | global batch size: 256 | lm loss: 6.227534E+00 | grad norm: 0.342 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.636 | TFLOPs: 16.00 | |
|
0: iteration 570/ 1525 | consumed samples: 145920 | consumed tokens: 298844160 | elapsed time per iteration (s): 0.48 | learning rate: 1.465E-04 | global batch size: 256 | lm loss: 6.215597E+00 | grad norm: 0.730 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.897 | TFLOPs: 16.01 | |
|
0: iteration 580/ 1525 | consumed samples: 148480 | consumed tokens: 304087040 | elapsed time per iteration (s): 0.48 | learning rate: 1.447E-04 | global batch size: 256 | lm loss: 6.220334E+00 | grad norm: 0.846 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.884 | TFLOPs: 16.01 | |
|
0: iteration 590/ 1525 | consumed samples: 151040 | consumed tokens: 309329920 | elapsed time per iteration (s): 0.48 | learning rate: 1.430E-04 | global batch size: 256 | lm loss: 6.195013E+00 | grad norm: 0.493 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.758 | TFLOPs: 16.00 | |
|
0: iteration 600/ 1525 | consumed samples: 153600 | consumed tokens: 314572800 | elapsed time per iteration (s): 0.48 | learning rate: 1.413E-04 | global batch size: 256 | lm loss: 6.177606E+00 | grad norm: 0.949 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.876 | TFLOPs: 16.01 | |
|
0: iteration 610/ 1525 | consumed samples: 156160 | consumed tokens: 319815680 | elapsed time per iteration (s): 0.48 | learning rate: 1.395E-04 | global batch size: 256 | lm loss: 6.174138E+00 | grad norm: 0.522 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.865 | TFLOPs: 16.00 | |
|
0: iteration 620/ 1525 | consumed samples: 158720 | consumed tokens: 325058560 | elapsed time per iteration (s): 0.48 | learning rate: 1.377E-04 | global batch size: 256 | lm loss: 6.163391E+00 | grad norm: 0.604 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.903 | TFLOPs: 16.01 | |
|
0: iteration 630/ 1525 | consumed samples: 161280 | consumed tokens: 330301440 | elapsed time per iteration (s): 0.48 | learning rate: 1.359E-04 | global batch size: 256 | lm loss: 6.139440E+00 | grad norm: 0.704 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.909 | TFLOPs: 16.01 | |
|
0: iteration 640/ 1525 | consumed samples: 163840 | consumed tokens: 335544320 | elapsed time per iteration (s): 0.48 | learning rate: 1.341E-04 | global batch size: 256 | lm loss: 6.150462E+00 | grad norm: 0.856 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.901 | TFLOPs: 16.01 | |
|
0: iteration 650/ 1525 | consumed samples: 166400 | consumed tokens: 340787200 | elapsed time per iteration (s): 0.48 | learning rate: 1.323E-04 | global batch size: 256 | lm loss: 6.134192E+00 | grad norm: 0.706 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.745 | TFLOPs: 16.00 | |
|
0: iteration 660/ 1525 | consumed samples: 168960 | consumed tokens: 346030080 | elapsed time per iteration (s): 0.48 | learning rate: 1.305E-04 | global batch size: 256 | lm loss: 6.122786E+00 | grad norm: 0.489 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.694 | TFLOPs: 16.00 | |
|
0: iteration 670/ 1525 | consumed samples: 171520 | consumed tokens: 351272960 | elapsed time per iteration (s): 0.48 | learning rate: 1.287E-04 | global batch size: 256 | lm loss: 6.121948E+00 | grad norm: 0.522 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.719 | TFLOPs: 16.00 | |
|
0: iteration 680/ 1525 | consumed samples: 174080 | consumed tokens: 356515840 | elapsed time per iteration (s): 0.48 | learning rate: 1.269E-04 | global batch size: 256 | lm loss: 6.122119E+00 | grad norm: 0.571 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.607 | TFLOPs: 16.00 | |
|
0: iteration 690/ 1525 | consumed samples: 176640 | consumed tokens: 361758720 | elapsed time per iteration (s): 0.48 | learning rate: 1.250E-04 | global batch size: 256 | lm loss: 6.089357E+00 | grad norm: 0.567 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.684 | TFLOPs: 16.00 | |
|
0: iteration 700/ 1525 | consumed samples: 179200 | consumed tokens: 367001600 | elapsed time per iteration (s): 0.48 | learning rate: 1.232E-04 | global batch size: 256 | lm loss: 6.108974E+00 | grad norm: 0.573 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.645 | TFLOPs: 16.00 | |
|
0: iteration 710/ 1525 | consumed samples: 181760 | consumed tokens: 372244480 | elapsed time per iteration (s): 0.48 | learning rate: 1.213E-04 | global batch size: 256 | lm loss: 6.096107E+00 | grad norm: 0.707 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.764 | TFLOPs: 16.00 | |
|
0: iteration 720/ 1525 | consumed samples: 184320 | consumed tokens: 377487360 | elapsed time per iteration (s): 0.48 | learning rate: 1.194E-04 | global batch size: 256 | lm loss: 6.076715E+00 | grad norm: 0.416 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.578 | TFLOPs: 16.00 | |
|
0: iteration 730/ 1525 | consumed samples: 186880 | consumed tokens: 382730240 | elapsed time per iteration (s): 0.48 | learning rate: 1.176E-04 | global batch size: 256 | lm loss: 6.066757E+00 | grad norm: 0.508 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.604 | TFLOPs: 16.00 | |
|
0: iteration 740/ 1525 | consumed samples: 189440 | consumed tokens: 387973120 | elapsed time per iteration (s): 0.48 | learning rate: 1.157E-04 | global batch size: 256 | lm loss: 6.056355E+00 | grad norm: 0.502 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.515 | TFLOPs: 15.99 | |
|
0: iteration 750/ 1525 | consumed samples: 192000 | consumed tokens: 393216000 | elapsed time per iteration (s): 0.48 | learning rate: 1.138E-04 | global batch size: 256 | lm loss: 6.071131E+00 | grad norm: 0.525 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.717 | TFLOPs: 16.00 | |
|
0: iteration 760/ 1525 | consumed samples: 194560 | consumed tokens: 398458880 | elapsed time per iteration (s): 0.48 | learning rate: 1.120E-04 | global batch size: 256 | lm loss: 6.053228E+00 | grad norm: 0.676 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.599 | TFLOPs: 16.00 | |
|
0: iteration 770/ 1525 | consumed samples: 197120 | consumed tokens: 403701760 | elapsed time per iteration (s): 0.48 | learning rate: 1.101E-04 | global batch size: 256 | lm loss: 6.033627E+00 | grad norm: 0.677 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.385 | TFLOPs: 15.99 | |
|
0: iteration 780/ 1525 | consumed samples: 199680 | consumed tokens: 408944640 | elapsed time per iteration (s): 0.48 | learning rate: 1.082E-04 | global batch size: 256 | lm loss: 6.057899E+00 | grad norm: 0.680 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.623 | TFLOPs: 16.00 | |
|
0: iteration 790/ 1525 | consumed samples: 202240 | consumed tokens: 414187520 | elapsed time per iteration (s): 0.48 | learning rate: 1.064E-04 | global batch size: 256 | lm loss: 6.041494E+00 | grad norm: 0.536 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.751 | TFLOPs: 16.00 | |
|
0: iteration 800/ 1525 | consumed samples: 204800 | consumed tokens: 419430400 | elapsed time per iteration (s): 0.48 | learning rate: 1.045E-04 | global batch size: 256 | lm loss: 6.026423E+00 | grad norm: 0.652 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.419 | TFLOPs: 15.99 | |
|
0: iteration 810/ 1525 | consumed samples: 207360 | consumed tokens: 424673280 | elapsed time per iteration (s): 0.48 | learning rate: 1.026E-04 | global batch size: 256 | lm loss: 6.024977E+00 | grad norm: 0.672 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.501 | TFLOPs: 15.99 | |
|
0: iteration 820/ 1525 | consumed samples: 209920 | consumed tokens: 429916160 | elapsed time per iteration (s): 0.48 | learning rate: 1.008E-04 | global batch size: 256 | lm loss: 6.010468E+00 | grad norm: 0.698 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.365 | TFLOPs: 15.99 | |
|
0: iteration 830/ 1525 | consumed samples: 212480 | consumed tokens: 435159040 | elapsed time per iteration (s): 0.48 | learning rate: 9.890E-05 | global batch size: 256 | lm loss: 6.012144E+00 | grad norm: 0.657 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.622 | TFLOPs: 16.00 | |
|
0: iteration 840/ 1525 | consumed samples: 215040 | consumed tokens: 440401920 | elapsed time per iteration (s): 0.48 | learning rate: 9.705E-05 | global batch size: 256 | lm loss: 6.010719E+00 | grad norm: 0.601 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.513 | TFLOPs: 15.99 | |
|
0: iteration 850/ 1525 | consumed samples: 217600 | consumed tokens: 445644800 | elapsed time per iteration (s): 0.48 | learning rate: 9.520E-05 | global batch size: 256 | lm loss: 5.999532E+00 | grad norm: 0.570 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.303 | TFLOPs: 15.99 | |
|
0: iteration 860/ 1525 | consumed samples: 220160 | consumed tokens: 450887680 | elapsed time per iteration (s): 0.48 | learning rate: 9.336E-05 | global batch size: 256 | lm loss: 5.997955E+00 | grad norm: 0.517 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.351 | TFLOPs: 15.99 | |
|
0: iteration 870/ 1525 | consumed samples: 222720 | consumed tokens: 456130560 | elapsed time per iteration (s): 0.48 | learning rate: 9.152E-05 | global batch size: 256 | lm loss: 5.990480E+00 | grad norm: 0.469 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.578 | TFLOPs: 16.00 | |
|
0: iteration 880/ 1525 | consumed samples: 225280 | consumed tokens: 461373440 | elapsed time per iteration (s): 0.48 | learning rate: 8.969E-05 | global batch size: 256 | lm loss: 5.985769E+00 | grad norm: 0.572 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.335 | TFLOPs: 15.99 | |
|
0: iteration 890/ 1525 | consumed samples: 227840 | consumed tokens: 466616320 | elapsed time per iteration (s): 0.48 | learning rate: 8.788E-05 | global batch size: 256 | lm loss: 5.988488E+00 | grad norm: 0.658 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.070 | TFLOPs: 15.98 | |
|
0: iteration 900/ 1525 | consumed samples: 230400 | consumed tokens: 471859200 | elapsed time per iteration (s): 0.48 | learning rate: 8.607E-05 | global batch size: 256 | lm loss: 5.973943E+00 | grad norm: 0.883 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.435 | TFLOPs: 15.99 | |
|
0: iteration 910/ 1525 | consumed samples: 232960 | consumed tokens: 477102080 | elapsed time per iteration (s): 0.48 | learning rate: 8.427E-05 | global batch size: 256 | lm loss: 5.978433E+00 | grad norm: 0.668 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.177 | TFLOPs: 15.98 | |
|
0: iteration 920/ 1525 | consumed samples: 235520 | consumed tokens: 482344960 | elapsed time per iteration (s): 0.48 | learning rate: 8.248E-05 | global batch size: 256 | lm loss: 5.968143E+00 | grad norm: 0.532 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 535.619 | TFLOPs: 15.94 | |
|
0: iteration 930/ 1525 | consumed samples: 238080 | consumed tokens: 487587840 | elapsed time per iteration (s): 0.48 | learning rate: 8.070E-05 | global batch size: 256 | lm loss: 5.959432E+00 | grad norm: 0.468 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.279 | TFLOPs: 15.99 | |
|
0: iteration 940/ 1525 | consumed samples: 240640 | consumed tokens: 492830720 | elapsed time per iteration (s): 0.48 | learning rate: 7.894E-05 | global batch size: 256 | lm loss: 5.957594E+00 | grad norm: 0.516 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 536.230 | TFLOPs: 15.96 | |
|
0: iteration 950/ 1525 | consumed samples: 243200 | consumed tokens: 498073600 | elapsed time per iteration (s): 0.48 | learning rate: 7.719E-05 | global batch size: 256 | lm loss: 5.958137E+00 | grad norm: 0.713 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.237 | TFLOPs: 15.99 | |
|
0: iteration 960/ 1525 | consumed samples: 245760 | consumed tokens: 503316480 | elapsed time per iteration (s): 0.48 | learning rate: 7.545E-05 | global batch size: 256 | lm loss: 5.957088E+00 | grad norm: 0.834 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.315 | TFLOPs: 15.99 | |
|
0: iteration 970/ 1525 | consumed samples: 248320 | consumed tokens: 508559360 | elapsed time per iteration (s): 0.48 | learning rate: 7.373E-05 | global batch size: 256 | lm loss: 5.949002E+00 | grad norm: 0.653 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.335 | TFLOPs: 15.99 | |
|
0: iteration 980/ 1525 | consumed samples: 250880 | consumed tokens: 513802240 | elapsed time per iteration (s): 0.48 | learning rate: 7.203E-05 | global batch size: 256 | lm loss: 5.954190E+00 | grad norm: 0.439 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.196 | TFLOPs: 15.99 | |
|
0: iteration 990/ 1525 | consumed samples: 253440 | consumed tokens: 519045120 | elapsed time per iteration (s): 0.48 | learning rate: 7.034E-05 | global batch size: 256 | lm loss: 5.940887E+00 | grad norm: 0.624 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.261 | TFLOPs: 15.99 | |
|
0: iteration 1000/ 1525 | consumed samples: 256000 | consumed tokens: 524288000 | elapsed time per iteration (s): 0.48 | learning rate: 6.867E-05 | global batch size: 256 | lm loss: 5.923114E+00 | grad norm: 0.564 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.318 | TFLOPs: 15.99 | |
|
0: ----------------------------------------------------------------------------------------------- |
|
0: validation loss at iteration 1000 | lm loss value: 5.905182E+00 | lm loss PPL: 3.669340E+02 | |
|
0: ----------------------------------------------------------------------------------------------- |
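
The reported "lm loss PPL" is simply the exponential of the validation lm loss; a one-liner confirms the pairing above:

```python
import math

print(math.exp(5.905182))  # 366.934..., i.e. the logged 3.669340E+02
```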
|
0: saving checkpoint at iteration 1000 to checkpoints_14m800m100m |
|
0: [2023-04-27 16:16:43,062] [INFO] [logging.py:68:log_dist] [Rank 0] [Torch] Checkpoint global_step1000 is begin to save! |
|
0: [2023-04-27 16:16:43,129] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1000/layer_01-model_00-model_states.pt... |
|
0: [2023-04-27 16:16:43,157] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1000/layer_01-model_00-model_states.pt. |
|
0: [2023-04-27 16:16:43,157] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1000/layer_03-model_00-model_states.pt... |
|
0: [2023-04-27 16:16:43,160] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1000/layer_03-model_00-model_states.pt. |
|
0: [2023-04-27 16:16:43,160] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1000/layer_04-model_00-model_states.pt... |
|
0: [2023-04-27 16:16:43,163] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1000/layer_04-model_00-model_states.pt. |
|
0: [2023-04-27 16:16:43,163] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1000/layer_05-model_00-model_states.pt... |
|
0: [2023-04-27 16:16:43,166] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1000/layer_05-model_00-model_states.pt. |
|
0: [2023-04-27 16:16:43,166] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1000/layer_06-model_00-model_states.pt... |
|
0: [2023-04-27 16:16:43,168] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1000/layer_06-model_00-model_states.pt. |
|
0: [2023-04-27 16:16:43,169] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1000/layer_08-model_00-model_states.pt... |
|
0: [2023-04-27 16:16:43,169] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1000/layer_08-model_00-model_states.pt. |
|
0: [2023-04-27 16:16:43,170] [INFO] [logging.py:68:log_dist] [Rank 0] Saving model checkpoint: checkpoints_14m800m100m/global_step1000/mp_rank_00_model_states.pt |
|
0: [2023-04-27 16:16:43,170] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1000/mp_rank_00_model_states.pt... |
|
0: [2023-04-27 16:16:43,173] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1000/mp_rank_00_model_states.pt. |
|
0: [2023-04-27 16:16:43,176] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1000/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt... |
|
0: [2023-04-27 16:16:43,176] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1000/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt... |
|
0: [2023-04-27 16:16:43,176] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1000/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt... |
|
0: [2023-04-27 16:16:43,176] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1000/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt... |
|
0: [2023-04-27 16:16:43,176] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1000/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt... |
|
0: [2023-04-27 16:16:43,176] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1000/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt... |
|
0: [2023-04-27 16:16:43,176] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1000/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt... |
|
0: [2023-04-27 16:16:43,176] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1000/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt... |
|
0: [2023-04-27 16:16:43,203] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1000/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt. |
|
0: [2023-04-27 16:16:43,203] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1000/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt. |
|
0: [2023-04-27 16:16:43,203] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m800m100m/global_step1000/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt |
|
0: [2023-04-27 16:16:43,203] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! |
|
0: [2023-04-27 16:16:43,209] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1000/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt. |
|
0: [2023-04-27 16:16:43,209] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1000/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt. |
|
0: [2023-04-27 16:16:43,209] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m800m100m/global_step1000/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt |
|
0: [2023-04-27 16:16:43,209] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m800m100m/global_step1000/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt |
|
0: [2023-04-27 16:16:43,209] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! |
|
0: [2023-04-27 16:16:43,209] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! |
|
0: [2023-04-27 16:16:43,217] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1000/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt. |
|
0: [2023-04-27 16:16:43,217] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1000/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt. |
|
0: [2023-04-27 16:16:43,217] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1000/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt. |
|
0: [2023-04-27 16:16:43,217] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1000/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt. |
|
0: [2023-04-27 16:16:43,217] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m800m100m/global_step1000/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt |
|
0: [2023-04-27 16:16:43,217] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m800m100m/global_step1000/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt |
|
0: [2023-04-27 16:16:43,217] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m800m100m/global_step1000/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt |
|
0: [2023-04-27 16:16:43,217] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m800m100m/global_step1000/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt |
|
0: [2023-04-27 16:16:43,217] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! |
|
0: [2023-04-27 16:16:43,217] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! |
|
0: [2023-04-27 16:16:43,217] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! |
|
0: [2023-04-27 16:16:43,217] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! |
|
0: [2023-04-27 16:16:43,236] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m800m100m/global_step1000/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt |
|
0: [2023-04-27 16:16:43,236] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! |
|
0: successfully saved checkpoint at iteration 1000 to checkpoints_14m800m100m |
|
0: time (ms) | save-checkpoint: 177.85 |
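
The files written above follow the expected layout for this configuration: one layer_XX-model_00-model_states.pt per pipeline-module layer, a single mp_rank_00_model_states.pt (tensor and pipeline parallelism are both 1), and one bf16_zero_pp_rank_N_mp_rank_00_optim_states.pt optimizer shard per data-parallel rank, eight in all, each committed separately, which is why "Checkpoint global_step1000 is ready now!" repeats eight times. A hedged sketch for inspecting such a checkpoint offline; the directory name is taken from the log, and it assumes the layer files are plain torch state dicts (they typically are):

```python
import os
import torch

ckpt_dir = "checkpoints_14m800m100m/global_step1000"

# List every shard with its on-disk size.
for name in sorted(os.listdir(ckpt_dir)):
    size_mb = os.path.getsize(os.path.join(ckpt_dir, name)) / 2**20
    print(f"{name}: {size_mb:.1f} MiB")

# A layer file is an ordinary dict of tensors and should load on CPU,
# without DeepSpeed in the environment.
layer = torch.load(os.path.join(ckpt_dir, "layer_01-model_00-model_states.pt"),
                   map_location="cpu")
for key, tensor in layer.items():
    print(key, tuple(tensor.shape))
```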
|
0: iteration 1010/ 1525 | consumed samples: 258560 | consumed tokens: 529530880 | elapsed time per iteration (s): 0.51 | learning rate: 6.701E-05 | global batch size: 256 | lm loss: 5.945189E+00 | grad norm: 0.483 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 497.543 | TFLOPs: 14.81 | |
|
0: iteration 1020/ 1525 | consumed samples: 261120 | consumed tokens: 534773760 | elapsed time per iteration (s): 0.48 | learning rate: 6.538E-05 | global batch size: 256 | lm loss: 5.927592E+00 | grad norm: 0.638 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.300 | TFLOPs: 15.99 | |
|
0: iteration 1030/ 1525 | consumed samples: 263680 | consumed tokens: 540016640 | elapsed time per iteration (s): 0.48 | learning rate: 6.376E-05 | global batch size: 256 | lm loss: 5.929830E+00 | grad norm: 0.506 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.190 | TFLOPs: 15.98 | |
|
0: iteration 1040/ 1525 | consumed samples: 266240 | consumed tokens: 545259520 | elapsed time per iteration (s): 0.48 | learning rate: 6.217E-05 | global batch size: 256 | lm loss: 5.928193E+00 | grad norm: 0.509 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.228 | TFLOPs: 15.99 | |
|
0: iteration 1050/ 1525 | consumed samples: 268800 | consumed tokens: 550502400 | elapsed time per iteration (s): 0.48 | learning rate: 6.059E-05 | global batch size: 256 | lm loss: 5.929659E+00 | grad norm: 0.547 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.287 | TFLOPs: 15.99 | |
|
0: iteration 1060/ 1525 | consumed samples: 271360 | consumed tokens: 555745280 | elapsed time per iteration (s): 0.48 | learning rate: 5.904E-05 | global batch size: 256 | lm loss: 5.916668E+00 | grad norm: 0.477 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.224 | TFLOPs: 15.99 | |
|
0: iteration 1070/ 1525 | consumed samples: 273920 | consumed tokens: 560988160 | elapsed time per iteration (s): 0.48 | learning rate: 5.751E-05 | global batch size: 256 | lm loss: 5.918845E+00 | grad norm: 0.393 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.147 | TFLOPs: 15.98 | |
|
0: iteration 1080/ 1525 | consumed samples: 276480 | consumed tokens: 566231040 | elapsed time per iteration (s): 0.48 | learning rate: 5.600E-05 | global batch size: 256 | lm loss: 5.929090E+00 | grad norm: 0.499 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.271 | TFLOPs: 15.99 | |
|
0: iteration 1090/ 1525 | consumed samples: 279040 | consumed tokens: 571473920 | elapsed time per iteration (s): 0.48 | learning rate: 5.451E-05 | global batch size: 256 | lm loss: 5.920293E+00 | grad norm: 0.539 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.340 | TFLOPs: 15.99 | |
|
0: iteration 1100/ 1525 | consumed samples: 281600 | consumed tokens: 576716800 | elapsed time per iteration (s): 0.48 | learning rate: 5.305E-05 | global batch size: 256 | lm loss: 5.915980E+00 | grad norm: 0.378 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.155 | TFLOPs: 15.98 | |
|
0: iteration 1110/ 1525 | consumed samples: 284160 | consumed tokens: 581959680 | elapsed time per iteration (s): 0.48 | learning rate: 5.161E-05 | global batch size: 256 | lm loss: 5.922095E+00 | grad norm: 0.379 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.187 | TFLOPs: 15.98 | |
|
0: iteration 1120/ 1525 | consumed samples: 286720 | consumed tokens: 587202560 | elapsed time per iteration (s): 0.48 | learning rate: 5.020E-05 | global batch size: 256 | lm loss: 5.914421E+00 | grad norm: 0.427 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.093 | TFLOPs: 15.98 | |
|
0: iteration 1130/ 1525 | consumed samples: 289280 | consumed tokens: 592445440 | elapsed time per iteration (s): 0.48 | learning rate: 4.882E-05 | global batch size: 256 | lm loss: 5.898801E+00 | grad norm: 0.419 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.204 | TFLOPs: 15.99 | |
|
0: iteration 1140/ 1525 | consumed samples: 291840 | consumed tokens: 597688320 | elapsed time per iteration (s): 0.48 | learning rate: 4.746E-05 | global batch size: 256 | lm loss: 5.907529E+00 | grad norm: 0.425 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.100 | TFLOPs: 15.98 | |
|
0: iteration 1150/ 1525 | consumed samples: 294400 | consumed tokens: 602931200 | elapsed time per iteration (s): 0.48 | learning rate: 4.613E-05 | global batch size: 256 | lm loss: 5.904577E+00 | grad norm: 0.472 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.125 | TFLOPs: 15.98 | |
|
0: iteration 1160/ 1525 | consumed samples: 296960 | consumed tokens: 608174080 | elapsed time per iteration (s): 0.48 | learning rate: 4.482E-05 | global batch size: 256 | lm loss: 5.901854E+00 | grad norm: 0.460 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.229 | TFLOPs: 15.99 | |
|
0: iteration 1170/ 1525 | consumed samples: 299520 | consumed tokens: 613416960 | elapsed time per iteration (s): 0.48 | learning rate: 4.354E-05 | global batch size: 256 | lm loss: 5.902044E+00 | grad norm: 0.414 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.085 | TFLOPs: 15.98 | |
|
0: iteration 1180/ 1525 | consumed samples: 302080 | consumed tokens: 618659840 | elapsed time per iteration (s): 0.48 | learning rate: 4.230E-05 | global batch size: 256 | lm loss: 5.883190E+00 | grad norm: 0.359 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.199 | TFLOPs: 15.99 | |
|
0: iteration 1190/ 1525 | consumed samples: 304640 | consumed tokens: 623902720 | elapsed time per iteration (s): 0.48 | learning rate: 4.108E-05 | global batch size: 256 | lm loss: 5.892776E+00 | grad norm: 0.364 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.253 | TFLOPs: 15.99 | |
|
0: iteration 1200/ 1525 | consumed samples: 307200 | consumed tokens: 629145600 | elapsed time per iteration (s): 0.48 | learning rate: 3.989E-05 | global batch size: 256 | lm loss: 5.890031E+00 | grad norm: 0.517 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.083 | TFLOPs: 15.98 | |
|
0: iteration 1210/ 1525 | consumed samples: 309760 | consumed tokens: 634388480 | elapsed time per iteration (s): 0.48 | learning rate: 3.873E-05 | global batch size: 256 | lm loss: 5.899716E+00 | grad norm: 0.473 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.010 | TFLOPs: 15.98 | |
|
0: iteration 1220/ 1525 | consumed samples: 312320 | consumed tokens: 639631360 | elapsed time per iteration (s): 0.48 | learning rate: 3.760E-05 | global batch size: 256 | lm loss: 5.893303E+00 | grad norm: 0.430 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.166 | TFLOPs: 15.98 | |
|
0: iteration 1230/ 1525 | consumed samples: 314880 | consumed tokens: 644874240 | elapsed time per iteration (s): 0.48 | learning rate: 3.651E-05 | global batch size: 256 | lm loss: 5.904821E+00 | grad norm: 0.365 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.241 | TFLOPs: 15.99 | |
|
0: iteration 1240/ 1525 | consumed samples: 317440 | consumed tokens: 650117120 | elapsed time per iteration (s): 0.48 | learning rate: 3.544E-05 | global batch size: 256 | lm loss: 5.875194E+00 | grad norm: 0.490 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.080 | TFLOPs: 15.98 | |
|
0: iteration 1250/ 1525 | consumed samples: 320000 | consumed tokens: 655360000 | elapsed time per iteration (s): 0.48 | learning rate: 3.441E-05 | global batch size: 256 | lm loss: 5.882400E+00 | grad norm: 0.397 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.095 | TFLOPs: 15.98 | |
|
0: iteration 1260/ 1525 | consumed samples: 322560 | consumed tokens: 660602880 | elapsed time per iteration (s): 0.48 | learning rate: 3.341E-05 | global batch size: 256 | lm loss: 5.879632E+00 | grad norm: 0.407 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.023 | TFLOPs: 15.98 | |
|
0: iteration 1270/ 1525 | consumed samples: 325120 | consumed tokens: 665845760 | elapsed time per iteration (s): 0.48 | learning rate: 3.245E-05 | global batch size: 256 | lm loss: 5.872038E+00 | grad norm: 0.376 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.202 | TFLOPs: 15.99 | |
|
0: iteration 1280/ 1525 | consumed samples: 327680 | consumed tokens: 671088640 | elapsed time per iteration (s): 0.48 | learning rate: 3.151E-05 | global batch size: 256 | lm loss: 5.858654E+00 | grad norm: 0.295 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.100 | TFLOPs: 15.98 | |
|
0: iteration 1290/ 1525 | consumed samples: 330240 | consumed tokens: 676331520 | elapsed time per iteration (s): 0.48 | learning rate: 3.061E-05 | global batch size: 256 | lm loss: 5.882155E+00 | grad norm: 0.385 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 536.991 | TFLOPs: 15.98 | |
|
0: iteration 1300/ 1525 | consumed samples: 332800 | consumed tokens: 681574400 | elapsed time per iteration (s): 0.48 | learning rate: 2.975E-05 | global batch size: 256 | lm loss: 5.876309E+00 | grad norm: 0.376 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.234 | TFLOPs: 15.99 | |
|
0: iteration 1310/ 1525 | consumed samples: 335360 | consumed tokens: 686817280 | elapsed time per iteration (s): 0.48 | learning rate: 2.892E-05 | global batch size: 256 | lm loss: 5.873679E+00 | grad norm: 0.298 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.140 | TFLOPs: 15.98 | |
|
0: iteration 1320/ 1525 | consumed samples: 337920 | consumed tokens: 692060160 | elapsed time per iteration (s): 0.48 | learning rate: 2.812E-05 | global batch size: 256 | lm loss: 5.881009E+00 | grad norm: 0.344 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 536.902 | TFLOPs: 15.98 | |
|
0: iteration 1330/ 1525 | consumed samples: 340480 | consumed tokens: 697303040 | elapsed time per iteration (s): 0.48 | learning rate: 2.736E-05 | global batch size: 256 | lm loss: 5.878789E+00 | grad norm: 0.287 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.000 | TFLOPs: 15.98 | |
|
0: iteration 1340/ 1525 | consumed samples: 343040 | consumed tokens: 702545920 | elapsed time per iteration (s): 0.48 | learning rate: 2.664E-05 | global batch size: 256 | lm loss: 5.873897E+00 | grad norm: 0.356 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.115 | TFLOPs: 15.98 | |
|
0: iteration 1350/ 1525 | consumed samples: 345600 | consumed tokens: 707788800 | elapsed time per iteration (s): 0.48 | learning rate: 2.595E-05 | global batch size: 256 | lm loss: 5.868530E+00 | grad norm: 0.417 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.178 | TFLOPs: 15.98 | |
|
0: iteration 1360/ 1525 | consumed samples: 348160 | consumed tokens: 713031680 | elapsed time per iteration (s): 0.48 | learning rate: 2.530E-05 | global batch size: 256 | lm loss: 5.868209E+00 | grad norm: 0.393 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 536.927 | TFLOPs: 15.98 | |
|
0: iteration 1370/ 1525 | consumed samples: 350720 | consumed tokens: 718274560 | elapsed time per iteration (s): 0.48 | learning rate: 2.469E-05 | global batch size: 256 | lm loss: 5.873995E+00 | grad norm: 0.387 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.095 | TFLOPs: 15.98 | |
|
0: iteration 1380/ 1525 | consumed samples: 353280 | consumed tokens: 723517440 | elapsed time per iteration (s): 0.48 | learning rate: 2.411E-05 | global batch size: 256 | lm loss: 5.866612E+00 | grad norm: 0.411 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.082 | TFLOPs: 15.98 | |
|
0: iteration 1390/ 1525 | consumed samples: 355840 | consumed tokens: 728760320 | elapsed time per iteration (s): 0.48 | learning rate: 2.357E-05 | global batch size: 256 | lm loss: 5.867944E+00 | grad norm: 0.274 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.217 | TFLOPs: 15.99 | |
|
0: iteration 1400/ 1525 | consumed samples: 358400 | consumed tokens: 734003200 | elapsed time per iteration (s): 0.48 | learning rate: 2.307E-05 | global batch size: 256 | lm loss: 5.871935E+00 | grad norm: 0.377 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.107 | TFLOPs: 15.98 | |
|
0: iteration 1410/ 1525 | consumed samples: 360960 | consumed tokens: 739246080 | elapsed time per iteration (s): 0.48 | learning rate: 2.260E-05 | global batch size: 256 | lm loss: 5.873799E+00 | grad norm: 0.328 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.019 | TFLOPs: 15.98 | |
|
0: iteration 1420/ 1525 | consumed samples: 363520 | consumed tokens: 744488960 | elapsed time per iteration (s): 0.48 | learning rate: 2.217E-05 | global batch size: 256 | lm loss: 5.860938E+00 | grad norm: 0.351 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 536.959 | TFLOPs: 15.98 | |
|
0: iteration 1430/ 1525 | consumed samples: 366080 | consumed tokens: 749731840 | elapsed time per iteration (s): 0.48 | learning rate: 2.178E-05 | global batch size: 256 | lm loss: 5.860669E+00 | grad norm: 0.289 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 536.969 | TFLOPs: 15.98 | |
|
0: iteration 1440/ 1525 | consumed samples: 368640 | consumed tokens: 754974720 | elapsed time per iteration (s): 0.48 | learning rate: 2.143E-05 | global batch size: 256 | lm loss: 5.858232E+00 | grad norm: 0.319 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 536.916 | TFLOPs: 15.98 | |
|
0: iteration 1450/ 1525 | consumed samples: 371200 | consumed tokens: 760217600 | elapsed time per iteration (s): 0.48 | learning rate: 2.112E-05 | global batch size: 256 | lm loss: 5.865171E+00 | grad norm: 0.311 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 536.963 | TFLOPs: 15.98 | |
|
0: iteration 1460/ 1525 | consumed samples: 373760 | consumed tokens: 765460480 | elapsed time per iteration (s): 0.48 | learning rate: 2.084E-05 | global batch size: 256 | lm loss: 5.859318E+00 | grad norm: 0.373 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 536.939 | TFLOPs: 15.98 | |
|
0: iteration 1470/ 1525 | consumed samples: 376320 | consumed tokens: 770703360 | elapsed time per iteration (s): 0.48 | learning rate: 2.061E-05 | global batch size: 256 | lm loss: 5.848261E+00 | grad norm: 0.375 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.004 | TFLOPs: 15.98 | |
|
0: iteration 1480/ 1525 | consumed samples: 378880 | consumed tokens: 775946240 | elapsed time per iteration (s): 0.48 | learning rate: 2.041E-05 | global batch size: 256 | lm loss: 5.856167E+00 | grad norm: 0.344 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.066 | TFLOPs: 15.98 | |
|
0: iteration 1490/ 1525 | consumed samples: 381440 | consumed tokens: 781189120 | elapsed time per iteration (s): 0.48 | learning rate: 2.025E-05 | global batch size: 256 | lm loss: 5.857590E+00 | grad norm: 0.352 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.037 | TFLOPs: 15.98 | |
|
0: iteration 1500/ 1525 | consumed samples: 384000 | consumed tokens: 786432000 | elapsed time per iteration (s): 0.48 | learning rate: 2.013E-05 | global batch size: 256 | lm loss: 5.855051E+00 | grad norm: 0.405 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 536.988 | TFLOPs: 15.98 | |
|
0: iteration 1510/ 1525 | consumed samples: 386560 | consumed tokens: 791674880 | elapsed time per iteration (s): 0.48 | learning rate: 2.005E-05 | global batch size: 256 | lm loss: 5.854537E+00 | grad norm: 0.294 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 536.983 | TFLOPs: 15.98 | |
|
0: iteration 1520/ 1525 | consumed samples: 389120 | consumed tokens: 796917760 | elapsed time per iteration (s): 0.48 | learning rate: 2.001E-05 | global batch size: 256 | lm loss: 5.852026E+00 | grad norm: 0.377 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.067 | TFLOPs: 15.98 | |
|
0: [after training is done] datetime: 2023-04-27 16:20:53 |
|
0: saving checkpoint at iteration 1525 to checkpoints_14m800m100m |
|
0: ----------------------------------------------------------------------------------------------------------------- |
|
0: validation loss at the end of training for val data | lm loss value: 5.799075E+00 | lm loss PPL: 3.299942E+02 | |
|
0: ----------------------------------------------------------------------------------------------------------------- |
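
Final bookkeeping: the step-1000 checkpoint stamp (16:16:43) and the end-of-training stamp (16:20:53) bracket 525 iterations in about 250 s, consistent with the logged 0.48 s/iteration. 1,525 iterations at a global batch of 256 is 390,400 samples, essentially the full --train-samples budget of 390,625 (the budget is not a whole number of global batches), or just under 800M tokens; the run name 14m800m100m plausibly encodes 14M parameters, 800M training tokens, and a 100M-token split, though that reading is an inference, not something the log states. The end-of-training perplexity again checks out as exp(loss):

```python
import math

iters, global_batch, seq_len = 1525, 256, 2048
samples = iters * global_batch
print(samples, samples * seq_len)  # 390400 samples, 799539200 tokens

print(math.exp(5.799075))          # 329.994..., i.e. the logged 3.299942E+02
```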
|
0: [2023-04-27 16:20:53,669] [INFO] [logging.py:68:log_dist] [Rank 0] [Torch] Checkpoint global_step1525 is begin to save! |
|
0: [2023-04-27 16:20:53,671] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1525/layer_01-model_00-model_states.pt... |
|
0: [2023-04-27 16:20:53,696] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1525/layer_01-model_00-model_states.pt. |
|
0: [2023-04-27 16:20:53,696] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1525/layer_03-model_00-model_states.pt... |
|
0: [2023-04-27 16:20:53,699] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1525/layer_03-model_00-model_states.pt. |
|
0: [2023-04-27 16:20:53,699] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1525/layer_04-model_00-model_states.pt... |
|
0: [2023-04-27 16:20:53,702] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1525/layer_04-model_00-model_states.pt. |
|
0: [2023-04-27 16:20:53,702] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1525/layer_05-model_00-model_states.pt... |
|
0: [2023-04-27 16:20:53,705] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1525/layer_05-model_00-model_states.pt. |
|
0: [2023-04-27 16:20:53,705] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1525/layer_06-model_00-model_states.pt... |
|
0: [2023-04-27 16:20:53,708] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1525/layer_06-model_00-model_states.pt. |
|
0: [2023-04-27 16:20:53,708] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1525/layer_08-model_00-model_states.pt... |
|
0: [2023-04-27 16:20:53,709] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1525/layer_08-model_00-model_states.pt. |
|
0: [2023-04-27 16:20:53,709] [INFO] [logging.py:68:log_dist] [Rank 0] Saving model checkpoint: checkpoints_14m800m100m/global_step1525/mp_rank_00_model_states.pt |
|
0: [2023-04-27 16:20:53,709] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1525/mp_rank_00_model_states.pt... |
|
0: [2023-04-27 16:20:53,711] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1525/mp_rank_00_model_states.pt. |
0: [2023-04-27 16:20:53,714] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1525/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt...
0: [2023-04-27 16:20:53,714] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1525/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt...
0: [2023-04-27 16:20:53,714] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1525/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt...
0: [2023-04-27 16:20:53,714] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1525/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt...
0: [2023-04-27 16:20:53,714] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1525/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt...
0: [2023-04-27 16:20:53,714] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1525/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt...
0: [2023-04-27 16:20:53,714] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1525/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt...
0: [2023-04-27 16:20:53,714] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m800m100m/global_step1525/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt...
0: [2023-04-27 16:20:53,741] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1525/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt.
0: [2023-04-27 16:20:53,741] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m800m100m/global_step1525/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt
0: [2023-04-27 16:20:53,741] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1525 is ready now!
0: [2023-04-27 16:20:53,742] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1525/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt.
0: [2023-04-27 16:20:53,742] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1525/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt.
0: [2023-04-27 16:20:53,742] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1525/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt.
0: [2023-04-27 16:20:53,742] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1525/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt.
0: [2023-04-27 16:20:53,743] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m800m100m/global_step1525/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt
0: [2023-04-27 16:20:53,743] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m800m100m/global_step1525/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt
0: [2023-04-27 16:20:53,743] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m800m100m/global_step1525/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt
0: [2023-04-27 16:20:53,743] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m800m100m/global_step1525/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt
0: [2023-04-27 16:20:53,743] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1525 is ready now!
0: [2023-04-27 16:20:53,743] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1525 is ready now!
0: [2023-04-27 16:20:53,743] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1525 is ready now!
0: [2023-04-27 16:20:53,743] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1525 is ready now!
0: [2023-04-27 16:20:53,748] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1525/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt.
0: [2023-04-27 16:20:53,748] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1525/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt.
0: [2023-04-27 16:20:53,748] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m800m100m/global_step1525/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt
0: [2023-04-27 16:20:53,748] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1525 is ready now!
0: [2023-04-27 16:20:53,748] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m800m100m/global_step1525/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt.
0: [2023-04-27 16:20:53,748] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m800m100m/global_step1525/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt
0: [2023-04-27 16:20:53,748] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1525 is ready now!
0: [2023-04-27 16:20:53,770] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m800m100m/global_step1525/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt
0: [2023-04-27 16:20:53,770] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1525 is ready now!
0: successfully saved checkpoint at iteration 1525 to checkpoints_14m800m100m
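
With the "ready now" commit messages in from all ranks, global_step1525 holds three kinds of files: per-layer weight files (layer_*-model_00-model_states.pt), the module-level states for model-parallel rank 0 (mp_rank_00_model_states.pt), and one bf16 optimizer shard per data-parallel rank. The [Torch] checkpoint-engine messages indicate plain torch.save archives, so the directory can be inspected offline with PyTorch alone; a minimal sketch, with glob patterns read off the file names above (an assumption, not a documented layout contract):

```python
from pathlib import Path

import torch

ckpt_dir = Path("checkpoints_14m800m100m/global_step1525")

# The three kinds of files the save above produced, grouped by name pattern.
patterns = [
    "layer_*-model_*-model_states.pt",      # per-layer weights
    "mp_rank_*_model_states.pt",            # module-level states per model-parallel rank
    "bf16_zero_pp_rank_*_optim_states.pt",  # optimizer shards per data-parallel rank
]

for pattern in patterns:
    for path in sorted(ckpt_dir.glob(pattern)):
        # map_location="cpu" so inspection needs no GPU
        state = torch.load(path, map_location="cpu")
        summary = list(state)[:3] if isinstance(state, dict) else type(state).__name__
        print(path.name, "->", summary)
```

DeepSpeed also records the tag (here global_step1525) in a plain-text latest file at the checkpoint root, which is how a later run resuming from checkpoints_14m800m100m locates this step.
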
END 3423814: Thu 27 Apr 2023 04:21:03 PM EEST