lm1-misc / 14m200m100m / 3423740.out
Model parameters: d_model 224 ffw_size 896 kv_size 32 n_heads 7 n_layers 4
Megatron-DeepSpeed/pretrain_gpt.py \
    --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 \
    --num-layers 4 --hidden-size 224 --num-attention-heads 7 --kv-channels 32 --ffn-hidden-size 896 \
    --seq-length 2048 --max-position-embeddings 2048 \
    --micro-batch-size 32 --global-batch-size 256 --train-samples 97656 \
    --vocab-file gpt2/vocab.json --merge-file gpt2/merges.txt \
    --loss-scale 12 --clip-grad 1.0 --kill-switch-path kill-switch-14m200m100m --bf16 \
    --checkpoint-activations \
    --optimizer adam --adam-beta1 0.9 --adam-beta2 0.999 --adam-eps 1e-8 \
    --lr 2e-4 --min-lr 2e-5 --lr-decay-style cosine --lr-decay-samples 97656 --lr-warmup-samples 977 \
    --clip-grad 1.0 --weight-decay 1e-1 \
    --log-interval 10 --save-interval 1000 --eval-interval 1000 --eval-iters 1 \
    --tensorboard-dir tensorboard_14m200m100m --tensorboard-queue-size 5 \
    --log-timers-to-tensorboard --log-batch-size-to-tensorboard --log-validation-ppl-to-tensorboard \
    --save checkpoints_14m200m100m --load checkpoints_14m200m100m \
    --train-weighted-split-paths-path train100m.txt --valid-weighted-split-paths-path val.txt \
    --data-impl mmap \
    --deepspeed --deepspeed_config ds_configs/3423740.json --zero-stage 0
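The run name 14m200m100m appears to encode ~14M parameters, ~200M training tokens, and the 100M-token C4 subset referenced by train100m.txt; the token budget follows directly from the flags above. A quick check in Python:

# Back-of-envelope token budget implied by --train-samples and --seq-length.
print(97_656 * 2_048)  # 199_999_488, i.e. the "200m" in the run name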
START 3423740: Thu 27 Apr 2023 03:38:05 PM EEST
0: ======================= ROCm System Management Interface =======================
0: ================================= Concise Info =================================
0: GPU Temp AvgPwr SCLK MCLK Fan Perf PwrCap VRAM% GPU%
0: 0 47.0c 100.0W 800Mhz 1600Mhz 0% auto 560.0W 0% 0%
0: 1 45.0c N/A 800Mhz 1600Mhz 0% auto 0.0W 0% 0%
0: 2 42.0c 85.0W 800Mhz 1600Mhz 0% auto 560.0W 0% 0%
0: 3 44.0c N/A 800Mhz 1600Mhz 0% auto 0.0W 0% 0%
0: 4 49.0c 85.0W 800Mhz 1600Mhz 0% auto 560.0W 0% 0%
0: 5 49.0c N/A 800Mhz 1600Mhz 0% auto 0.0W 0% 0%
0: 6 42.0c 87.0W 800Mhz 1600Mhz 0% auto 560.0W 0% 0%
0: 7 44.0c N/A 800Mhz 1600Mhz 0% auto 0.0W 0% 0%
0: ================================================================================
0: ============================= End of ROCm SMI Log ==============================
0: Launching on nid005327 (0/1), master nid005327 port 9999, GPUs 8, CUDA: True
0: using world size: 8, data-parallel-size: 8, tensor-model-parallel size: 1, pipeline-model-parallel size: 1
0: accumulate and all-reduce gradients in fp32 for bfloat16 data type.
0: using torch.bfloat16 for parameters ...
0: ------------------------ arguments ------------------------
0: abort_on_unmet_fused_kernel_constraints ......... False
0: accumulate_allreduce_grads_in_fp32 .............. True
0: adam_beta1 ...................................... 0.9
0: adam_beta2 ...................................... 0.999
0: adam_eps ........................................ 1e-08
0: adlr_autoresume ................................. False
0: adlr_autoresume_interval ........................ 1000
0: apply_query_key_layer_scaling ................... True
0: apply_residual_connection_post_layernorm ........ False
0: attention_dropout ............................... 0.1
0: attention_softmax_in_fp32 ....................... False
0: bert_binary_head ................................ True
0: bert_load ....................................... None
0: bf16 ............................................ True
0: bias_dropout_fusion ............................. True
0: bias_gelu_fusion ................................ True
0: biencoder_projection_dim ........................ 0
0: biencoder_shared_query_context_model ............ False
0: block_data_path ................................. None
0: checkpoint_activations .......................... True
0: checkpoint_in_cpu ............................... False
0: checkpoint_num_layers ........................... 1
0: clip_grad ....................................... 1.0
0: codecarbon_dir .................................. None
0: consumed_train_samples .......................... 0
0: consumed_train_tokens ........................... 0
0: consumed_valid_samples .......................... 0
0: contigious_checkpointing ........................ False
0: cpu_optimizer ................................... False
0: cpu_torch_adam .................................. False
0: curriculum_learning ............................. False
0: data_impl ....................................... mmap
0: data_parallel_size .............................. 8
0: data_path ....................................... None
0: dataloader_type ................................. single
0: DDP_impl ........................................ local
0: decoder_seq_length .............................. None
0: deepscale ....................................... False
0: deepscale_config ................................ None
0: deepspeed ....................................... True
0: deepspeed_activation_checkpointing .............. False
0: deepspeed_config ................................ ds_configs/3423740.json
0: deepspeed_mpi ................................... False
0: distribute_checkpointed_activations ............. False
0: distributed_backend ............................. nccl
0: embed_layernorm ................................. False
0: embedding_path .................................. None
0: encoder_seq_length .............................. 2048
0: eod_mask_loss ................................... False
0: eval_interval ................................... 1000
0: eval_iters ...................................... 1
0: eval_only ....................................... None
0: evidence_data_path .............................. None
0: exit_duration_in_mins ........................... None
0: exit_interval ................................... None
0: ffn_hidden_size ................................. 896
0: finetune ........................................ False
0: fp16 ............................................ False
0: fp16_lm_cross_entropy ........................... False
0: fp32_residual_connection ........................ False
0: gigaflos_no_embeds .............................. 0
0: global_batch_size ............................... 256
0: glu_activation .................................. None
0: hidden_dropout .................................. 0.1
0: hidden_size ..................................... 224
0: hysteresis ...................................... 2
0: ict_head_size ................................... None
0: ict_load ........................................ None
0: img_dim ......................................... 224
0: indexer_batch_size .............................. 128
0: indexer_log_interval ............................ 1000
0: inference ....................................... False
0: init_method_std ................................. 0.02
0: init_method_xavier_uniform ...................... False
0: initial_loss_scale .............................. 4294967296
0: kill_switch_path ................................ kill-switch-14m200m100m
0: kv_channels ..................................... 32
0: layer_norm_fusion ............................... True
0: layernorm_epsilon ............................... 1e-05
0: lazy_mpu_init ................................... None
0: load ............................................ checkpoints_14m200m100m
0: local_rank ...................................... None
0: log_batch_size_to_tensorboard ................... True
0: log_interval .................................... 10
0: log_learning_rate_to_tensorboard ................ True
0: log_level ....................................... None
0: log_level_replica ............................... None
0: log_loss_scale_to_tensorboard ................... True
0: log_num_zeros_in_grad ........................... False
0: log_params_norm ................................. False
0: log_path ........................................ None
0: log_timers_to_tensorboard ....................... True
0: log_validation_ppl_to_tensorboard ............... True
0: loss_on_targets_only ............................ False
0: loss_scale ...................................... 12.0
0: loss_scale_window ............................... 1000
0: lr .............................................. 0.0002
0: lr_decay_iters .................................. None
0: lr_decay_samples ................................ 97656
0: lr_decay_style .................................. cosine
0: lr_decay_tokens ................................. None
0: lr_warmup_fraction .............................. None
0: lr_warmup_iters ................................. 0
0: lr_warmup_samples ............................... 977
0: make_vocab_size_divisible_by .................... 128
0: mask_prob ....................................... 0.15
0: masked_softmax_fusion ........................... True
0: max_position_embeddings ......................... 2048
0: mean_noise_span_length .......................... None
0: memory_centric_tiled_linear ..................... False
0: merge_file ...................................... gpt2/merges.txt
0: micro_batch_size ................................ 32
0: min_loss_scale .................................. 1.0
0: min_lr .......................................... 2e-05
0: mmap_warmup ..................................... False
0: no_load_optim ................................... None
0: no_load_rng ..................................... None
0: no_save_optim ................................... None
0: no_save_rng ..................................... None
0: noise_density ................................... None
0: num_attention_heads ............................. 7
0: num_channels .................................... 3
0: num_classes ..................................... 1000
0: num_layers ...................................... 4
0: num_layers_per_virtual_pipeline_stage ........... None
0: num_workers ..................................... 2
0: onnx_safe ....................................... None
0: openai_gelu ..................................... False
0: optimizer ....................................... adam
0: optimizer_fusion ................................ True
0: override_lr_scheduler ........................... False
0: pad_vocab_size_to ............................... None
0: params_dtype .................................... torch.bfloat16
0: partition_activations ........................... False
0: patch_dim ....................................... 16
0: pipeline_model_parallel_size .................... 1
0: position_embedding_type ......................... PositionEmbeddingType.absolute
0: pp_partition_method ............................. None
0: profile_backward ................................ False
0: query_in_block_prob ............................. 0.1
0: rampup_batch_size ............................... None
0: rank ............................................ 0
0: remote_device ................................... none
0: reset_attention_mask ............................ False
0: reset_position_ids .............................. False
0: reset_progress .................................. None
0: retriever_report_topk_accuracies ................ []
0: retriever_score_scaling ......................... False
0: retriever_seq_length ............................ 256
0: reweight_loss_based_on_position_frequency ....... False
0: sample_rate ..................................... 1.0
0: save ............................................ checkpoints_14m200m100m
0: save_interval ................................... 1000
0: scatter_gather_tensors_in_pipeline .............. True
0: scattered_embeddings ............................ False
0: seed ............................................ 1234
0: seq_length ...................................... 2048
0: sgd_momentum .................................... 0.9
0: short_seq_prob .................................. 0.1
0: skip_train_iteration_range ...................... None
0: split ........................................... None
0: split_transformers .............................. False
0: sync_tp_duplicated_parameters ................... False
0: synchronize_each_layer .......................... False
0: tensor_model_parallel_size ...................... 1
0: tensorboard_dir ................................. tensorboard_14m200m100m
0: tensorboard_log_interval ........................ 1
0: tensorboard_queue_size .......................... 5
0: test_weighted_split_paths ....................... None
0: test_weighted_split_paths_path .................. None
0: tile_factor ..................................... 1
0: titles_data_path ................................ None
0: tokenizer_name_or_path .......................... None
0: tokenizer_type .................................. GPT2BPETokenizer
0: train_iters ..................................... None
0: train_samples ................................... 97656
0: train_tokens .................................... None
0: train_weighted_split_names ...................... ['train']
0: train_weighted_split_paths ...................... [['/scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_100M_text_document']]
0: train_weighted_split_paths_path ................. None
0: train_weighted_split_splits ..................... [['0:1']]
0: train_weighted_split_weights .................... [['1.0']]
0: universal_checkpoint ............................ False
0: use_bnb_optimizer ............................... False
0: use_checkpoint_lr_scheduler ..................... False
0: use_contiguous_buffers_in_ddp ................... True
0: use_cpu_initialization .......................... None
0: use_one_sent_docs ............................... False
0: use_pin_memory .................................. False
0: valid_num_workers ............................... 2
0: valid_weighted_split_names ...................... ['validation']
0: valid_weighted_split_paths ...................... [['/scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document']]
0: valid_weighted_split_paths_path ................. None
0: valid_weighted_split_splits ..................... [['0:1']]
0: valid_weighted_split_weights .................... [['1.0']]
0: virtual_pipeline_model_parallel_size ............ None
0: vocab_extra_ids ................................. 0
0: vocab_file ...................................... gpt2/vocab.json
0: weight_decay .................................... 0.1
0: world_size ...................................... 8
0: zero_allgather_bucket_size ...................... 0.0
0: zero_contigious_gradients ....................... False
0: zero_reduce_bucket_size ......................... 0.0
0: zero_reduce_scatter ............................. False
0: zero_stage ...................................... 0
0: -------------------- end of arguments ---------------------
0: setting number of micro-batches to constant 1
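A single constant micro-batch follows from the parallelism layout: with tensor- and pipeline-parallel sizes of 1, all 8 GPUs are data-parallel replicas. A minimal sketch of the arithmetic (not Megatron's actual code path):

# global batch / (micro batch * data-parallel replicas) -> micro-batches per step
global_bs, micro_bs, dp = 256, 32, 8
print(global_bs // (micro_bs * dp))  # 1, so no gradient accumulation is needed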
0: > building GPT2BPETokenizer tokenizer ...
0: > padded vocab (size: 50257) with 47 dummy tokens (new size: 50304)
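The 47 dummy tokens round the GPT-2 vocabulary up to a multiple of make_vocab_size_divisible_by (128) times the tensor-parallel size (1), presumably so embedding shards stay aligned; a sketch of that rounding:

# Round the vocab up to the next multiple of 128 * TP size.
vocab_size, multiple = 50_257, 128 * 1
padded = -(-vocab_size // multiple) * multiple  # ceiling division
print(padded, padded - vocab_size)              # 50304 47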
0: DeepSpeed general environment info:
0: torch install path ............... ['/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/lib/python3.9/site-packages/torch']
0: torch version .................... 1.13.0+rocm5.2
0: torch cuda version ............... None
0: torch hip version ................ 5.2.21151-afdc89f8
0: nvcc version ..................... None
0: deepspeed install path ........... ['/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/lib/python3.9/site-packages/deepspeed']
0: deepspeed info ................... 0.7.5, unknown, unknown
0: deepspeed wheel compiled w. ...... torch 1.13, hip 5.1
0: > setting tensorboard ...
0: **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
0: > initializing torch distributed ...
0: [2023-04-27 15:39:30,746] [INFO] [comm.py:633:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
0: > initializing tensor model parallel with size 1
0: > initializing pipeline model parallel with size 1
0: > setting random seeds to 1234 ...
0: > initializing model parallel cuda seeds on global rank 0, model parallel rank 0, and data parallel rank 0 with model parallel seed: 3952 and data parallel seed: 1234
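The model-parallel seed of 3952 is consistent with Megatron deriving it from the base seed plus a fixed 2718 offset plus the tensor-parallel rank; treat the offset as an assumption about this particular fork:

# Assumed reconstruction of the seed derivation above.
base_seed, tp_rank = 1234, 0
print(base_seed + 2718 + tp_rank)  # 3952, the logged model-parallel seed; data-parallel keeps 1234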
0: > compiling dataset index builder ...
0: make: Entering directory '/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/data'
0: make: Nothing to be done for 'default'.
0: make: Leaving directory '/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/data'
0: >>> done with dataset index builder. Compilation time: 0.091 seconds
0: > compiling and loading fused kernels ...
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.cpp -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.cpp [skipped, already hipified]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.h [skipped, already hipified]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_cuda.cu -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.hip [skipped, already hipified]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.h [skipped, already hipified]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.h [skipped, already hipified]
0: Total number of unsupported CUDA function calls: 0
0:
0:
0: Total number of replaced kernel launches: 87
0: ninja: no work to do.
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.cpp -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.cpp [skipped, already hipified]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_cuda.cu -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.hip [skipped, already hipified]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.h [skipped, already hipified]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.h [skipped, already hipified]
0: Total number of unsupported CUDA function calls: 0
0:
0:
0: Total number of replaced kernel launches: 63
0: [1/1] c++ scaled_masked_softmax_hip.cuda.o scaled_masked_softmax_hip.o -shared -L/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/lib/python3.9/site-packages/torch/lib -lc10 -lc10_hip -ltorch_cpu -ltorch_hip -ltorch -ltorch_python -L/opt/rocm/lib -lamdhip64 -o scaled_masked_softmax_cuda.so
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_cuda.cpp -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_cuda.cpp [skipped, no changes]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_cuda_kernel.cu -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_hip_kernel.hip [skipped, already hipified]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.h [skipped, already hipified]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.h [skipped, already hipified]
0: Total number of unsupported CUDA function calls: 0
0:
0:
0: Total number of replaced kernel launches: 67
0: ninja: no work to do.
0: >>> done with compiling and loading fused kernels. Compilation time: 11.037 seconds
0: time to initialize megatron (seconds): 14.221
0: [after megatron is initialized] datetime: 2023-04-27 15:39:42
0: building GPT model ...
0: [2023-04-27 15:39:42,348] [INFO] [utils.py:827:see_memory_usage] Before Building Model
0: [2023-04-27 15:39:42,349] [INFO] [utils.py:828:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
0: [2023-04-27 15:39:42,349] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 40.78 GB, percent = 8.1%
0: SEED_LAYERS=False BASE_SEED=1234 SEED_FN=None
0: Using topology: {ProcessCoord(pipe=0, data=0, model=0): 0, ProcessCoord(pipe=0, data=1, model=0): 1, ProcessCoord(pipe=0, data=2, model=0): 2, ProcessCoord(pipe=0, data=3, model=0): 3, ProcessCoord(pipe=0, data=4, model=0): 4, ProcessCoord(pipe=0, data=5, model=0): 5, ProcessCoord(pipe=0, data=6, model=0): 6, ProcessCoord(pipe=0, data=7, model=0): 7}
0: [2023-04-27 15:39:42,570] [INFO] [module.py:366:_partition_layers] Partitioning pipeline stages with method type:transformer
0: stage=0 layers=11
0: 0: _to_float16
0: 1: EmbeddingPipe
0: 2: <lambda>
0: 3: ParallelTransformerLayerPipe
0: 4: ParallelTransformerLayerPipe
0: 5: ParallelTransformerLayerPipe
0: 6: ParallelTransformerLayerPipe
0: 7: undo
0: 8: MixedFusedLayerNorm
0: 9: EmbeddingPipe
0: 10: float16_to_fp32
0: loss: CrossEntropy
0: [2023-04-27 15:39:42,760] [INFO] [utils.py:827:see_memory_usage] After Building Model
0: [2023-04-27 15:39:42,760] [INFO] [utils.py:828:see_memory_usage] MA 0.03 GB Max_MA 0.03 GB CA 0.05 GB Max_CA 0 GB
0: [2023-04-27 15:39:42,760] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 40.8 GB, percent = 8.1%
0: setting training iterations to 381
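The 381 iterations are just the sample budget over the global batch, floored; a quick check:

# --train-samples over --global-batch-size, floored.
print(97_656 // 256)  # 381 (the trailing 120 samples never fill a 382nd batch)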
0: > learning rate decay style: cosine
0: DeepSpeed is enabled.
0: [2023-04-27 15:39:42,761] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed info: version=0.7.5, git-hash=unknown, git-branch=unknown
0: [2023-04-27 15:39:47,077] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
0: [2023-04-27 15:39:47,077] [INFO] [logging.py:68:log_dist] [Rank 0] Removing param_group that has no 'params' in the client Optimizer
0: [2023-04-27 15:39:47,077] [INFO] [logging.py:68:log_dist] [Rank 0] Using client Optimizer as basic optimizer
0: [2023-04-27 15:39:47,078] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Basic Optimizer = FusedAdam
0: [2023-04-27 15:39:47,078] [INFO] [logging.py:68:log_dist] [Rank 0] Creating BF16 optimizer
0: [2023-04-27 15:39:47,195] [INFO] [utils.py:827:see_memory_usage] begin bf16_optimizer
0: [2023-04-27 15:39:47,195] [INFO] [utils.py:828:see_memory_usage] MA 0.03 GB Max_MA 0.03 GB CA 0.05 GB Max_CA 0 GB
0: [2023-04-27 15:39:47,196] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 42.66 GB, percent = 8.5%
0: ninja: no work to do.
0: Time to load utils op: 0.4214780330657959 seconds
0: Time to load utils op: 0.4215986728668213 seconds
0: Time to load utils op: 0.4215419292449951 seconds
0: Time to load utils op: 0.4230659008026123 seconds
0: Time to load utils op: 0.42234373092651367 seconds
0: Time to load utils op: 0.423114538192749 seconds
0: Time to load utils op: 0.42148780822753906 seconds
0: Time to load utils op: 0.31270337104797363 seconds
0: [2023-04-27 15:39:47,613] [INFO] [utils.py:827:see_memory_usage] before initializing group 0
0: [2023-04-27 15:39:47,613] [INFO] [utils.py:828:see_memory_usage] MA 0.03 GB Max_MA 0.03 GB CA 0.05 GB Max_CA 0 GB
0: [2023-04-27 15:39:47,613] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 42.37 GB, percent = 8.4%
0: Time to load utils op: 0.00362396240234375 seconds
0: Time to load utils op: 0.0037581920623779297 seconds
0: Time to load utils op: 0.0036110877990722656 seconds
0: Time to load utils op: 0.0037806034088134766 seconds
0: Time to load utils op: 0.0036420822143554688 seconds
0: Time to load utils op: 0.003623485565185547 seconds
0: Time to load utils op: 0.003846406936645508 seconds
0: [2023-04-27 15:39:47,895] [INFO] [utils.py:827:see_memory_usage] after initializing group 0
0: [2023-04-27 15:39:47,895] [INFO] [utils.py:828:see_memory_usage] MA 0.08 GB Max_MA 0.08 GB CA 0.12 GB Max_CA 0 GB
0: [2023-04-27 15:39:47,896] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 42.15 GB, percent = 8.4%
0: [2023-04-27 15:39:48,004] [INFO] [utils.py:827:see_memory_usage] before initializing group 1
0: [2023-04-27 15:39:48,005] [INFO] [utils.py:828:see_memory_usage] MA 0.08 GB Max_MA 0.08 GB CA 0.12 GB Max_CA 0 GB
0: [2023-04-27 15:39:48,005] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 42.01 GB, percent = 8.3%
0: [2023-04-27 15:39:48,108] [INFO] [utils.py:827:see_memory_usage] after initializing group 1
0: [2023-04-27 15:39:48,109] [INFO] [utils.py:828:see_memory_usage] MA 0.09 GB Max_MA 0.09 GB CA 0.12 GB Max_CA 0 GB
0: [2023-04-27 15:39:48,109] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 41.82 GB, percent = 8.3%
0: [2023-04-27 15:39:48,212] [INFO] [utils.py:827:see_memory_usage] before initializing group 2
0: [2023-04-27 15:39:48,213] [INFO] [utils.py:828:see_memory_usage] MA 0.09 GB Max_MA 0.09 GB CA 0.12 GB Max_CA 0 GB
0: [2023-04-27 15:39:48,213] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 41.53 GB, percent = 8.3%
0: [2023-04-27 15:39:48,317] [INFO] [utils.py:827:see_memory_usage] after initializing group 2
0: [2023-04-27 15:39:48,317] [INFO] [utils.py:828:see_memory_usage] MA 0.09 GB Max_MA 0.09 GB CA 0.12 GB Max_CA 0 GB
0: [2023-04-27 15:39:48,317] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 41.3 GB, percent = 8.2%
0: [2023-04-27 15:39:48,420] [INFO] [utils.py:827:see_memory_usage] before initialize_optimizer
0: [2023-04-27 15:39:48,421] [INFO] [utils.py:828:see_memory_usage] MA 0.09 GB Max_MA 0.09 GB CA 0.12 GB Max_CA 0 GB
0: [2023-04-27 15:39:48,421] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 41.03 GB, percent = 8.2%
0: [2023-04-27 15:39:48,529] [INFO] [utils.py:827:see_memory_usage] end initialize_optimizer
0: [2023-04-27 15:39:48,530] [INFO] [utils.py:828:see_memory_usage] MA 0.1 GB Max_MA 0.1 GB CA 0.12 GB Max_CA 0 GB
0: [2023-04-27 15:39:48,530] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 41.03 GB, percent = 8.2%
0: [2023-04-27 15:39:48,631] [INFO] [utils.py:827:see_memory_usage] end bf16_optimizer
0: [2023-04-27 15:39:48,631] [INFO] [utils.py:828:see_memory_usage] MA 0.1 GB Max_MA 0.1 GB CA 0.12 GB Max_CA 0 GB
0: [2023-04-27 15:39:48,631] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 41.03 GB, percent = 8.2%
0: [2023-04-27 15:39:48,632] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Final Optimizer = FusedAdam
0: [2023-04-27 15:39:48,632] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed using client LR scheduler
0: [2023-04-27 15:39:48,632] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed LR Scheduler = <megatron.learning_rates.AnnealingLR object at 0x14dcc5598f10>
0: [2023-04-27 15:39:48,632] [INFO] [logging.py:68:log_dist] [Rank 0] step=0, skipped=0, lr=[0.0, 0.0, 0.0], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)]
0: [2023-04-27 15:39:48,632] [INFO] [config.py:1007:print] DeepSpeedEngine configuration:
0: [2023-04-27 15:39:48,632] [INFO] [config.py:1011:print] activation_checkpointing_config {
0: "partition_activations": false,
0: "contiguous_memory_optimization": false,
0: "cpu_checkpointing": false,
0: "number_checkpoints": null,
0: "synchronize_checkpoint_boundary": false,
0: "profile": false
0: }
0: [2023-04-27 15:39:48,632] [INFO] [config.py:1011:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True}
0: [2023-04-27 15:39:48,632] [INFO] [config.py:1011:print] amp_enabled .................. False
0: [2023-04-27 15:39:48,632] [INFO] [config.py:1011:print] amp_params ................... False
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] autotuning_config ............ {
0: "enabled": false,
0: "start_step": null,
0: "end_step": null,
0: "metric_path": null,
0: "arg_mappings": null,
0: "metric": "throughput",
0: "model_info": null,
0: "results_dir": "/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/autotuning_results",
0: "exps_dir": "/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/autotuning_exps",
0: "overwrite": true,
0: "fast": true,
0: "start_profile_step": 3,
0: "end_profile_step": 5,
0: "tuner_type": "gridsearch",
0: "tuner_early_stopping": 5,
0: "tuner_num_trials": 50,
0: "model_info_path": null,
0: "mp_size": 1,
0: "max_train_batch_size": null,
0: "min_train_batch_size": 1,
0: "max_train_micro_batch_size_per_gpu": 1.024000e+03,
0: "min_train_micro_batch_size_per_gpu": 1,
0: "num_tuning_micro_batch_sizes": 3
0: }
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] bfloat16_enabled ............. True
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] checkpoint_parallel_write_pipeline False
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] checkpoint_tag_validation_enabled True
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] checkpoint_tag_validation_fail False
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] comms_config ................. <deepspeed.comm.config.DeepSpeedCommsConfig object at 0x14dcc5598cd0>
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] communication_data_type ...... None
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}}
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] curriculum_enabled ........... False
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] curriculum_params ............ False
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] dataloader_drop_last ......... False
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] disable_allgather ............ False
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] dump_state ................... False
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] dynamic_loss_scale_args ...... None
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] eigenvalue_enabled ........... False
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] eigenvalue_gas_boundary_resolution 1
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] eigenvalue_layer_name ........ bert.encoder.layer
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] eigenvalue_layer_num ......... 0
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] eigenvalue_max_iter .......... 100
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] eigenvalue_stability ......... 1e-06
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] eigenvalue_tol ............... 0.01
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] eigenvalue_verbose ........... False
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] elasticity_enabled ........... False
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] flops_profiler_config ........ {
0: "enabled": false,
0: "profile_step": 1,
0: "module_depth": -1,
0: "top_modules": 1,
0: "detailed": true,
0: "output_file": null
0: }
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] fp16_auto_cast ............... None
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] fp16_enabled ................. False
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] fp16_master_weights_and_gradients False
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] global_rank .................. 0
0: [2023-04-27 15:39:48,633] [INFO] [config.py:1011:print] gradient_accumulation_steps .. 1
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] gradient_clipping ............ 1.0
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] gradient_predivide_factor .... 1.0
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] initial_dynamic_scale ........ 1
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] load_universal_checkpoint .... False
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] loss_scale ................... 1.0
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] memory_breakdown ............. False
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] monitor_config ............... <deepspeed.monitor.config.DeepSpeedMonitorConfig object at 0x14dcc5598c40>
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] nebula_config ................ {
0: "enabled": false,
0: "persistent_storage_path": null,
0: "persistent_time_interval": 100,
0: "num_of_version_in_retention": 2,
0: "enable_nebula_load": true,
0: "load_path": null
0: }
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] optimizer_legacy_fusion ...... False
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] optimizer_name ............... None
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] optimizer_params ............. None
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0}
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] pld_enabled .................. False
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] pld_params ................... False
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] prescale_gradients ........... False
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] scheduler_name ............... None
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] scheduler_params ............. None
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] sparse_attention ............. None
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] sparse_gradients_enabled ..... False
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] steps_per_print .............. 2000
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] train_batch_size ............. 256
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] train_micro_batch_size_per_gpu 32
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] use_node_local_storage ....... False
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] wall_clock_breakdown ......... False
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] world_size ................... 8
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] zero_allow_untested_optimizer False
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] zero_config .................. stage=0 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=500000000 allgather_partitions=True allgather_bucket_size=500000000 overlap_comm=False load_from_fp32_weights=True elastic_checkpoint=False offload_param=None offload_optimizer=None sub_group_size=1000000000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=50000000 param_persistence_threshold=100000 model_persistence_threshold=9223372036854775807 max_live_parameters=1000000000 max_reuse_distance=1000000000 gather_16bit_weights_on_model_save=False stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] zero_enabled ................. False
0: [2023-04-27 15:39:48,634] [INFO] [config.py:1011:print] zero_optimization_stage ...... 0
0: [2023-04-27 15:39:48,634] [INFO] [config.py:996:print_user_config] json = {
0: "train_micro_batch_size_per_gpu": 32,
0: "train_batch_size": 256,
0: "gradient_clipping": 1.0,
0: "zero_optimization": {
0: "stage": 0
0: },
0: "bf16": {
0: "enabled": true
0: },
0: "steps_per_print": 2.000000e+03,
0: "wall_clock_breakdown": false
0: }
0: Time to load utils op: 0.0004229545593261719 seconds
0: [2023-04-27 15:39:48,635] [INFO] [engine.py:87:__init__] CONFIG: micro_batches=1 micro_batch_size=32
0: [2023-04-27 15:39:48,678] [INFO] [engine.py:145:__init__] RANK=0 STAGE=0 LAYERS=11 [0, 11) STAGE_PARAMS=14147392 (14.147M) TOTAL_PARAMS=14147392 (14.147M) UNIQUE_PARAMS=14147392 (14.147M)
0: [2023-04-27 15:39:48,680] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_14m200m100m/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint.
0: WARNING: could not find the metadata file checkpoints_14m200m100m
0: will not load any checkpoints and will start from random
0: (the same load_checkpoint warning was emitted by the remaining 7 ranks)
0: time (ms) | load-checkpoint: 1.04
0: estimated model parameters: 0.014147392
0: estimated model parameters without embeddings: 0.002420544
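Both figures can be re-derived from the hyperparameters at the top of the log (d_model 224, ffw_size 896, n_layers 4, padded vocab 50,304, 2,048 positions). A minimal sketch of the accounting, assuming standard Megatron GPT layer shapes (fused QKV and MLP projections with biases, two LayerNorms per layer, a final LayerNorm, and learned token plus position embeddings):

# Hypothetical re-derivation of the logged parameter counts.
d, ffn, n_layers, vocab, pos = 224, 896, 4, 50_304, 2_048
attn = (d * 3 * d + 3 * d) + (d * d + d)   # fused QKV + output projection, with biases
mlp  = (d * ffn + ffn) + (ffn * d + d)     # up- and down-projections, with biases
ln   = 2 * 2 * d                           # two LayerNorms (weight + bias each)
per_layer = attn + mlp + ln                # 605_024
no_embed  = n_layers * per_layer + 2 * d   # plus final LayerNorm -> 2_420_544
total     = no_embed + (vocab + pos) * d   # plus embeddings -> 14_147_392
print(total, no_embed)

The result matches TOTAL_PARAMS=14147392 reported by the DeepSpeed engine above.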
0: [after model, optimizer, and learning rate scheduler are built] datetime: 2023-04-27 15:39:48
0: > building train, validation, and test datasets ...
0: > datasets target sizes (minimum size):
0: train: 97656
0: validation: 256
0: test: 256
0: > building train, validation, and test datasets for GPT ...
0: > building dataset index ...
0: reading sizes...
0: reading pointers...
0: reading document index...
0: creating numpy buffer of mmap...
0: creating memory view of numpy buffer...
0: > finished creating indexed dataset in 0.007179 seconds
0: number of documents: 208931
0: > dataset split:
0: train:
0: document indices in [0, 208931) total of 208931 documents
0: > WARNING: could not find index map files, building the indices on rank 0 ...
0: > last epoch number of samples (47) is smaller than 95.0% of number of samples per epoch (48804), setting separate_last_epoch to True
0: > elapsed time to build and save doc-idx mapping (seconds): 0.052216
0: using:
0: number of documents: 208931
0: number of epochs: 3
0: sequence length: 2048
0: total number of samples: 146414
0: > elapsed time to build and save sample-idx mapping (seconds): 0.010831
0: > building shuffle index with split [0, 97609) and [97609, 146414) ...
0: > elapsed time to build and save shuffle-idx mapping (seconds): 0.005049
0: > loading doc-idx mapping from /scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_100M_text_document_train_indexmap_97656ns_2048sl_1234s_doc_idx.npy
0: > loading sample-idx mapping from /scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_100M_text_document_train_indexmap_97656ns_2048sl_1234s_sample_idx.npy
0: > loading shuffle-idx mapping from /scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_100M_text_document_train_indexmap_97656ns_2048sl_1234s_shuffle_idx.npy
0: loaded indexed file in 0.127 seconds
0: total number of samples: 146415
0: total number of epochs: 3
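Three epochs rather than two: one pass over the subset yields 48,804 full 2,048-token samples (about 99.95M tokens, i.e. the 100M subset), and the 97,656-sample budget slightly overshoots two passes, so a third epoch contributes only the last few dozen samples, which is what triggered the separate_last_epoch handling above. A rough check (Megatron's exact split bookkeeping differs by one sample, 47 vs. 48):

# Epoch arithmetic for the 100M-token subset.
samples_per_epoch = 48_804
print(samples_per_epoch * 2_048)          # 99_950_592 tokens per pass
print(divmod(97_656, samples_per_epoch))  # (2, 48): two full epochs plus a sliver of a third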
0: > building dataset index ...
0: reading sizes...
0: reading pointers...
0: reading document index...
0: creating numpy buffer of mmap...
0: creating memory view of numpy buffer...
0: > finished creating indexed dataset in 0.086375 seconds
0: number of documents: 364608
0: > dataset split:
0: validation:
0: document indices in [0, 364608) total of 364608 documents
0: > loading doc-idx mapping from /scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document_validation_indexmap_256ns_2048sl_1234s_doc_idx.npy
0: > loading sample-idx mapping from /scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document_validation_indexmap_256ns_2048sl_1234s_sample_idx.npy
0: > loading shuffle-idx mapping from /scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document_validation_indexmap_256ns_2048sl_1234s_shuffle_idx.npy
0: loaded indexed file in 0.110 seconds
0: total number of samples: 84978
0: total number of epochs: 1
0: > finished creating GPT datasets ...
0: [after dataloaders are built] datetime: 2023-04-27 15:39:55
0: done with setup ...
0: training ...
0: Number of parameters: [tensor rank - pipeline rank] w/ and w/o embeddings:
0: [000-000] 0.0141B / 0.0024B
0: time (ms) | model-and-optimizer-setup: 6635.29 | train/valid/test-data-iterators-setup: 6736.28
0: [before the start of training step] datetime: 2023-04-27 15:39:55
0: [2023-04-27 15:39:55,850] [INFO] [checkpointing.py:553:forward] Activation Checkpointing Information
0: [2023-04-27 15:39:55,851] [INFO] [checkpointing.py:554:forward] ----Partition Activations False, CPU CHECKPOINTING False
0: [2023-04-27 15:39:55,851] [INFO] [checkpointing.py:557:forward] ----contiguous Memory Checkpointing False with None total layers
0: [2023-04-27 15:39:55,851] [INFO] [checkpointing.py:560:forward] ----Synchronization False
0: [2023-04-27 15:39:55,851] [INFO] [checkpointing.py:561:forward] ----Profiling time in checkpointing False
0: [Rank 0] (after 10 iterations) memory (MB) | allocated: 12710.28759765625 | max allocated: 31761.787109375 | reserved: 39838.0 | max reserved: 39838.0
0: iteration 10/ 381 | consumed samples: 2560 | consumed tokens: 5242880 | elapsed time per iteration (s): 0.99 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 1.054844E+01 | grad norm: 1.231 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 259.371 | TFLOPs: 7.72 |
0: iteration 20/ 381 | consumed samples: 5120 | consumed tokens: 10485760 | elapsed time per iteration (s): 0.47 | learning rate: 1.992E-04 | global batch size: 256 | lm loss: 9.954537E+00 | grad norm: 1.232 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 542.989 | TFLOPs: 16.16 |
0: iteration 30/ 381 | consumed samples: 7680 | consumed tokens: 15728640 | elapsed time per iteration (s): 0.47 | learning rate: 1.979E-04 | global batch size: 256 | lm loss: 9.416174E+00 | grad norm: 1.234 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 542.917 | TFLOPs: 16.16 |
0: iteration 40/ 381 | consumed samples: 10240 | consumed tokens: 20971520 | elapsed time per iteration (s): 0.47 | learning rate: 1.960E-04 | global batch size: 256 | lm loss: 8.939543E+00 | grad norm: 1.223 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 543.062 | TFLOPs: 16.16 |
0: iteration 50/ 381 | consumed samples: 12800 | consumed tokens: 26214400 | elapsed time per iteration (s): 0.47 | learning rate: 1.934E-04 | global batch size: 256 | lm loss: 8.557494E+00 | grad norm: 1.173 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 542.719 | TFLOPs: 16.15 |
0: iteration 60/ 381 | consumed samples: 15360 | consumed tokens: 31457280 | elapsed time per iteration (s): 0.47 | learning rate: 1.903E-04 | global batch size: 256 | lm loss: 8.228969E+00 | grad norm: 1.122 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 542.121 | TFLOPs: 16.13 |
0: iteration 70/ 381 | consumed samples: 17920 | consumed tokens: 36700160 | elapsed time per iteration (s): 0.47 | learning rate: 1.867E-04 | global batch size: 256 | lm loss: 7.980126E+00 | grad norm: 1.009 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 542.047 | TFLOPs: 16.13 |
0: iteration 80/ 381 | consumed samples: 20480 | consumed tokens: 41943040 | elapsed time per iteration (s): 0.47 | learning rate: 1.825E-04 | global batch size: 256 | lm loss: 7.772469E+00 | grad norm: 0.867 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 541.840 | TFLOPs: 16.12 |
0: iteration 90/ 381 | consumed samples: 23040 | consumed tokens: 47185920 | elapsed time per iteration (s): 0.47 | learning rate: 1.778E-04 | global batch size: 256 | lm loss: 7.611230E+00 | grad norm: 0.700 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 541.591 | TFLOPs: 16.12 |
0: iteration 100/ 381 | consumed samples: 25600 | consumed tokens: 52428800 | elapsed time per iteration (s): 0.47 | learning rate: 1.727E-04 | global batch size: 256 | lm loss: 7.479670E+00 | grad norm: 0.602 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 541.317 | TFLOPs: 16.11 |
0: iteration 110/ 381 | consumed samples: 28160 | consumed tokens: 57671680 | elapsed time per iteration (s): 0.47 | learning rate: 1.671E-04 | global batch size: 256 | lm loss: 7.396822E+00 | grad norm: 0.546 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 541.189 | TFLOPs: 16.10 |
0: iteration 120/ 381 | consumed samples: 30720 | consumed tokens: 62914560 | elapsed time per iteration (s): 0.47 | learning rate: 1.611E-04 | global batch size: 256 | lm loss: 7.317059E+00 | grad norm: 0.645 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 540.991 | TFLOPs: 16.10 |
0: iteration 130/ 381 | consumed samples: 33280 | consumed tokens: 68157440 | elapsed time per iteration (s): 0.47 | learning rate: 1.548E-04 | global batch size: 256 | lm loss: 7.229152E+00 | grad norm: 0.761 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 540.465 | TFLOPs: 16.08 |
0: iteration 140/ 381 | consumed samples: 35840 | consumed tokens: 73400320 | elapsed time per iteration (s): 0.47 | learning rate: 1.482E-04 | global batch size: 256 | lm loss: 7.162913E+00 | grad norm: 0.457 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.206 | TFLOPs: 16.04 |
0: iteration 150/ 381 | consumed samples: 38400 | consumed tokens: 78643200 | elapsed time per iteration (s): 0.47 | learning rate: 1.413E-04 | global batch size: 256 | lm loss: 7.094919E+00 | grad norm: 0.438 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 540.113 | TFLOPs: 16.07 |
0: iteration 160/ 381 | consumed samples: 40960 | consumed tokens: 83886080 | elapsed time per iteration (s): 0.47 | learning rate: 1.341E-04 | global batch size: 256 | lm loss: 7.054077E+00 | grad norm: 0.298 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.700 | TFLOPs: 16.06 |
0: iteration 170/ 381 | consumed samples: 43520 | consumed tokens: 89128960 | elapsed time per iteration (s): 0.47 | learning rate: 1.269E-04 | global batch size: 256 | lm loss: 7.009175E+00 | grad norm: 0.314 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.589 | TFLOPs: 16.06 |
0: iteration 180/ 381 | consumed samples: 46080 | consumed tokens: 94371840 | elapsed time per iteration (s): 0.47 | learning rate: 1.194E-04 | global batch size: 256 | lm loss: 6.966074E+00 | grad norm: 0.423 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.607 | TFLOPs: 16.06 |
0: iteration 190/ 381 | consumed samples: 48640 | consumed tokens: 99614720 | elapsed time per iteration (s): 0.47 | learning rate: 1.120E-04 | global batch size: 256 | lm loss: 6.927870E+00 | grad norm: 0.314 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.597 | TFLOPs: 16.06 |
0: iteration 200/ 381 | consumed samples: 51200 | consumed tokens: 104857600 | elapsed time per iteration (s): 0.47 | learning rate: 1.045E-04 | global batch size: 256 | lm loss: 6.903503E+00 | grad norm: 0.250 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.453 | TFLOPs: 16.05 |
0: iteration 210/ 381 | consumed samples: 53760 | consumed tokens: 110100480 | elapsed time per iteration (s): 0.47 | learning rate: 9.705E-05 | global batch size: 256 | lm loss: 6.869786E+00 | grad norm: 0.296 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.539 | TFLOPs: 16.05 |
0: iteration 220/ 381 | consumed samples: 56320 | consumed tokens: 115343360 | elapsed time per iteration (s): 0.48 | learning rate: 8.969E-05 | global batch size: 256 | lm loss: 6.850215E+00 | grad norm: 0.430 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.762 | TFLOPs: 16.03 |
0: iteration 230/ 381 | consumed samples: 58880 | consumed tokens: 120586240 | elapsed time per iteration (s): 0.48 | learning rate: 8.248E-05 | global batch size: 256 | lm loss: 6.832250E+00 | grad norm: 0.263 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.485 | TFLOPs: 16.02 |
0: iteration 240/ 381 | consumed samples: 61440 | consumed tokens: 125829120 | elapsed time per iteration (s): 0.48 | learning rate: 7.545E-05 | global batch size: 256 | lm loss: 6.799271E+00 | grad norm: 0.281 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.356 | TFLOPs: 16.02 |
0: iteration 250/ 381 | consumed samples: 64000 | consumed tokens: 131072000 | elapsed time per iteration (s): 0.48 | learning rate: 6.867E-05 | global batch size: 256 | lm loss: 6.794190E+00 | grad norm: 0.275 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.323 | TFLOPs: 16.02 |
0: iteration 260/ 381 | consumed samples: 66560 | consumed tokens: 136314880 | elapsed time per iteration (s): 0.48 | learning rate: 6.217E-05 | global batch size: 256 | lm loss: 6.755166E+00 | grad norm: 0.332 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.200 | TFLOPs: 16.01 |
0: iteration 270/ 381 | consumed samples: 69120 | consumed tokens: 141557760 | elapsed time per iteration (s): 0.48 | learning rate: 5.600E-05 | global batch size: 256 | lm loss: 6.767900E+00 | grad norm: 0.460 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.202 | TFLOPs: 16.02 |
0: iteration 280/ 381 | consumed samples: 71680 | consumed tokens: 146800640 | elapsed time per iteration (s): 0.48 | learning rate: 5.020E-05 | global batch size: 256 | lm loss: 6.747310E+00 | grad norm: 0.265 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.160 | TFLOPs: 16.01 |
0: iteration 290/ 381 | consumed samples: 74240 | consumed tokens: 152043520 | elapsed time per iteration (s): 0.48 | learning rate: 4.482E-05 | global batch size: 256 | lm loss: 6.742122E+00 | grad norm: 0.209 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.843 | TFLOPs: 16.00 |
0: iteration 300/ 381 | consumed samples: 76800 | consumed tokens: 157286400 | elapsed time per iteration (s): 0.48 | learning rate: 3.989E-05 | global batch size: 256 | lm loss: 6.733646E+00 | grad norm: 0.208 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.020 | TFLOPs: 16.01 |
0: iteration 310/ 381 | consumed samples: 79360 | consumed tokens: 162529280 | elapsed time per iteration (s): 0.48 | learning rate: 3.544E-05 | global batch size: 256 | lm loss: 6.706232E+00 | grad norm: 0.248 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.987 | TFLOPs: 16.01 |
0: iteration 320/ 381 | consumed samples: 81920 | consumed tokens: 167772160 | elapsed time per iteration (s): 0.48 | learning rate: 3.151E-05 | global batch size: 256 | lm loss: 6.712791E+00 | grad norm: 0.204 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.048 | TFLOPs: 16.01 |
0: iteration 330/ 381 | consumed samples: 84480 | consumed tokens: 173015040 | elapsed time per iteration (s): 0.48 | learning rate: 2.812E-05 | global batch size: 256 | lm loss: 6.704942E+00 | grad norm: 0.265 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.980 | TFLOPs: 16.01 |
0: iteration 340/ 381 | consumed samples: 87040 | consumed tokens: 178257920 | elapsed time per iteration (s): 0.48 | learning rate: 2.530E-05 | global batch size: 256 | lm loss: 6.701794E+00 | grad norm: 0.228 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.935 | TFLOPs: 16.01 |
0: iteration 350/ 381 | consumed samples: 89600 | consumed tokens: 183500800 | elapsed time per iteration (s): 0.48 | learning rate: 2.307E-05 | global batch size: 256 | lm loss: 6.695718E+00 | grad norm: 0.241 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.992 | TFLOPs: 16.01 |
0: iteration 360/ 381 | consumed samples: 92160 | consumed tokens: 188743680 | elapsed time per iteration (s): 0.48 | learning rate: 2.143E-05 | global batch size: 256 | lm loss: 6.704263E+00 | grad norm: 0.193 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.753 | TFLOPs: 16.00 |
0: iteration 370/ 381 | consumed samples: 94720 | consumed tokens: 193986560 | elapsed time per iteration (s): 0.48 | learning rate: 2.041E-05 | global batch size: 256 | lm loss: 6.685706E+00 | grad norm: 0.216 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.826 | TFLOPs: 16.00 |
0: iteration 380/ 381 | consumed samples: 97280 | consumed tokens: 199229440 | elapsed time per iteration (s): 0.48 | learning rate: 2.001E-05 | global batch size: 256 | lm loss: 6.680477E+00 | grad norm: 0.261 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.824 | TFLOPs: 16.00 |
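[editor's note] The iteration lines above are internally consistent with the launch arguments: consumed tokens = consumed samples x 2048 (--seq-length), the run ends at iteration 381 because --train-samples 97656 / --global-batch-size 256 ~ 381, and the learning-rate column tracks a cosine decay from --lr 2e-4 down to --min-lr 2e-5 over --lr-decay-samples 97656 after 977 warmup samples. A minimal sketch that reproduces the logged values, assuming the generic "linear warmup + cosine decay" schedule rather than reading it out of the training code itself:

    # Hedged sanity check of the iteration log above; the schedule below is
    # the generic warmup+cosine form, assumed to match Megatron's.
    import math

    SEQ_LEN, GBS = 2048, 256        # --seq-length, --global-batch-size
    LR_MAX, LR_MIN = 2e-4, 2e-5     # --lr, --min-lr
    WARMUP, DECAY = 977, 97656      # --lr-warmup-samples, --lr-decay-samples

    def expected_lr(samples: int) -> float:
        if samples < WARMUP:                       # linear warmup
            return LR_MAX * samples / WARMUP
        frac = (samples - WARMUP) / (DECAY - WARMUP)
        return LR_MIN + 0.5 * (LR_MAX - LR_MIN) * (1 + math.cos(math.pi * frac))

    samples = 200 * GBS                            # iteration 200
    print(samples, samples * SEQ_LEN)              # 51200 104857600 (matches log)
    print(f"{expected_lr(samples):.3e}")           # ~1.045e-04      (matches log)

    # Back-of-envelope throughput: ~8 FLOPs/param/token with activation
    # recomputation (--checkpoint-activations), ~14M params (from the run
    # name "14m..."), split over 8 GPUs. Rough heuristic only, not
    # Megatron's exact FLOPs formula.
    tokens_per_s = 539.6 * SEQ_LEN
    print(8 * 14e6 * tokens_per_s / 8 / 1e12)      # ~15.5, near the logged ~16.0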
0: [after training is done] datetime: 2023-04-27 15:43:01
0: saving checkpoint at iteration 381 to checkpoints_14m200m100m
0: -----------------------------------------------------------------------------------------------------------------
0: validation loss at the end of training for val data | lm loss value: 6.639688E+00 | lm loss PPL: 7.648567E+02 |
0: -----------------------------------------------------------------------------------------------------------------
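[editor's note] Two notes on this final validation block: the reported PPL is simply exp(lm loss), and since --eval-interval is 1000 while the run only reaches 381 iterations, this end-of-training pass is the only validation in the log. A quick check of the PPL value:

    import math
    print(math.exp(6.639688))   # ~764.86, matching "lm loss PPL: 7.648567E+02"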
0: [2023-04-27 15:43:01,686] [INFO] [logging.py:68:log_dist] [Rank 0] [Torch] Checkpoint global_step381 is beginning to save!
0: [2023-04-27 15:43:01,797] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m200m100m/global_step381/layer_01-model_00-model_states.pt...
0: [2023-04-27 15:43:01,825] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m200m100m/global_step381/layer_01-model_00-model_states.pt.
0: [2023-04-27 15:43:01,825] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m200m100m/global_step381/layer_03-model_00-model_states.pt...
0: [2023-04-27 15:43:01,828] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m200m100m/global_step381/layer_03-model_00-model_states.pt.
0: [2023-04-27 15:43:01,828] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m200m100m/global_step381/layer_04-model_00-model_states.pt...
0: [2023-04-27 15:43:01,831] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m200m100m/global_step381/layer_04-model_00-model_states.pt.
0: [2023-04-27 15:43:01,831] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m200m100m/global_step381/layer_05-model_00-model_states.pt...
0: [2023-04-27 15:43:01,833] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m200m100m/global_step381/layer_05-model_00-model_states.pt.
0: [2023-04-27 15:43:01,834] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m200m100m/global_step381/layer_06-model_00-model_states.pt...
0: [2023-04-27 15:43:01,836] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m200m100m/global_step381/layer_06-model_00-model_states.pt.
0: [2023-04-27 15:43:01,836] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m200m100m/global_step381/layer_08-model_00-model_states.pt...
0: [2023-04-27 15:43:01,837] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m200m100m/global_step381/layer_08-model_00-model_states.pt.
0: [2023-04-27 15:43:01,838] [INFO] [logging.py:68:log_dist] [Rank 0] Saving model checkpoint: checkpoints_14m200m100m/global_step381/mp_rank_00_model_states.pt
0: [2023-04-27 15:43:01,838] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m200m100m/global_step381/mp_rank_00_model_states.pt...
0: [2023-04-27 15:43:01,840] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m200m100m/global_step381/mp_rank_00_model_states.pt.
0: [2023-04-27 15:43:01,844] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m200m100m/global_step381/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt...
0: [2023-04-27 15:43:01,844] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m200m100m/global_step381/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt...
0: [2023-04-27 15:43:01,844] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m200m100m/global_step381/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt...
0: [2023-04-27 15:43:01,844] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m200m100m/global_step381/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt...
0: [2023-04-27 15:43:01,844] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m200m100m/global_step381/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt...
0: [2023-04-27 15:43:01,844] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m200m100m/global_step381/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt...
0: [2023-04-27 15:43:01,844] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m200m100m/global_step381/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt...
0: [2023-04-27 15:43:01,844] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m200m100m/global_step381/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt...
0: [2023-04-27 15:43:01,868] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m200m100m/global_step381/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt.
0: [2023-04-27 15:43:01,871] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m200m100m/global_step381/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt.
0: [2023-04-27 15:43:01,871] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m200m100m/global_step381/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt
0: [2023-04-27 15:43:01,871] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step381 is ready now!
0: [2023-04-27 15:43:01,871] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m200m100m/global_step381/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt.
0: [2023-04-27 15:43:01,872] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m200m100m/global_step381/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt
0: [2023-04-27 15:43:01,872] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step381 is ready now!
0: [2023-04-27 15:43:01,872] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m200m100m/global_step381/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt.
0: [2023-04-27 15:43:01,872] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m200m100m/global_step381/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt
0: [2023-04-27 15:43:01,872] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step381 is ready now!
0: [2023-04-27 15:43:01,872] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m200m100m/global_step381/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt.
0: [2023-04-27 15:43:01,872] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m200m100m/global_step381/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt
0: [2023-04-27 15:43:01,872] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step381 is ready now!
0: [2023-04-27 15:43:01,873] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m200m100m/global_step381/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt.
0: [2023-04-27 15:43:01,873] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m200m100m/global_step381/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt
0: [2023-04-27 15:43:01,873] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step381 is ready now!
0: [2023-04-27 15:43:01,878] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m200m100m/global_step381/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt.
0: [2023-04-27 15:43:01,878] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m200m100m/global_step381/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt.
0: [2023-04-27 15:43:01,878] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m200m100m/global_step381/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt
0: [2023-04-27 15:43:01,879] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m200m100m/global_step381/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt
0: [2023-04-27 15:43:01,879] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step381 is ready now!
0: [2023-04-27 15:43:01,879] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step381 is ready now!
0: [2023-04-27 15:43:01,905] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m200m100m/global_step381/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt
0: [2023-04-27 15:43:01,905] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step381 is ready now!
0: successfully saved checkpoint at iteration 381 to checkpoints_14m200m100m
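[editor's note] The checkpoint written above decomposes into three groups of files under global_step381/: pipeline-module layer shards (layer_01/03/04/05/06/08 - likely the embedding, the four transformer layers, and the final layernorm under DeepSpeed's pipeline-module numbering), one mp_rank_00 model-state file, and one bf16 ZeRO optimizer-state file per data-parallel rank (0-7). A hypothetical completeness check over exactly the paths that appear in the save messages:

    # Hypothetical check that every file logged above exists; the names are
    # copied verbatim from the save messages (layer indices 01,03,04,05,06,08
    # and data-parallel ranks 0-7).
    import os

    ckpt = "checkpoints_14m200m100m/global_step381"
    expected = (
        [f"layer_{i:02d}-model_00-model_states.pt" for i in (1, 3, 4, 5, 6, 8)]
        + ["mp_rank_00_model_states.pt"]
        + [f"bf16_zero_pp_rank_{r}_mp_rank_00_optim_states.pt" for r in range(8)]
    )
    missing = [f for f in expected if not os.path.isfile(os.path.join(ckpt, f))]
    print("missing:", missing or "none")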
END 3423740: Thu 27 Apr 2023 03:43:11 PM EEST