Model parameters: d_model 768 ffw_size 3072 kv_size 64 n_heads 12 n_layers 15
Megatron-DeepSpeed/pretrain_gpt.py --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 --num-layers 15 --hidden-size 768 --num-attention-heads 12 --kv-channels 64 --ffn-hidden-size 3072 --seq-length 2048 --max-position-embeddings 2048 --micro-batch-size 4 --global-batch-size 256 --train-samples 1 --vocab-file gpt2/vocab.json --merge-file gpt2/merges.txt --clip-grad 1.0 --kill-switch-path kill-switch-146m14b100mdedupval --bf16 --optimizer adam --adam-beta1 0.9 --adam-beta2 0.999 --adam-eps 1e-8 --lr 2e-4 --min-lr 2e-5 --lr-decay-style cosine --lr-decay-samples 1 --lr-warmup-samples 0 --clip-grad 1.0 --weight-decay 1e-1 --no-load-optim --reset-progress --override-lr-scheduler --log-interval 10 --save-interval 1000 --eval-interval 1 --eval-iters 100 --eval-only true --tensorboard-dir tensorboard_146m14b100mdedupval --tensorboard-queue-size 5 --log-timers-to-tensorboard --log-batch-size-to-tensorboard --log-validation-ppl-to-tensorboard --save checkpoints_146m14b100mdedup --load checkpoints_146m14b100mdedup --train-weighted-split-paths-path train14b.txt --valid-weighted-split-paths-path val.txt --data-impl mmap --deepspeed --deepspeed_config ds_configs/3328731.json --zero-stage 0
START 3328731: Fri 17 Mar 2023 10:24:10 AM EET
0: 
0: 
0: ======================= ROCm System Management Interface =======================
0: ================================= Concise Info =================================
0: GPU  Temp   AvgPwr  SCLK    MCLK     Fan  Perf  PwrCap  VRAM%  GPU%  
0: 0    43.0c  86.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
0: 1    44.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
0: 2    47.0c  89.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
0: 3    44.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
0: 4    42.0c  96.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
0: 5    46.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
0: 6    42.0c  85.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
0: 7    43.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
0: ================================================================================
0: ============================= End of ROCm SMI Log ==============================
6: 
6: 
6: ======================= ROCm System Management Interface =======================
6: ================================= Concise Info =================================
6: GPU  Temp   AvgPwr  SCLK    MCLK     Fan  Perf  PwrCap  VRAM%  GPU%  
6: 0    41.0c  92.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
6: 1    43.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
6: 2    41.0c  96.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
6: 3    50.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
6: 4    45.0c  88.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
6: 5    48.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
6: 6    43.0c  94.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
6: 7    43.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
6: ================================================================================
6: ============================= End of ROCm SMI Log ==============================
2: 
2: 
2: ======================= ROCm System Management Interface =======================
2: ================================= Concise Info =================================
2: GPU  Temp   AvgPwr  SCLK    MCLK     Fan  Perf  PwrCap  VRAM%  GPU%  
2: 0    43.0c  90.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
2: 1    48.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
2: 2    39.0c  91.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
2: 3    47.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
2: 4    37.0c  85.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
2: 5    47.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
2: 6    37.0c  86.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
2: 7    49.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
2: ================================================================================
2: ============================= End of ROCm SMI Log ==============================
3: 
3: 
3: ======================= ROCm System Management Interface =======================
3: ================================= Concise Info =================================
3: GPU  Temp   AvgPwr  SCLK    MCLK     Fan  Perf  PwrCap  VRAM%  GPU%  
3: 0    44.0c  93.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
3: 1    42.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
3: 2    41.0c  88.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
3: 3    41.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
3: 4    43.0c  91.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
3: 5    43.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
3: 6    41.0c  92.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
3: 7    45.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
3: ================================================================================
3: ============================= End of ROCm SMI Log ==============================
5: 
5: 
5: ======================= ROCm System Management Interface =======================
5: ================================= Concise Info =================================
5: GPU  Temp   AvgPwr  SCLK    MCLK     Fan  Perf  PwrCap  VRAM%  GPU%  
5: 0    47.0c  91.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
5: 1    45.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
5: 2    40.0c  90.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
5: 3    45.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
5: 4    38.0c  85.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
5: 5    47.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
5: 6    45.0c  88.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
5: 7    43.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
5: ================================================================================
5: ============================= End of ROCm SMI Log ==============================
7: 
7: 
7: ======================= ROCm System Management Interface =======================
7: ================================= Concise Info =================================
7: GPU  Temp   AvgPwr  SCLK    MCLK     Fan  Perf  PwrCap  VRAM%  GPU%  
7: 0    47.0c  90.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
7: 1    48.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
7: 2    45.0c  84.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
7: 3    40.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
7: 4    44.0c  91.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
7: 5    47.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
7: 6    43.0c  95.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
7: 7    44.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
7: ================================================================================
7: ============================= End of ROCm SMI Log ==============================
4: 
4: 
4: ======================= ROCm System Management Interface =======================
4: ================================= Concise Info =================================
4: GPU  Temp   AvgPwr  SCLK    MCLK     Fan  Perf  PwrCap  VRAM%  GPU%  
4: 0    40.0c  90.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
4: 1    45.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
4: 2    40.0c  87.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
4: 3    50.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
4: 4    42.0c  96.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
4: 5    47.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
4: 6    42.0c  88.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
4: 7    46.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
4: ================================================================================
4: ============================= End of ROCm SMI Log ==============================
1: 
1: 
1: ======================= ROCm System Management Interface =======================
1: ================================= Concise Info =================================
1: GPU  Temp   AvgPwr  SCLK    MCLK     Fan  Perf  PwrCap  VRAM%  GPU%  
1: 0    49.0c  89.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
1: 1    54.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
1: 2    42.0c  88.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
1: 3    42.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
1: 4    41.0c  95.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
1: 5    42.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
1: 6    46.0c  91.0W   800Mhz  1600Mhz  0%   auto  560.0W    0%   0%    
1: 7    47.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W      0%   0%    
1: ================================================================================
1: ============================= End of ROCm SMI Log ==============================
2: Launching on nid005360 (2/8), master nid005358 port 9999, GPUs 8, CUDA: True
6: Launching on nid005364 (6/8), master nid005358 port 9999, GPUs 8, CUDA: True
1: Launching on nid005359 (1/8), master nid005358 port 9999, GPUs 8, CUDA: True
7: Launching on nid005365 (7/8), master nid005358 port 9999, GPUs 8, CUDA: True
0: Launching on nid005358 (0/8), master nid005358 port 9999, GPUs 8, CUDA: True
4: Launching on nid005362 (4/8), master nid005358 port 9999, GPUs 8, CUDA: True
5: Launching on nid005363 (5/8), master nid005358 port 9999, GPUs 8, CUDA: True
3: Launching on nid005361 (3/8), master nid005358 port 9999, GPUs 8, CUDA: True
0: using world size: 64, data-parallel-size: 64, tensor-model-parallel size: 1, pipeline-model-parallel size: 1 
0: accumulate and all-reduce gradients in fp32 for bfloat16 data type.
0: using torch.bfloat16 for parameters ...
0: ------------------------ arguments ------------------------
0:   abort_on_unmet_fused_kernel_constraints ......... False
0:   accumulate_allreduce_grads_in_fp32 .............. True
0:   adam_beta1 ...................................... 0.9
0:   adam_beta2 ...................................... 0.999
0:   adam_eps ........................................ 1e-08
0:   adlr_autoresume ................................. False
0:   adlr_autoresume_interval ........................ 1000
0:   apply_query_key_layer_scaling ................... True
0:   apply_residual_connection_post_layernorm ........ False
0:   attention_dropout ............................... 0.1
0:   attention_softmax_in_fp32 ....................... False
0:   bert_binary_head ................................ True
0:   bert_load ....................................... None
0:   bf16 ............................................ True
0:   bias_dropout_fusion ............................. True
0:   bias_gelu_fusion ................................ True
0:   biencoder_projection_dim ........................ 0
0:   biencoder_shared_query_context_model ............ False
0:   block_data_path ................................. None
0:   checkpoint_activations .......................... False
0:   checkpoint_in_cpu ............................... False
0:   checkpoint_num_layers ........................... 1
0:   clip_grad ....................................... 1.0
0:   codecarbon_dir .................................. None
0:   consumed_train_samples .......................... 0
0:   consumed_train_tokens ........................... 0
0:   consumed_valid_samples .......................... 0
0:   contigious_checkpointing ........................ False
0:   cpu_optimizer ................................... False
0:   cpu_torch_adam .................................. False
0:   curriculum_learning ............................. False
0:   data_impl ....................................... mmap
0:   data_parallel_size .............................. 64
0:   data_path ....................................... None
0:   dataloader_type ................................. single
0:   DDP_impl ........................................ local
0:   decoder_seq_length .............................. None
0:   deepscale ....................................... False
0:   deepscale_config ................................ None
0:   deepspeed ....................................... True
0:   deepspeed_activation_checkpointing .............. False
0:   deepspeed_config ................................ ds_configs/3328731.json
0:   deepspeed_mpi ................................... False
0:   distribute_checkpointed_activations ............. False
0:   distributed_backend ............................. nccl
0:   embed_layernorm ................................. False
0:   embedding_path .................................. None
0:   encoder_seq_length .............................. 2048
0:   eod_mask_loss ................................... False
0:   eval_interval ................................... 1
0:   eval_iters ...................................... 100
0:   eval_only ....................................... True
0:   evidence_data_path .............................. None
0:   exit_duration_in_mins ........................... None
0:   exit_interval ................................... None
0:   ffn_hidden_size ................................. 3072
0:   finetune ........................................ False
0:   fp16 ............................................ False
0:   fp16_lm_cross_entropy ........................... False
0:   fp32_residual_connection ........................ False
0:   gigaflos_no_embeds .............................. 0
0:   global_batch_size ............................... 256
0:   glu_activation .................................. None
0:   hidden_dropout .................................. 0.1
0:   hidden_size ..................................... 768
0:   hysteresis ...................................... 2
0:   ict_head_size ................................... None
0:   ict_load ........................................ None
0:   img_dim ......................................... 224
0:   indexer_batch_size .............................. 128
0:   indexer_log_interval ............................ 1000
0:   inference ....................................... False
0:   init_method_std ................................. 0.02
0:   init_method_xavier_uniform ...................... False
0:   initial_loss_scale .............................. 4294967296
0:   kill_switch_path ................................ kill-switch-146m14b100mdedupval
0:   kv_channels ..................................... 64
0:   layer_norm_fusion ............................... True
0:   layernorm_epsilon ............................... 1e-05
0:   lazy_mpu_init ................................... None
0:   load ............................................ checkpoints_146m14b100mdedup
0:   local_rank ...................................... None
0:   log_batch_size_to_tensorboard ................... True
0:   log_interval .................................... 10
0:   log_learning_rate_to_tensorboard ................ True
0:   log_level ....................................... None
0:   log_level_replica ............................... None
0:   log_loss_scale_to_tensorboard ................... True
0:   log_num_zeros_in_grad ........................... False
0:   log_params_norm ................................. False
0:   log_path ........................................ None
0:   log_timers_to_tensorboard ....................... True
0:   log_validation_ppl_to_tensorboard ............... True
0:   loss_on_targets_only ............................ False
0:   loss_scale ...................................... None
0:   loss_scale_window ............................... 1000
0:   lr .............................................. 0.0002
0:   lr_decay_iters .................................. None
0:   lr_decay_samples ................................ 1
0:   lr_decay_style .................................. cosine
0:   lr_decay_tokens ................................. None
0:   lr_warmup_fraction .............................. None
0:   lr_warmup_iters ................................. 0
0:   lr_warmup_samples ............................... 0
0:   make_vocab_size_divisible_by .................... 128
0:   mask_prob ....................................... 0.15
0:   masked_softmax_fusion ........................... True
0:   max_position_embeddings ......................... 2048
0:   mean_noise_span_length .......................... None
0:   memory_centric_tiled_linear ..................... False
0:   merge_file ...................................... gpt2/merges.txt
0:   micro_batch_size ................................ 4
0:   min_loss_scale .................................. 1.0
0:   min_lr .......................................... 2e-05
0:   mmap_warmup ..................................... False
0:   no_load_optim ................................... True
0:   no_load_rng ..................................... None
0:   no_save_optim ................................... None
0:   no_save_rng ..................................... None
0:   noise_density ................................... None
0:   num_attention_heads ............................. 12
0:   num_channels .................................... 3
0:   num_classes ..................................... 1000
0:   num_layers ...................................... 15
0:   num_layers_per_virtual_pipeline_stage ........... None
0:   num_workers ..................................... 2
0:   onnx_safe ....................................... None
0:   openai_gelu ..................................... False
0:   optimizer ....................................... adam
0:   optimizer_fusion ................................ True
0:   override_lr_scheduler ........................... True
0:   pad_vocab_size_to ............................... None
0:   params_dtype .................................... torch.bfloat16
0:   partition_activations ........................... False
0:   patch_dim ....................................... 16
0:   pipeline_model_parallel_size .................... 1
0:   position_embedding_type ......................... PositionEmbeddingType.absolute
0:   pp_partition_method ............................. None
0:   profile_backward ................................ False
0:   query_in_block_prob ............................. 0.1
0:   rampup_batch_size ............................... None
0:   rank ............................................ 0
0:   remote_device ................................... none
0:   reset_attention_mask ............................ False
0:   reset_position_ids .............................. False
0:   reset_progress .................................. True
0:   retriever_report_topk_accuracies ................ []
0:   retriever_score_scaling ......................... False
0:   retriever_seq_length ............................ 256
0:   reweight_loss_based_on_position_frequency ....... False
0:   sample_rate ..................................... 1.0
0:   save ............................................ checkpoints_146m14b100mdedup
0:   save_interval ................................... 1000
0:   scatter_gather_tensors_in_pipeline .............. True
0:   scattered_embeddings ............................ False
0:   seed ............................................ 1234
0:   seq_length ...................................... 2048
0:   sgd_momentum .................................... 0.9
0:   short_seq_prob .................................. 0.1
0:   skip_train_iteration_range ...................... None
0:   split ........................................... None
0:   split_transformers .............................. False
0:   sync_tp_duplicated_parameters ................... False
0:   synchronize_each_layer .......................... False
0:   tensor_model_parallel_size ...................... 1
0:   tensorboard_dir ................................. tensorboard_146m14b100mdedupval
0:   tensorboard_log_interval ........................ 1
0:   tensorboard_queue_size .......................... 5
0:   test_weighted_split_paths ....................... None
0:   test_weighted_split_paths_path .................. None
0:   tile_factor ..................................... 1
0:   titles_data_path ................................ None
0:   tokenizer_name_or_path .......................... None
0:   tokenizer_type .................................. GPT2BPETokenizer
0:   train_iters ..................................... None
0:   train_samples ................................... 1
0:   train_tokens .................................... None
0:   train_weighted_split_names ...................... ['train']
0:   train_weighted_split_paths ...................... [['/scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_14B_text_document']]
0:   train_weighted_split_paths_path ................. None
0:   train_weighted_split_splits ..................... [['0:1']]
0:   train_weighted_split_weights .................... [['1.0']]
0:   universal_checkpoint ............................ False
0:   use_bnb_optimizer ............................... False
0:   use_checkpoint_lr_scheduler ..................... False
0:   use_contiguous_buffers_in_ddp ................... True
0:   use_cpu_initialization .......................... None
0:   use_one_sent_docs ............................... False
0:   use_pin_memory .................................. False
0:   valid_num_workers ............................... 2
0:   valid_weighted_split_names ...................... ['validation']
0:   valid_weighted_split_paths ...................... [['/scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document']]
0:   valid_weighted_split_paths_path ................. None
0:   valid_weighted_split_splits ..................... [['0:1']]
0:   valid_weighted_split_weights .................... [['1.0']]
0:   virtual_pipeline_model_parallel_size ............ None
0:   vocab_extra_ids ................................. 0
0:   vocab_file ...................................... gpt2/vocab.json
0:   weight_decay .................................... 0.1
0:   world_size ...................................... 64
0:   zero_allgather_bucket_size ...................... 0.0
0:   zero_contigious_gradients ....................... False
0:   zero_reduce_bucket_size ......................... 0.0
0:   zero_reduce_scatter ............................. False
0:   zero_stage ...................................... 0
0: -------------------- end of arguments ---------------------
0: setting number of micro-batches to constant 1
0: > building GPT2BPETokenizer tokenizer ...
0:  > padded vocab (size: 50257) with 47 dummy tokens (new size: 50304)
0: DeepSpeed general environment info:
0: torch install path ............... ['/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/lib/python3.9/site-packages/torch']
0: torch version .................... 1.13.0+rocm5.2
0: torch cuda version ............... None
0: torch hip version ................ 5.2.21151-afdc89f8
0: nvcc version ..................... None
0: deepspeed install path ........... ['/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/lib/python3.9/site-packages/deepspeed']
0: deepspeed info ................... 0.7.5, unknown, unknown
0: deepspeed wheel compiled w. ...... torch 1.13, hip 5.1
0: **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
0: > initializing torch distributed ...
0: [2023-03-17 10:27:13,249] [INFO] [comm.py:633:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
7: > setting tensorboard ...
0: > initializing tensor model parallel with size 1
0: > initializing pipeline model parallel with size 1
0: > setting random seeds to 1234 ...
0: > initializing model parallel cuda seeds on global rank 0, model parallel rank 0, and data parallel rank 0 with model parallel seed: 3952 and data parallel seed: 1234
0: > compiling dataset index builder ...
0: make: Entering directory '/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/data'
0: make: Nothing to be done for 'default'.
0: make: Leaving directory '/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/data'
0: >>> done with dataset index builder. Compilation time: 0.111 seconds
0: > compiling and loading fused kernels ...
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.cpp -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.cpp [skipped, already hipified]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.h [skipped, already hipified]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_cuda.cu -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.hip [skipped, already hipified]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.h [skipped, already hipified]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.h [skipped, already hipified]
0: Total number of unsupported CUDA function calls: 0
0: 
0: 
0: Total number of replaced kernel launches: 87
0: [1/1] c++ scaled_upper_triang_masked_softmax_hip.o scaled_upper_triang_masked_softmax_hip.cuda.o -shared -L/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/lib/python3.9/site-packages/torch/lib -lc10 -lc10_hip -ltorch_cpu -ltorch_hip -ltorch -ltorch_python -L/pfs/lustrep2/projappl/project_462000125/samantao-public/rocm/rocm-5.2.3/lib -lamdhip64 -o scaled_upper_triang_masked_softmax_cuda.so
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.cpp -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.cpp [skipped, already hipified]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_cuda.cu -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.hip [skipped, already hipified]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.h [skipped, already hipified]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.h [skipped, already hipified]
0: Total number of unsupported CUDA function calls: 0
0: 
0: 
0: Total number of replaced kernel launches: 63
0: [1/1] c++ scaled_masked_softmax_hip.o scaled_masked_softmax_hip.cuda.o -shared -L/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/lib/python3.9/site-packages/torch/lib -lc10 -lc10_hip -ltorch_cpu -ltorch_hip -ltorch -ltorch_python -L/pfs/lustrep2/projappl/project_462000125/samantao-public/rocm/rocm-5.2.3/lib -lamdhip64 -o scaled_masked_softmax_cuda.so
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_cuda.cpp -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_cuda.cpp [skipped, no changes]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_cuda_kernel.cu -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_hip_kernel.hip [skipped, already hipified]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.h [skipped, already hipified]
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.h [skipped, already hipified]
0: Total number of unsupported CUDA function calls: 0
0: 
0: 
0: Total number of replaced kernel launches: 67
0: ninja: no work to do.
0: >>> done with compiling and loading fused kernels. Compilation time: 23.698 seconds
0: time to initialize megatron (seconds): -4.253
0: [after megatron is initialized] datetime: 2023-03-17 10:27:39 
0: building GPT model ...
0: [2023-03-17 10:27:39,879] [INFO] [utils.py:827:see_memory_usage] Before Building Model
0: [2023-03-17 10:27:39,880] [INFO] [utils.py:828:see_memory_usage] MA 0.0 GB         Max_MA 0.0 GB         CA 0.0 GB         Max_CA 0 GB 
0: [2023-03-17 10:27:39,880] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory:  used = 30.59 GB, percent = 6.1%
0: SEED_LAYERS=False BASE_SEED=1234 SEED_FN=None
0: Using topology: {ProcessCoord(pipe=0, data=0, model=0): 0, ProcessCoord(pipe=0, data=1, model=0): 1, ProcessCoord(pipe=0, data=2, model=0): 2, ProcessCoord(pipe=0, data=3, model=0): 3, ProcessCoord(pipe=0, data=4, model=0): 4, ProcessCoord(pipe=0, data=5, model=0): 5, ProcessCoord(pipe=0, data=6, model=0): 6, ProcessCoord(pipe=0, data=7, model=0): 7, ProcessCoord(pipe=0, data=8, model=0): 8, ProcessCoord(pipe=0, data=9, model=0): 9, ProcessCoord(pipe=0, data=10, model=0): 10, ProcessCoord(pipe=0, data=11, model=0): 11, ProcessCoord(pipe=0, data=12, model=0): 12, ProcessCoord(pipe=0, data=13, model=0): 13, ProcessCoord(pipe=0, data=14, model=0): 14, ProcessCoord(pipe=0, data=15, model=0): 15, ProcessCoord(pipe=0, data=16, model=0): 16, ProcessCoord(pipe=0, data=17, model=0): 17, ProcessCoord(pipe=0, data=18, model=0): 18, ProcessCoord(pipe=0, data=19, model=0): 19, ProcessCoord(pipe=0, data=20, model=0): 20, ProcessCoord(pipe=0, data=21, model=0): 21, ProcessCoord(pipe=0, data=22, model=0): 22, ProcessCoord(pipe=0, data=23, model=0): 23, ProcessCoord(pipe=0, data=24, model=0): 24, ProcessCoord(pipe=0, data=25, model=0): 25, ProcessCoord(pipe=0, data=26, model=0): 26, ProcessCoord(pipe=0, data=27, model=0): 27, ProcessCoord(pipe=0, data=28, model=0): 28, ProcessCoord(pipe=0, data=29, model=0): 29, ProcessCoord(pipe=0, data=30, model=0): 30, ProcessCoord(pipe=0, data=31, model=0): 31, ProcessCoord(pipe=0, data=32, model=0): 32, ProcessCoord(pipe=0, data=33, model=0): 33, ProcessCoord(pipe=0, data=34, model=0): 34, ProcessCoord(pipe=0, data=35, model=0): 35, ProcessCoord(pipe=0, data=36, model=0): 36, ProcessCoord(pipe=0, data=37, model=0): 37, ProcessCoord(pipe=0, data=38, model=0): 38, ProcessCoord(pipe=0, data=39, model=0): 39, ProcessCoord(pipe=0, data=40, model=0): 40, ProcessCoord(pipe=0, data=41, model=0): 41, ProcessCoord(pipe=0, data=42, model=0): 42, ProcessCoord(pipe=0, data=43, model=0): 43, ProcessCoord(pipe=0, data=44, model=0): 44, ProcessCoord(pipe=0, data=45, model=0): 45, ProcessCoord(pipe=0, data=46, model=0): 46, ProcessCoord(pipe=0, data=47, model=0): 47, ProcessCoord(pipe=0, data=48, model=0): 48, ProcessCoord(pipe=0, data=49, model=0): 49, ProcessCoord(pipe=0, data=50, model=0): 50, ProcessCoord(pipe=0, data=51, model=0): 51, ProcessCoord(pipe=0, data=52, model=0): 52, ProcessCoord(pipe=0, data=53, model=0): 53, ProcessCoord(pipe=0, data=54, model=0): 54, ProcessCoord(pipe=0, data=55, model=0): 55, ProcessCoord(pipe=0, data=56, model=0): 56, ProcessCoord(pipe=0, data=57, model=0): 57, ProcessCoord(pipe=0, data=58, model=0): 58, ProcessCoord(pipe=0, data=59, model=0): 59, ProcessCoord(pipe=0, data=60, model=0): 60, ProcessCoord(pipe=0, data=61, model=0): 61, ProcessCoord(pipe=0, data=62, model=0): 62, ProcessCoord(pipe=0, data=63, model=0): 63}
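(Note: the topology line above shows a pure data-parallel layout: one pipeline stage, one tensor-parallel group, 64 data-parallel replicas, so each rank equals its data index. A minimal plain-Python sketch of that coord-to-rank mapping follows; it is an illustration of the layout shown in the log, not DeepSpeed's actual `ProcessTopology` class.)

```python
# Sketch of the rank layout printed above: pipe-major, then data, then model.
# With PIPE = MODEL = 1 this degenerates to rank == data index.
PIPE, DATA, MODEL = 1, 64, 1

def coord_to_rank(pipe: int, data: int, model: int) -> int:
    """Map a (pipe, data, model) coordinate to a global rank."""
    return (pipe * DATA + data) * MODEL + model

# Reconstruct the full mapping shown in the log.
mapping = {(p, d, m): coord_to_rank(p, d, m)
           for p in range(PIPE)
           for d in range(DATA)
           for m in range(MODEL)}
```

With this layout, `mapping[(0, 23, 0)]` is 23 and `mapping[(0, 63, 0)]` is 63, matching the printed topology.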
0: [2023-03-17 10:27:41,886] [INFO] [module.py:366:_partition_layers] Partitioning pipeline stages with method type:transformer
0: stage=0 layers=22
0:      0: _to_float16
0:      1: EmbeddingPipe
0:      2: <lambda>
0:      3: ParallelTransformerLayerPipe
0:      4: ParallelTransformerLayerPipe
0:      5: ParallelTransformerLayerPipe
0:      6: ParallelTransformerLayerPipe
0:      7: ParallelTransformerLayerPipe
0:      8: ParallelTransformerLayerPipe
0:      9: ParallelTransformerLayerPipe
0:     10: ParallelTransformerLayerPipe
0:     11: ParallelTransformerLayerPipe
0:     12: ParallelTransformerLayerPipe
0:     13: ParallelTransformerLayerPipe
0:     14: ParallelTransformerLayerPipe
0:     15: ParallelTransformerLayerPipe
0:     16: ParallelTransformerLayerPipe
0:     17: ParallelTransformerLayerPipe
0:     18: undo
0:     19: MixedFusedLayerNorm
0:     20: EmbeddingPipe
0:     21: float16_to_fp32
0:   loss: CrossEntropy
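(Note: the stage-0 listing above can be tallied as a sanity check: 15 `ParallelTransformerLayerPipe` blocks plus 7 auxiliary entries (the fp16 cast pair, two tied `EmbeddingPipe` layers, a lambda, `undo`, and the final `MixedFusedLayerNorm`) give the reported `layers=22`. A quick tally in plain Python, with the layer names copied from the log output rather than from the model code:)

```python
from collections import Counter

# Stage-0 module list exactly as printed by _partition_layers above.
stage0 = (
    ["_to_float16", "EmbeddingPipe", "<lambda>"]
    + ["ParallelTransformerLayerPipe"] * 15          # indices 3..17
    + ["undo", "MixedFusedLayerNorm", "EmbeddingPipe", "float16_to_fp32"]
)

counts = Counter(stage0)
total = len(stage0)  # matches "stage=0 layers=22" in the log
```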
0: [2023-03-17 10:27:42,188] [INFO] [utils.py:827:see_memory_usage] After Building Model
0: [2023-03-17 10:27:42,189] [INFO] [utils.py:828:see_memory_usage] MA 0.28 GB         Max_MA 0.28 GB         CA 0.29 GB         Max_CA 0 GB 
0: [2023-03-17 10:27:42,189] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory:  used = 30.61 GB, percent = 6.1%
0: setting training iterations to 0
0: > learning rate decay style: cosine
0: DeepSpeed is enabled.
0: [2023-03-17 10:27:42,191] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed info: version=0.7.5, git-hash=unknown, git-branch=unknown