Scheduled epochs: 150 (epochs + cooldown_epochs). Warmup within epochs when warmup_prefix=False. LR stepped per epoch.

Each epoch runs 156 training iterations, logged every 50 iterations as "Train: <epoch> [ iter/156 (..%)] Loss: current (avg) Time: ... LR: ... Data: ..." records (throughput roughly 2,400-2,600 images/s at ~0.40 s per step, after a slower data-loader-bound first step). The epoch then prints "Distributing BatchNorm running means and vars", runs a validation pass logged as "Test: [ batch/48] ... Loss ... Acc@1 ... Acc@5 ..." records, and finally lists the current top-10 checkpoints ranked by eval Acc@1, saved as ./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-<epoch>.pth.tar.

Per-epoch summary (train loss = running average at iteration 150/156; eval loss and accuracies are the averages over the full validation pass; accuracies in %):

Epoch   LR         Train loss   Eval loss   Acc@1    Acc@5
  0     1.000e-05  6.94         6.939        0.080    0.500
  1     8.001e-02  6.77         6.579        0.586    2.378
  2     1.600e-01  6.56         6.357        1.334    4.724
  3     2.400e-01  6.37         5.972        2.838    8.890
  4     3.200e-01  6.13         5.707        3.956   12.264
  5     3.989e-01  5.88         5.625        4.992   14.390
  6     3.984e-01  5.62         5.184        7.966   21.558
  7     3.979e-01  5.41         4.923       10.430   25.536
  8     3.972e-01  5.22         4.767       12.080   28.742
  9     3.965e-01  5.03         4.607       13.676   31.530
 10     3.956e-01  4.88         4.340       16.646   36.706
 11     3.947e-01  4.75         4.139       19.140   40.478
 12     3.937e-01  4.63         3.997       20.828   43.218
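The LR column is consistent with a cosine schedule preceded by a 5-epoch linear warmup: the LR climbs from 1.000e-05 toward ~0.4 over epochs 0-4 and then decays along a cosine whose horizon is the full 150 scheduled epochs (warmup_prefix=False, stepped once per epoch). The sketch below reproduces the logged values under hyperparameters inferred from the log rather than stated in it (base LR 0.4, warmup LR 1e-5, 5 warmup epochs, minimum LR 0), so treat them as a best guess. Likewise, 156 steps per epoch at ~0.40 s and ~2,500 images/s suggests a global batch of about 1024, i.e. roughly 160k images per epoch, in line with the 1/8 ImageNet fraction in the run name; that too is an inference.

import math

# Hyperparameters inferred from the logged LR values (not stated in the log itself).
BASE_LR, WARMUP_LR, MIN_LR = 0.4, 1e-5, 0.0
WARMUP_EPOCHS, SCHEDULED_EPOCHS = 5, 150   # "Scheduled epochs: 150"

def lr_at_epoch(epoch: int) -> float:
    """Per-epoch LR: linear warmup for 5 epochs, then cosine decay.
    warmup_prefix=False means the warmup epochs count toward the 150-epoch cosine horizon."""
    if epoch < WARMUP_EPOCHS:
        step = (BASE_LR - WARMUP_LR) / WARMUP_EPOCHS
        return WARMUP_LR + epoch * step
    return MIN_LR + 0.5 * (BASE_LR - MIN_LR) * (1 + math.cos(math.pi * epoch / SCHEDULED_EPOCHS))

for e in (0, 1, 5, 10, 42):
    print(f"epoch {e:3d}: LR {lr_at_epoch(e):.3e}")
# epoch 0: 1.000e-05   epoch 1: 8.001e-02   epoch 5: 3.989e-01
# epoch 10: 3.956e-01  epoch 42: 3.275e-01  -- matching the logged values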
Per-epoch summary, continued (epochs 13-20):

Epoch   LR         Train loss   Eval loss   Acc@1    Acc@5
 13     3.926e-01  4.52         3.899       21.924   44.988
 14     3.915e-01  4.42         3.920       22.078   44.934
 15     3.902e-01  4.33         3.703       25.432   49.290
 16     3.889e-01  4.24         3.779       24.222   47.836
 17     3.875e-01  4.16         3.599       26.518   51.080
 18     3.860e-01  4.10         3.397       29.746   55.294
 19     3.844e-01  4.03         3.998       22.816   44.870
 20     3.827e-01  3.96         3.505       28.862   53.182
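The "Current checkpoints:" lists behave like a bounded best-k ranking: each epoch's checkpoint is inserted with its eval Acc@1, entries are printed best-first, and once ten are held the lowest-scoring one is evicted (checkpoint-0 drops off the list after epoch 10, checkpoint-1 after epoch 11, and so on). Below is a minimal sketch of that bookkeeping; CheckpointTracker and its method names are illustrative, not the training script's actual API, and a real saver would also delete the evicted file from disk.

import heapq

class CheckpointTracker:
    """Keep the best `max_history` checkpoints, ranked by eval Acc@1 (illustrative only)."""
    def __init__(self, max_history: int = 10):
        self.max_history = max_history
        self._heap: list[tuple[float, str]] = []      # min-heap: worst checkpoint on top

    def update(self, path: str, top1: float) -> None:
        heapq.heappush(self._heap, (top1, path))
        if len(self._heap) > self.max_history:
            heapq.heappop(self._heap)                 # drop the lowest-scoring entry
        print("Current checkpoints:")
        for score, ckpt in sorted(self._heap, reverse=True):
            print(f"('{ckpt}', {score})")

tracker = CheckpointTracker()
tracker.update("./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-0.pth.tar", 0.080)
tracker.update("./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-1.pth.tar", 0.586)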
Per-epoch summary, continued (epochs 21-31):

Epoch   LR         Train loss   Eval loss   Acc@1    Acc@5
 21     3.810e-01  3.90         3.389       30.518   56.050
 22     3.791e-01  3.85         3.233       33.096   58.532
 23     3.772e-01  3.79         3.177       34.570   59.712
 24     3.753e-01  3.74         3.196       34.160   59.428
 25     3.732e-01  3.69         3.169       34.266   59.706
 26     3.711e-01  3.65         2.970       37.520   63.158
 27     3.689e-01  3.60         3.132       35.486   60.746
 28     3.666e-01  3.56         2.960       37.732   63.434
 29     3.642e-01  3.51         3.038       36.546   61.880
 30     3.618e-01  3.48         2.853       39.456   64.984
 31     3.593e-01  3.44         2.960       37.466   63.420
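"Distributing BatchNorm running means and vars" is printed before each evaluation: in multi-GPU training every process accumulates its own BatchNorm running statistics, so they are synchronized across ranks before validating (timm's trainer exposes a similar step via its dist_bn reduce/broadcast option). The sketch below shows the general idea with torch.distributed; it assumes an initialized process group and is a simplified stand-in, not this training script's exact implementation.

import torch.distributed as dist
from torch import nn

def distribute_bn(model: nn.Module, world_size: int, reduce: bool = True) -> None:
    # Make every rank's BatchNorm running_mean / running_var identical before eval:
    # either average them across ranks (reduce=True) or broadcast rank 0's copy.
    for module in model.modules():
        if isinstance(module, nn.modules.batchnorm._BatchNorm):
            for name, buf in module.named_buffers(recurse=False):
                if "running_mean" in name or "running_var" in name:
                    if reduce:
                        dist.all_reduce(buf, op=dist.ReduceOp.SUM)
                        buf /= world_size
                    else:
                        dist.broadcast(buf, src=0)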
Per-epoch summary, continued (epochs 32-42):

Epoch   LR         Train loss   Eval loss   Acc@1    Acc@5
 32     3.567e-01  3.40         2.933       38.442   64.014
 33     3.541e-01  3.37         2.766       41.414   67.104
 34     3.514e-01  3.33         2.841       40.386   65.870
 35     3.486e-01  3.30         2.861       39.744   65.296
 36     3.458e-01  3.27         2.870       40.084   65.468
 37     3.429e-01  3.23         2.725       41.956   67.476
 38     3.399e-01  3.20         2.812       40.846   66.018
 39     3.369e-01  3.17         2.807       40.964   66.564
 40     3.338e-01  3.14         2.820       41.046   66.340
 41     3.307e-01  3.11         2.790       41.576   66.748
 42     3.275e-01  3.08         --           --       --     (log ends mid-epoch, before evaluation)

Best checkpoint in this excerpt: checkpoint-37.pth.tar at 41.956% top-1.
(0.034) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.426 (1.426) Loss: 2.766 ( 2.766) Acc@1: 41.504 ( 41.504) Acc@5: 68.457 ( 68.457) Test: [ 48/48] Time: 0.090 (0.328) Loss: 2.647 ( 2.809) Acc@1: 41.745 ( 41.454) Acc@5: 68.042 ( 66.504) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-37.pth.tar', 41.95600006835937) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-41.pth.tar', 41.57600001342774) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-42.pth.tar', 41.454000034179685) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-33.pth.tar', 41.41400001708984) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-40.pth.tar', 41.04599999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-39.pth.tar', 40.963999943847654) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-38.pth.tar', 40.84599999145508) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-34.pth.tar', 40.386000031738284) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-36.pth.tar', 40.08399999755859) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-35.pth.tar', 39.74399997802735) Train: 43 [ 0/156 ( 1%)] Loss: 2.94 (2.94) Time: 1.513s, 676.61/s (1.513s, 676.61/s) LR: 3.242e-01 Data: 1.138 (1.138) Train: 43 [ 50/156 ( 33%)] Loss: 3.09 (3.01) Time: 0.409s, 2504.38/s (0.431s, 2376.90/s) LR: 3.242e-01 Data: 0.028 (0.049) Train: 43 [ 100/156 ( 65%)] Loss: 3.11 (3.03) Time: 0.409s, 2506.29/s (0.418s, 2448.13/s) LR: 3.242e-01 Data: 0.033 (0.038) Train: 43 [ 150/156 ( 97%)] Loss: 3.06 (3.05) Time: 0.403s, 2541.90/s (0.414s, 2475.69/s) LR: 3.242e-01 Data: 0.025 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.435 (1.435) Loss: 2.721 ( 2.721) Acc@1: 43.457 ( 43.457) Acc@5: 67.578 ( 67.578) Test: [ 48/48] Time: 0.090 (0.330) Loss: 2.497 ( 2.732) Acc@1: 46.580 ( 42.376) Acc@5: 71.462 ( 67.536) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-43.pth.tar', 42.375999936523435) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-37.pth.tar', 41.95600006835937) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-41.pth.tar', 41.57600001342774) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-42.pth.tar', 41.454000034179685) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-33.pth.tar', 41.41400001708984) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-40.pth.tar', 41.04599999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-39.pth.tar', 40.963999943847654) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-38.pth.tar', 40.84599999145508) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-34.pth.tar', 40.386000031738284) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-36.pth.tar', 40.08399999755859) Train: 44 [ 0/156 ( 1%)] Loss: 2.92 (2.92) Time: 1.792s, 571.35/s (1.792s, 571.35/s) LR: 3.209e-01 Data: 1.160 (1.160) Train: 44 [ 50/156 ( 33%)] Loss: 3.05 (2.98) Time: 0.406s, 2520.91/s (0.432s, 2371.09/s) LR: 3.209e-01 Data: 0.027 (0.049) Train: 44 [ 100/156 ( 65%)] Loss: 3.00 (3.00) Time: 0.406s, 2521.19/s (0.420s, 2438.70/s) LR: 3.209e-01 Data: 0.027 (0.039) Train: 44 [ 150/156 ( 97%)] Loss: 3.17 (3.02) Time: 0.406s, 2519.13/s (0.416s, 2462.01/s) LR: 3.209e-01 Data: 0.026 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.509 (1.509) Loss: 2.620 ( 2.620) Acc@1: 44.531 ( 44.531) Acc@5: 68.945 ( 68.945) 
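Editor's note: the LR column over these epochs is consistent with a plain cosine decay stepped once per epoch, from a base rate of about 0.4 over 150 scheduled epochs; those two constants reproduce the printed values (3.458e-01 at epoch 36, 3.209e-01 at epoch 44, 3.000e-01 at epoch 50, 2.000e-01 at epoch 75). A minimal sketch under that assumption — the constants are inferred from the log, not read from the training configuration:

    import math

    # Inferred from the logged LR values; not taken from the actual training config.
    BASE_LR = 0.4
    SCHEDULED_EPOCHS = 150

    def cosine_lr(epoch: int) -> float:
        """Plain cosine decay to zero, stepped once per epoch."""
        return 0.5 * BASE_LR * (1.0 + math.cos(math.pi * epoch / SCHEDULED_EPOCHS))

    for e in (36, 44, 50, 75):
        print(f"epoch {e}: LR = {cosine_lr(e):.3e}")
    # epoch 36: LR = 3.458e-01
    # epoch 44: LR = 3.209e-01
    # epoch 50: LR = 3.000e-01
    # epoch 75: LR = 2.000e-01
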
Test: [ 48/48] Time: 0.090 (0.333) Loss: 2.483 ( 2.665) Acc@1: 46.934 ( 43.274) Acc@5: 70.755 ( 68.460) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-44.pth.tar', 43.27400002563476) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-43.pth.tar', 42.375999936523435) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-37.pth.tar', 41.95600006835937) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-41.pth.tar', 41.57600001342774) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-42.pth.tar', 41.454000034179685) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-33.pth.tar', 41.41400001708984) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-40.pth.tar', 41.04599999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-39.pth.tar', 40.963999943847654) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-38.pth.tar', 40.84599999145508) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-34.pth.tar', 40.386000031738284) Train: 45 [ 0/156 ( 1%)] Loss: 2.83 (2.83) Time: 1.868s, 548.24/s (1.868s, 548.24/s) LR: 3.176e-01 Data: 1.492 (1.492) Train: 45 [ 50/156 ( 33%)] Loss: 3.03 (2.95) Time: 0.409s, 2501.60/s (0.436s, 2348.05/s) LR: 3.176e-01 Data: 0.028 (0.056) Train: 45 [ 100/156 ( 65%)] Loss: 3.02 (2.98) Time: 0.414s, 2472.99/s (0.422s, 2425.76/s) LR: 3.176e-01 Data: 0.034 (0.042) Train: 45 [ 150/156 ( 97%)] Loss: 3.02 (3.00) Time: 0.406s, 2524.80/s (0.417s, 2454.04/s) LR: 3.176e-01 Data: 0.025 (0.037) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.430 (1.430) Loss: 2.766 ( 2.766) Acc@1: 42.285 ( 42.285) Acc@5: 67.773 ( 67.773) Test: [ 48/48] Time: 0.090 (0.330) Loss: 2.655 ( 2.816) Acc@1: 44.340 ( 41.376) Acc@5: 69.340 ( 66.368) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-44.pth.tar', 43.27400002563476) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-43.pth.tar', 42.375999936523435) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-37.pth.tar', 41.95600006835937) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-41.pth.tar', 41.57600001342774) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-42.pth.tar', 41.454000034179685) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-33.pth.tar', 41.41400001708984) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-45.pth.tar', 41.375999997558594) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-40.pth.tar', 41.04599999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-39.pth.tar', 40.963999943847654) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-38.pth.tar', 40.84599999145508) Train: 46 [ 0/156 ( 1%)] Loss: 2.95 (2.95) Time: 1.860s, 550.39/s (1.860s, 550.39/s) LR: 3.141e-01 Data: 1.485 (1.485) Train: 46 [ 50/156 ( 33%)] Loss: 3.06 (2.92) Time: 0.407s, 2513.43/s (0.437s, 2343.38/s) LR: 3.141e-01 Data: 0.027 (0.056) Train: 46 [ 100/156 ( 65%)] Loss: 2.97 (2.95) Time: 0.407s, 2518.21/s (0.423s, 2423.23/s) LR: 3.141e-01 Data: 0.026 (0.042) Train: 46 [ 150/156 ( 97%)] Loss: 3.05 (2.97) Time: 0.406s, 2524.69/s (0.418s, 2451.01/s) LR: 3.141e-01 Data: 0.026 (0.037) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.425 (1.425) Loss: 2.797 ( 2.797) Acc@1: 43.848 ( 43.848) Acc@5: 66.992 ( 66.992) Test: [ 48/48] Time: 0.089 (0.330) Loss: 2.622 ( 2.807) Acc@1: 43.986 ( 41.574) Acc@5: 68.868 ( 66.736) Current checkpoints: 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-44.pth.tar', 43.27400002563476) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-43.pth.tar', 42.375999936523435) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-37.pth.tar', 41.95600006835937) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-41.pth.tar', 41.57600001342774) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-46.pth.tar', 41.57399997314453) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-42.pth.tar', 41.454000034179685) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-33.pth.tar', 41.41400001708984) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-45.pth.tar', 41.375999997558594) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-40.pth.tar', 41.04599999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-39.pth.tar', 40.963999943847654) Train: 47 [ 0/156 ( 1%)] Loss: 2.87 (2.87) Time: 1.668s, 613.73/s (1.668s, 613.73/s) LR: 3.107e-01 Data: 1.175 (1.175) Train: 47 [ 50/156 ( 33%)] Loss: 2.97 (2.90) Time: 0.406s, 2522.34/s (0.432s, 2371.80/s) LR: 3.107e-01 Data: 0.026 (0.050) Train: 47 [ 100/156 ( 65%)] Loss: 2.89 (2.92) Time: 0.402s, 2544.67/s (0.418s, 2449.65/s) LR: 3.107e-01 Data: 0.027 (0.039) Train: 47 [ 150/156 ( 97%)] Loss: 2.98 (2.94) Time: 0.403s, 2539.50/s (0.413s, 2477.55/s) LR: 3.107e-01 Data: 0.026 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.455 (1.455) Loss: 2.736 ( 2.736) Acc@1: 45.215 ( 45.215) Acc@5: 67.090 ( 67.090) Test: [ 48/48] Time: 0.089 (0.331) Loss: 2.564 ( 2.742) Acc@1: 45.755 ( 42.704) Acc@5: 69.811 ( 67.520) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-44.pth.tar', 43.27400002563476) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-47.pth.tar', 42.70400003051758) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-43.pth.tar', 42.375999936523435) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-37.pth.tar', 41.95600006835937) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-41.pth.tar', 41.57600001342774) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-46.pth.tar', 41.57399997314453) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-42.pth.tar', 41.454000034179685) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-33.pth.tar', 41.41400001708984) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-45.pth.tar', 41.375999997558594) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-40.pth.tar', 41.04599999267578) Train: 48 [ 0/156 ( 1%)] Loss: 2.87 (2.87) Time: 1.740s, 588.58/s (1.740s, 588.58/s) LR: 3.072e-01 Data: 1.129 (1.129) Train: 48 [ 50/156 ( 33%)] Loss: 2.91 (2.88) Time: 0.409s, 2503.07/s (0.433s, 2365.45/s) LR: 3.072e-01 Data: 0.027 (0.048) Train: 48 [ 100/156 ( 65%)] Loss: 2.97 (2.90) Time: 0.410s, 2499.71/s (0.421s, 2432.93/s) LR: 3.072e-01 Data: 0.027 (0.038) Train: 48 [ 150/156 ( 97%)] Loss: 2.97 (2.92) Time: 0.403s, 2539.80/s (0.416s, 2459.38/s) LR: 3.072e-01 Data: 0.025 (0.034) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.440 (1.440) Loss: 2.791 ( 2.791) Acc@1: 43.652 ( 43.652) Acc@5: 67.578 ( 67.578) Test: [ 48/48] Time: 0.090 (0.329) Loss: 2.469 ( 2.749) Acc@1: 47.524 ( 42.512) Acc@5: 72.052 ( 67.610) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-44.pth.tar', 43.27400002563476) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-47.pth.tar', 42.70400003051758) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-48.pth.tar', 42.51200002319336) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-43.pth.tar', 42.375999936523435) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-37.pth.tar', 41.95600006835937) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-41.pth.tar', 41.57600001342774) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-46.pth.tar', 41.57399997314453) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-42.pth.tar', 41.454000034179685) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-33.pth.tar', 41.41400001708984) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-45.pth.tar', 41.375999997558594) Train: 49 [ 0/156 ( 1%)] Loss: 2.81 (2.81) Time: 1.931s, 530.35/s (1.931s, 530.35/s) LR: 3.036e-01 Data: 1.411 (1.411) Train: 49 [ 50/156 ( 33%)] Loss: 2.93 (2.85) Time: 0.414s, 2473.04/s (0.439s, 2333.51/s) LR: 3.036e-01 Data: 0.027 (0.055) Train: 49 [ 100/156 ( 65%)] Loss: 2.91 (2.87) Time: 0.407s, 2517.91/s (0.424s, 2415.45/s) LR: 3.036e-01 Data: 0.027 (0.041) Train: 49 [ 150/156 ( 97%)] Loss: 2.98 (2.89) Time: 0.405s, 2525.88/s (0.419s, 2446.09/s) LR: 3.036e-01 Data: 0.025 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.423 (1.423) Loss: 2.652 ( 2.652) Acc@1: 44.629 ( 44.629) Acc@5: 70.020 ( 70.020) Test: [ 48/48] Time: 0.090 (0.328) Loss: 2.512 ( 2.703) Acc@1: 47.170 ( 43.800) Acc@5: 71.698 ( 68.448) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-49.pth.tar', 43.80000006347656) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-44.pth.tar', 43.27400002563476) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-47.pth.tar', 42.70400003051758) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-48.pth.tar', 42.51200002319336) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-43.pth.tar', 42.375999936523435) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-37.pth.tar', 41.95600006835937) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-41.pth.tar', 41.57600001342774) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-46.pth.tar', 41.57399997314453) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-42.pth.tar', 41.454000034179685) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-33.pth.tar', 41.41400001708984) Train: 50 [ 0/156 ( 1%)] Loss: 2.94 (2.94) Time: 1.584s, 646.57/s (1.584s, 646.57/s) LR: 3.000e-01 Data: 1.185 (1.185) Train: 50 [ 50/156 ( 33%)] Loss: 2.89 (2.82) Time: 0.411s, 2492.51/s (0.433s, 2367.35/s) LR: 3.000e-01 Data: 0.028 (0.050) Train: 50 [ 100/156 ( 65%)] Loss: 2.92 (2.85) Time: 0.407s, 2513.37/s (0.420s, 2436.30/s) LR: 3.000e-01 Data: 0.027 (0.039) Train: 50 [ 150/156 ( 97%)] Loss: 2.93 (2.87) Time: 0.407s, 2517.05/s (0.417s, 2458.04/s) LR: 3.000e-01 Data: 0.025 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.477 (1.477) Loss: 2.707 ( 2.707) Acc@1: 41.895 ( 41.895) Acc@5: 68.555 ( 68.555) Test: [ 48/48] Time: 0.089 (0.329) Loss: 2.576 ( 2.799) Acc@1: 44.929 ( 41.970) Acc@5: 69.340 ( 66.724) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-49.pth.tar', 43.80000006347656) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-44.pth.tar', 43.27400002563476) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-47.pth.tar', 42.70400003051758) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-48.pth.tar', 42.51200002319336) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-43.pth.tar', 42.375999936523435) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-50.pth.tar', 41.969999995117185) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-37.pth.tar', 41.95600006835937) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-41.pth.tar', 41.57600001342774) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-46.pth.tar', 41.57399997314453) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-42.pth.tar', 41.454000034179685) Train: 51 [ 0/156 ( 1%)] Loss: 2.74 (2.74) Time: 1.628s, 629.10/s (1.628s, 629.10/s) LR: 2.964e-01 Data: 1.254 (1.254) Train: 51 [ 50/156 ( 33%)] Loss: 2.81 (2.81) Time: 0.416s, 2461.58/s (0.433s, 2367.17/s) LR: 2.964e-01 Data: 0.027 (0.052) Train: 51 [ 100/156 ( 65%)] Loss: 2.89 (2.83) Time: 0.407s, 2513.44/s (0.420s, 2435.52/s) LR: 2.964e-01 Data: 0.027 (0.039) Train: 51 [ 150/156 ( 97%)] Loss: 2.89 (2.85) Time: 0.402s, 2544.81/s (0.416s, 2463.67/s) LR: 2.964e-01 Data: 0.026 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.457 (1.457) Loss: 2.697 ( 2.697) Acc@1: 43.848 ( 43.848) Acc@5: 69.043 ( 69.043) Test: [ 48/48] Time: 0.090 (0.329) Loss: 2.468 ( 2.698) Acc@1: 45.401 ( 43.590) Acc@5: 72.524 ( 68.414) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-49.pth.tar', 43.80000006347656) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-51.pth.tar', 43.590000006103516) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-44.pth.tar', 43.27400002563476) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-47.pth.tar', 42.70400003051758) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-48.pth.tar', 42.51200002319336) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-43.pth.tar', 42.375999936523435) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-50.pth.tar', 41.969999995117185) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-37.pth.tar', 41.95600006835937) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-41.pth.tar', 41.57600001342774) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-46.pth.tar', 41.57399997314453) Train: 52 [ 0/156 ( 1%)] Loss: 2.70 (2.70) Time: 1.583s, 646.69/s (1.583s, 646.69/s) LR: 2.927e-01 Data: 1.211 (1.211) Train: 52 [ 50/156 ( 33%)] Loss: 2.84 (2.77) Time: 0.408s, 2512.33/s (0.429s, 2385.99/s) LR: 2.927e-01 Data: 0.026 (0.050) Train: 52 [ 100/156 ( 65%)] Loss: 2.86 (2.80) Time: 0.409s, 2503.82/s (0.419s, 2441.22/s) LR: 2.927e-01 Data: 0.026 (0.039) Train: 52 [ 150/156 ( 97%)] Loss: 2.85 (2.82) Time: 0.410s, 2496.61/s (0.416s, 2460.84/s) LR: 2.927e-01 Data: 0.026 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.425 (1.425) Loss: 2.622 ( 2.622) Acc@1: 45.996 ( 45.996) Acc@5: 69.922 ( 69.922) Test: [ 48/48] Time: 0.090 (0.328) Loss: 2.507 ( 2.684) Acc@1: 47.524 ( 44.308) Acc@5: 70.519 ( 68.764) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-52.pth.tar', 44.308000087890626) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-49.pth.tar', 43.80000006347656) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-51.pth.tar', 43.590000006103516) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-44.pth.tar', 43.27400002563476) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-47.pth.tar', 42.70400003051758) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-48.pth.tar', 42.51200002319336) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-43.pth.tar', 42.375999936523435) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-50.pth.tar', 41.969999995117185) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-37.pth.tar', 41.95600006835937) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-41.pth.tar', 41.57600001342774) Train: 53 [ 0/156 ( 1%)] Loss: 2.75 (2.75) Time: 1.836s, 557.71/s (1.836s, 557.71/s) LR: 2.889e-01 Data: 1.462 (1.462) Train: 53 [ 50/156 ( 33%)] Loss: 2.78 (2.75) Time: 0.410s, 2497.78/s (0.436s, 2350.54/s) LR: 2.889e-01 Data: 0.028 (0.055) Train: 53 [ 100/156 ( 65%)] Loss: 2.85 (2.78) Time: 0.407s, 2513.03/s (0.422s, 2427.84/s) LR: 2.889e-01 Data: 0.028 (0.041) Train: 53 [ 150/156 ( 97%)] Loss: 2.89 (2.80) Time: 0.402s, 2545.04/s (0.417s, 2456.76/s) LR: 2.889e-01 Data: 0.026 (0.037) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.436 (1.436) Loss: 2.654 ( 2.654) Acc@1: 45.410 ( 45.410) Acc@5: 70.508 ( 70.508) Test: [ 48/48] Time: 0.089 (0.330) Loss: 2.448 ( 2.681) Acc@1: 47.759 ( 44.078) Acc@5: 72.288 ( 68.594) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-52.pth.tar', 44.308000087890626) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-53.pth.tar', 44.077999996337894) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-49.pth.tar', 43.80000006347656) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-51.pth.tar', 43.590000006103516) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-44.pth.tar', 43.27400002563476) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-47.pth.tar', 42.70400003051758) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-48.pth.tar', 42.51200002319336) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-43.pth.tar', 42.375999936523435) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-50.pth.tar', 41.969999995117185) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-37.pth.tar', 41.95600006835937) Train: 54 [ 0/156 ( 1%)] Loss: 2.71 (2.71) Time: 1.585s, 646.23/s (1.585s, 646.23/s) LR: 2.852e-01 Data: 1.216 (1.216) Train: 54 [ 50/156 ( 33%)] Loss: 2.68 (2.73) Time: 0.402s, 2550.01/s (0.429s, 2384.87/s) LR: 2.852e-01 Data: 0.027 (0.050) Train: 54 [ 100/156 ( 65%)] Loss: 2.81 (2.75) Time: 0.404s, 2533.14/s (0.416s, 2461.24/s) LR: 2.852e-01 Data: 0.027 (0.039) Train: 54 [ 150/156 ( 97%)] Loss: 2.86 (2.77) Time: 0.405s, 2530.79/s (0.413s, 2480.28/s) LR: 2.852e-01 Data: 0.025 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.435 (1.435) Loss: 2.624 ( 2.624) Acc@1: 45.410 ( 45.410) Acc@5: 69.922 ( 69.922) Test: [ 48/48] Time: 0.089 (0.329) Loss: 2.535 ( 2.690) Acc@1: 45.283 ( 44.180) Acc@5: 70.283 ( 68.762) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-52.pth.tar', 44.308000087890626) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-54.pth.tar', 44.18000001953125) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-53.pth.tar', 44.077999996337894) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-49.pth.tar', 43.80000006347656) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-51.pth.tar', 43.590000006103516) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-44.pth.tar', 43.27400002563476) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-47.pth.tar', 42.70400003051758) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-48.pth.tar', 42.51200002319336) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-43.pth.tar', 42.375999936523435) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-50.pth.tar', 41.969999995117185) Train: 55 [ 0/156 ( 1%)] Loss: 2.71 (2.71) Time: 1.843s, 555.48/s (1.843s, 555.48/s) LR: 2.813e-01 Data: 1.315 (1.315) Train: 55 [ 50/156 ( 33%)] Loss: 2.67 (2.71) Time: 0.415s, 2466.04/s (0.438s, 2339.96/s) LR: 2.813e-01 Data: 0.032 (0.052) Train: 55 [ 100/156 ( 65%)] Loss: 2.72 (2.73) Time: 0.406s, 2521.19/s (0.422s, 2425.31/s) LR: 2.813e-01 Data: 0.028 (0.040) Train: 55 [ 150/156 ( 97%)] Loss: 2.75 (2.75) Time: 0.404s, 2536.67/s (0.416s, 2459.34/s) LR: 2.813e-01 Data: 0.026 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.441 (1.441) Loss: 2.649 ( 2.649) Acc@1: 43.750 ( 43.750) Acc@5: 69.043 ( 69.043) Test: [ 48/48] Time: 0.089 (0.329) Loss: 2.504 ( 2.690) Acc@1: 48.349 ( 43.970) Acc@5: 71.580 ( 68.656) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-52.pth.tar', 44.308000087890626) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-54.pth.tar', 44.18000001953125) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-53.pth.tar', 44.077999996337894) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-55.pth.tar', 43.97000005859375) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-49.pth.tar', 43.80000006347656) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-51.pth.tar', 43.590000006103516) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-44.pth.tar', 43.27400002563476) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-47.pth.tar', 42.70400003051758) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-48.pth.tar', 42.51200002319336) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-43.pth.tar', 42.375999936523435) Train: 56 [ 0/156 ( 1%)] Loss: 2.64 (2.64) Time: 2.040s, 502.03/s (2.040s, 502.03/s) LR: 2.775e-01 Data: 1.360 (1.360) Train: 56 [ 50/156 ( 33%)] Loss: 2.72 (2.68) Time: 0.409s, 2503.41/s (0.438s, 2339.41/s) LR: 2.775e-01 Data: 0.027 (0.053) Train: 56 [ 100/156 ( 65%)] Loss: 2.75 (2.71) Time: 0.409s, 2502.34/s (0.424s, 2417.05/s) LR: 2.775e-01 Data: 0.028 (0.040) Train: 56 [ 150/156 ( 97%)] Loss: 2.78 (2.73) Time: 0.400s, 2557.95/s (0.417s, 2453.88/s) LR: 2.775e-01 Data: 0.026 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.413 (1.413) Loss: 2.665 ( 2.665) Acc@1: 46.094 ( 46.094) Acc@5: 68.945 ( 68.945) Test: [ 48/48] Time: 0.089 (0.328) Loss: 2.600 ( 2.673) Acc@1: 43.750 ( 44.170) Acc@5: 70.283 ( 69.068) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-52.pth.tar', 44.308000087890626) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-54.pth.tar', 44.18000001953125) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-56.pth.tar', 44.17) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-53.pth.tar', 44.077999996337894) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-55.pth.tar', 43.97000005859375) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-49.pth.tar', 
43.80000006347656) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-51.pth.tar', 43.590000006103516) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-44.pth.tar', 43.27400002563476) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-47.pth.tar', 42.70400003051758) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-48.pth.tar', 42.51200002319336) Train: 57 [ 0/156 ( 1%)] Loss: 2.57 (2.57) Time: 1.888s, 542.43/s (1.888s, 542.43/s) LR: 2.736e-01 Data: 1.521 (1.521) Train: 57 [ 50/156 ( 33%)] Loss: 2.66 (2.66) Time: 0.401s, 2555.02/s (0.427s, 2397.46/s) LR: 2.736e-01 Data: 0.028 (0.056) Train: 57 [ 100/156 ( 65%)] Loss: 2.69 (2.69) Time: 0.400s, 2558.41/s (0.413s, 2476.83/s) LR: 2.736e-01 Data: 0.028 (0.042) Train: 57 [ 150/156 ( 97%)] Loss: 2.68 (2.70) Time: 0.400s, 2559.07/s (0.409s, 2501.31/s) LR: 2.736e-01 Data: 0.025 (0.037) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.458 (1.458) Loss: 2.693 ( 2.693) Acc@1: 45.117 ( 45.117) Acc@5: 68.652 ( 68.652) Test: [ 48/48] Time: 0.089 (0.333) Loss: 2.519 ( 2.710) Acc@1: 46.580 ( 43.812) Acc@5: 70.991 ( 68.414) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-52.pth.tar', 44.308000087890626) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-54.pth.tar', 44.18000001953125) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-56.pth.tar', 44.17) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-53.pth.tar', 44.077999996337894) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-55.pth.tar', 43.97000005859375) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-57.pth.tar', 43.8120000012207) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-49.pth.tar', 43.80000006347656) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-51.pth.tar', 43.590000006103516) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-44.pth.tar', 43.27400002563476) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-47.pth.tar', 42.70400003051758) Train: 58 [ 0/156 ( 1%)] Loss: 2.66 (2.66) Time: 1.784s, 574.03/s (1.784s, 574.03/s) LR: 2.697e-01 Data: 1.412 (1.412) Train: 58 [ 50/156 ( 33%)] Loss: 2.66 (2.67) Time: 0.405s, 2531.16/s (0.431s, 2374.19/s) LR: 2.697e-01 Data: 0.028 (0.054) Train: 58 [ 100/156 ( 65%)] Loss: 2.63 (2.67) Time: 0.411s, 2494.18/s (0.418s, 2449.31/s) LR: 2.697e-01 Data: 0.032 (0.041) Train: 58 [ 150/156 ( 97%)] Loss: 2.81 (2.69) Time: 0.407s, 2515.46/s (0.415s, 2469.40/s) LR: 2.697e-01 Data: 0.025 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.487 (1.487) Loss: 2.637 ( 2.637) Acc@1: 45.410 ( 45.410) Acc@5: 68.750 ( 68.750) Test: [ 48/48] Time: 0.090 (0.329) Loss: 2.545 ( 2.725) Acc@1: 46.462 ( 43.444) Acc@5: 70.991 ( 68.012) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-52.pth.tar', 44.308000087890626) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-54.pth.tar', 44.18000001953125) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-56.pth.tar', 44.17) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-53.pth.tar', 44.077999996337894) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-55.pth.tar', 43.97000005859375) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-57.pth.tar', 43.8120000012207) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-49.pth.tar', 43.80000006347656) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-51.pth.tar', 
43.590000006103516) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-58.pth.tar', 43.44400001464844) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-44.pth.tar', 43.27400002563476) Train: 59 [ 0/156 ( 1%)] Loss: 2.51 (2.51) Time: 1.605s, 637.87/s (1.605s, 637.87/s) LR: 2.658e-01 Data: 1.230 (1.230) Train: 59 [ 50/156 ( 33%)] Loss: 2.63 (2.61) Time: 0.408s, 2512.54/s (0.432s, 2371.62/s) LR: 2.658e-01 Data: 0.027 (0.051) Train: 59 [ 100/156 ( 65%)] Loss: 2.69 (2.64) Time: 0.408s, 2508.21/s (0.420s, 2436.88/s) LR: 2.658e-01 Data: 0.025 (0.039) Train: 59 [ 150/156 ( 97%)] Loss: 2.65 (2.66) Time: 0.406s, 2519.62/s (0.416s, 2459.34/s) LR: 2.658e-01 Data: 0.026 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.503 (1.503) Loss: 2.686 ( 2.686) Acc@1: 44.727 ( 44.727) Acc@5: 68.555 ( 68.555) Test: [ 48/48] Time: 0.089 (0.328) Loss: 2.566 ( 2.734) Acc@1: 46.108 ( 43.830) Acc@5: 70.401 ( 68.236) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-52.pth.tar', 44.308000087890626) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-54.pth.tar', 44.18000001953125) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-56.pth.tar', 44.17) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-53.pth.tar', 44.077999996337894) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-55.pth.tar', 43.97000005859375) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-59.pth.tar', 43.82999999023438) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-57.pth.tar', 43.8120000012207) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-49.pth.tar', 43.80000006347656) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-51.pth.tar', 43.590000006103516) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-58.pth.tar', 43.44400001464844) Train: 60 [ 0/156 ( 1%)] Loss: 2.59 (2.59) Time: 1.934s, 529.41/s (1.934s, 529.41/s) LR: 2.618e-01 Data: 1.563 (1.563) Train: 60 [ 50/156 ( 33%)] Loss: 2.66 (2.61) Time: 0.405s, 2530.02/s (0.433s, 2365.41/s) LR: 2.618e-01 Data: 0.028 (0.058) Train: 60 [ 100/156 ( 65%)] Loss: 2.57 (2.63) Time: 0.406s, 2519.50/s (0.419s, 2444.17/s) LR: 2.618e-01 Data: 0.027 (0.043) Train: 60 [ 150/156 ( 97%)] Loss: 2.67 (2.65) Time: 0.406s, 2521.41/s (0.415s, 2466.27/s) LR: 2.618e-01 Data: 0.026 (0.038) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.452 (1.452) Loss: 2.699 ( 2.699) Acc@1: 45.703 ( 45.703) Acc@5: 69.336 ( 69.336) Test: [ 48/48] Time: 0.090 (0.334) Loss: 2.500 ( 2.773) Acc@1: 47.406 ( 43.058) Acc@5: 72.170 ( 67.746) Train: 61 [ 0/156 ( 1%)] Loss: 2.50 (2.50) Time: 1.619s, 632.40/s (1.619s, 632.40/s) LR: 2.578e-01 Data: 1.158 (1.158) Train: 61 [ 50/156 ( 33%)] Loss: 2.63 (2.58) Time: 0.410s, 2498.05/s (0.431s, 2374.20/s) LR: 2.578e-01 Data: 0.031 (0.049) Train: 61 [ 100/156 ( 65%)] Loss: 2.69 (2.61) Time: 0.404s, 2534.82/s (0.419s, 2444.18/s) LR: 2.578e-01 Data: 0.026 (0.038) Train: 61 [ 150/156 ( 97%)] Loss: 2.65 (2.62) Time: 0.407s, 2516.56/s (0.415s, 2466.03/s) LR: 2.578e-01 Data: 0.025 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.429 (1.429) Loss: 2.649 ( 2.649) Acc@1: 46.289 ( 46.289) Acc@5: 68.555 ( 68.555) Test: [ 48/48] Time: 0.089 (0.331) Loss: 2.467 ( 2.721) Acc@1: 48.939 ( 44.004) Acc@5: 72.877 ( 68.504) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-52.pth.tar', 44.308000087890626) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-54.pth.tar', 44.18000001953125) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-56.pth.tar', 44.17) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-53.pth.tar', 44.077999996337894) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-61.pth.tar', 44.00400005615234) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-55.pth.tar', 43.97000005859375) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-59.pth.tar', 43.82999999023438) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-57.pth.tar', 43.8120000012207) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-49.pth.tar', 43.80000006347656) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-51.pth.tar', 43.590000006103516) Train: 62 [ 0/156 ( 1%)] Loss: 2.60 (2.60) Time: 1.681s, 609.04/s (1.681s, 609.04/s) LR: 2.538e-01 Data: 1.307 (1.307) Train: 62 [ 50/156 ( 33%)] Loss: 2.54 (2.56) Time: 0.406s, 2522.27/s (0.432s, 2367.96/s) LR: 2.538e-01 Data: 0.026 (0.052) Train: 62 [ 100/156 ( 65%)] Loss: 2.62 (2.58) Time: 0.403s, 2538.94/s (0.419s, 2442.53/s) LR: 2.538e-01 Data: 0.027 (0.040) Train: 62 [ 150/156 ( 97%)] Loss: 2.63 (2.60) Time: 0.405s, 2526.81/s (0.415s, 2468.46/s) LR: 2.538e-01 Data: 0.025 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.422 (1.422) Loss: 2.684 ( 2.684) Acc@1: 45.605 ( 45.605) Acc@5: 68.164 ( 68.164) Test: [ 48/48] Time: 0.090 (0.327) Loss: 2.545 ( 2.718) Acc@1: 45.755 ( 44.192) Acc@5: 69.693 ( 68.370) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-52.pth.tar', 44.308000087890626) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-62.pth.tar', 44.192000030517576) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-54.pth.tar', 44.18000001953125) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-56.pth.tar', 44.17) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-53.pth.tar', 44.077999996337894) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-61.pth.tar', 44.00400005615234) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-55.pth.tar', 43.97000005859375) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-59.pth.tar', 43.82999999023438) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-57.pth.tar', 43.8120000012207) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-49.pth.tar', 43.80000006347656) Train: 63 [ 0/156 ( 1%)] Loss: 2.46 (2.46) Time: 1.513s, 676.67/s (1.513s, 676.67/s) LR: 2.497e-01 Data: 1.101 (1.101) Train: 63 [ 50/156 ( 33%)] Loss: 2.62 (2.54) Time: 0.407s, 2515.16/s (0.430s, 2382.67/s) LR: 2.497e-01 Data: 0.027 (0.048) Train: 63 [ 100/156 ( 65%)] Loss: 2.67 (2.57) Time: 0.405s, 2526.53/s (0.417s, 2454.51/s) LR: 2.497e-01 Data: 0.031 (0.038) Train: 63 [ 150/156 ( 97%)] Loss: 2.64 (2.58) Time: 0.400s, 2562.05/s (0.412s, 2483.45/s) LR: 2.497e-01 Data: 0.026 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.461 (1.461) Loss: 2.714 ( 2.714) Acc@1: 43.555 ( 43.555) Acc@5: 68.359 ( 68.359) Test: [ 48/48] Time: 0.089 (0.331) Loss: 2.494 ( 2.709) Acc@1: 47.524 ( 44.152) Acc@5: 71.226 ( 68.470) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-52.pth.tar', 44.308000087890626) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-62.pth.tar', 44.192000030517576) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-54.pth.tar', 44.18000001953125) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-56.pth.tar', 44.17) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-63.pth.tar', 44.15199995849609) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-53.pth.tar', 44.077999996337894) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-61.pth.tar', 44.00400005615234) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-55.pth.tar', 43.97000005859375) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-59.pth.tar', 43.82999999023438) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-57.pth.tar', 43.8120000012207) Train: 64 [ 0/156 ( 1%)] Loss: 2.51 (2.51) Time: 1.739s, 588.81/s (1.739s, 588.81/s) LR: 2.457e-01 Data: 1.095 (1.095) Train: 64 [ 50/156 ( 33%)] Loss: 2.53 (2.51) Time: 0.404s, 2534.71/s (0.432s, 2370.87/s) LR: 2.457e-01 Data: 0.028 (0.049) Train: 64 [ 100/156 ( 65%)] Loss: 2.56 (2.53) Time: 0.408s, 2512.54/s (0.420s, 2438.38/s) LR: 2.457e-01 Data: 0.027 (0.038) Train: 64 [ 150/156 ( 97%)] Loss: 2.60 (2.55) Time: 0.405s, 2527.48/s (0.416s, 2459.47/s) LR: 2.457e-01 Data: 0.025 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.437 (1.437) Loss: 2.696 ( 2.696) Acc@1: 44.043 ( 44.043) Acc@5: 68.457 ( 68.457) Test: [ 48/48] Time: 0.089 (0.330) Loss: 2.592 ( 2.773) Acc@1: 45.755 ( 43.658) Acc@5: 70.165 ( 67.638) Train: 65 [ 0/156 ( 1%)] Loss: 2.43 (2.43) Time: 1.701s, 602.08/s (1.701s, 602.08/s) LR: 2.416e-01 Data: 1.217 (1.217) Train: 65 [ 50/156 ( 33%)] Loss: 2.47 (2.49) Time: 0.411s, 2488.73/s (0.435s, 2353.64/s) LR: 2.416e-01 Data: 0.032 (0.050) Train: 65 [ 100/156 ( 65%)] Loss: 2.53 (2.52) Time: 0.408s, 2508.10/s (0.422s, 2425.34/s) LR: 2.416e-01 Data: 0.028 (0.039) Train: 65 [ 150/156 ( 97%)] Loss: 2.52 (2.54) Time: 0.406s, 2523.01/s (0.418s, 2451.27/s) LR: 2.416e-01 Data: 0.026 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.440 (1.440) Loss: 2.659 ( 2.659) Acc@1: 44.043 ( 44.043) Acc@5: 69.434 ( 69.434) Test: [ 48/48] Time: 0.090 (0.330) Loss: 2.456 ( 2.705) Acc@1: 48.585 ( 44.312) Acc@5: 73.939 ( 68.874) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-65.pth.tar', 44.31199996704102) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-52.pth.tar', 44.308000087890626) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-62.pth.tar', 44.192000030517576) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-54.pth.tar', 44.18000001953125) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-56.pth.tar', 44.17) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-63.pth.tar', 44.15199995849609) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-53.pth.tar', 44.077999996337894) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-61.pth.tar', 44.00400005615234) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-55.pth.tar', 43.97000005859375) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-59.pth.tar', 43.82999999023438) Train: 66 [ 0/156 ( 1%)] Loss: 2.49 (2.49) Time: 1.709s, 599.20/s (1.709s, 599.20/s) LR: 2.375e-01 Data: 1.333 (1.333) Train: 66 [ 50/156 ( 33%)] Loss: 2.54 (2.49) Time: 0.406s, 2524.08/s (0.434s, 2359.45/s) LR: 2.375e-01 Data: 0.026 (0.053) Train: 66 [ 100/156 ( 65%)] Loss: 2.41 (2.51) Time: 0.404s, 2535.39/s (0.420s, 2437.64/s) LR: 2.375e-01 Data: 0.027 (0.040) Train: 66 [ 150/156 ( 97%)] Loss: 2.58 (2.52) Time: 0.405s, 2531.37/s (0.416s, 2464.21/s) LR: 2.375e-01 Data: 0.025 
(0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.456 (1.456) Loss: 2.646 ( 2.646) Acc@1: 46.191 ( 46.191) Acc@5: 70.508 ( 70.508) Test: [ 48/48] Time: 0.089 (0.331) Loss: 2.472 ( 2.725) Acc@1: 48.585 ( 44.624) Acc@5: 72.288 ( 68.672) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-66.pth.tar', 44.623999967041016) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-65.pth.tar', 44.31199996704102) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-52.pth.tar', 44.308000087890626) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-62.pth.tar', 44.192000030517576) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-54.pth.tar', 44.18000001953125) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-56.pth.tar', 44.17) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-63.pth.tar', 44.15199995849609) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-53.pth.tar', 44.077999996337894) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-61.pth.tar', 44.00400005615234) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-55.pth.tar', 43.97000005859375) Train: 67 [ 0/156 ( 1%)] Loss: 2.50 (2.50) Time: 1.938s, 528.45/s (1.938s, 528.45/s) LR: 2.334e-01 Data: 1.561 (1.561) Train: 67 [ 50/156 ( 33%)] Loss: 2.49 (2.47) Time: 0.407s, 2512.93/s (0.438s, 2336.90/s) LR: 2.334e-01 Data: 0.027 (0.058) Train: 67 [ 100/156 ( 65%)] Loss: 2.54 (2.49) Time: 0.403s, 2537.90/s (0.422s, 2429.07/s) LR: 2.334e-01 Data: 0.027 (0.043) Train: 67 [ 150/156 ( 97%)] Loss: 2.60 (2.50) Time: 0.401s, 2556.37/s (0.415s, 2465.59/s) LR: 2.334e-01 Data: 0.025 (0.038) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.431 (1.431) Loss: 2.625 ( 2.625) Acc@1: 47.461 ( 47.461) Acc@5: 70.508 ( 70.508) Test: [ 48/48] Time: 0.089 (0.331) Loss: 2.450 ( 2.681) Acc@1: 48.467 ( 44.900) Acc@5: 72.642 ( 69.186) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-67.pth.tar', 44.90000004516602) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-66.pth.tar', 44.623999967041016) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-65.pth.tar', 44.31199996704102) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-52.pth.tar', 44.308000087890626) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-62.pth.tar', 44.192000030517576) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-54.pth.tar', 44.18000001953125) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-56.pth.tar', 44.17) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-63.pth.tar', 44.15199995849609) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-53.pth.tar', 44.077999996337894) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-61.pth.tar', 44.00400005615234) Train: 68 [ 0/156 ( 1%)] Loss: 2.39 (2.39) Time: 1.604s, 638.44/s (1.604s, 638.44/s) LR: 2.292e-01 Data: 1.233 (1.233) Train: 68 [ 50/156 ( 33%)] Loss: 2.53 (2.44) Time: 0.407s, 2515.43/s (0.429s, 2384.55/s) LR: 2.292e-01 Data: 0.027 (0.051) Train: 68 [ 100/156 ( 65%)] Loss: 2.48 (2.47) Time: 0.408s, 2510.62/s (0.419s, 2443.14/s) LR: 2.292e-01 Data: 0.028 (0.039) Train: 68 [ 150/156 ( 97%)] Loss: 2.53 (2.49) Time: 0.401s, 2554.11/s (0.414s, 2470.73/s) LR: 2.292e-01 Data: 0.026 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.465 (1.465) Loss: 2.682 ( 2.682) Acc@1: 45.703 ( 45.703) Acc@5: 69.336 ( 69.336) Test: [ 48/48] Time: 
0.089 (0.332) Loss: 2.502 ( 2.706) Acc@1: 47.288 ( 44.480) Acc@5: 71.698 ( 68.654) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-67.pth.tar', 44.90000004516602) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-66.pth.tar', 44.623999967041016) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-68.pth.tar', 44.47999998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-65.pth.tar', 44.31199996704102) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-52.pth.tar', 44.308000087890626) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-62.pth.tar', 44.192000030517576) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-54.pth.tar', 44.18000001953125) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-56.pth.tar', 44.17) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-63.pth.tar', 44.15199995849609) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-53.pth.tar', 44.077999996337894) Train: 69 [ 0/156 ( 1%)] Loss: 2.38 (2.38) Time: 1.538s, 665.60/s (1.538s, 665.60/s) LR: 2.251e-01 Data: 1.090 (1.090) Train: 69 [ 50/156 ( 33%)] Loss: 2.42 (2.43) Time: 0.404s, 2534.48/s (0.424s, 2416.77/s) LR: 2.251e-01 Data: 0.027 (0.048) Train: 69 [ 100/156 ( 65%)] Loss: 2.53 (2.44) Time: 0.404s, 2537.10/s (0.414s, 2471.69/s) LR: 2.251e-01 Data: 0.026 (0.038) Train: 69 [ 150/156 ( 97%)] Loss: 2.54 (2.47) Time: 0.406s, 2522.37/s (0.412s, 2485.85/s) LR: 2.251e-01 Data: 0.025 (0.034) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.431 (1.431) Loss: 2.715 ( 2.715) Acc@1: 45.508 ( 45.508) Acc@5: 69.629 ( 69.629) Test: [ 48/48] Time: 0.090 (0.331) Loss: 2.493 ( 2.742) Acc@1: 47.759 ( 44.460) Acc@5: 72.995 ( 68.246) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-67.pth.tar', 44.90000004516602) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-66.pth.tar', 44.623999967041016) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-68.pth.tar', 44.47999998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-69.pth.tar', 44.45999999633789) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-65.pth.tar', 44.31199996704102) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-52.pth.tar', 44.308000087890626) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-62.pth.tar', 44.192000030517576) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-54.pth.tar', 44.18000001953125) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-56.pth.tar', 44.17) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-63.pth.tar', 44.15199995849609) Train: 70 [ 0/156 ( 1%)] Loss: 2.47 (2.47) Time: 2.062s, 496.65/s (2.062s, 496.65/s) LR: 2.209e-01 Data: 1.687 (1.687) Train: 70 [ 50/156 ( 33%)] Loss: 2.34 (2.41) Time: 0.413s, 2478.99/s (0.442s, 2314.55/s) LR: 2.209e-01 Data: 0.028 (0.061) Train: 70 [ 100/156 ( 65%)] Loss: 2.48 (2.44) Time: 0.409s, 2506.50/s (0.426s, 2405.77/s) LR: 2.209e-01 Data: 0.027 (0.044) Train: 70 [ 150/156 ( 97%)] Loss: 2.48 (2.45) Time: 0.408s, 2511.17/s (0.420s, 2437.95/s) LR: 2.209e-01 Data: 0.025 (0.039) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.452 (1.452) Loss: 2.758 ( 2.758) Acc@1: 45.312 ( 45.312) Acc@5: 68.262 ( 68.262) Test: [ 48/48] Time: 0.089 (0.331) Loss: 2.577 ( 2.822) Acc@1: 47.170 ( 43.240) Acc@5: 72.052 ( 67.016) Train: 71 [ 0/156 ( 1%)] Loss: 2.42 (2.42) Time: 1.892s, 541.34/s (1.892s, 541.34/s) LR: 2.167e-01 
Data: 1.517 (1.517) Train: 71 [ 50/156 ( 33%)] Loss: 2.40 (2.39) Time: 0.409s, 2502.14/s (0.442s, 2318.54/s) LR: 2.167e-01 Data: 0.026 (0.061) Train: 71 [ 100/156 ( 65%)] Loss: 2.50 (2.42) Time: 0.415s, 2468.36/s (0.426s, 2404.24/s) LR: 2.167e-01 Data: 0.033 (0.044) Train: 71 [ 150/156 ( 97%)] Loss: 2.40 (2.43) Time: 0.404s, 2533.93/s (0.420s, 2437.89/s) LR: 2.167e-01 Data: 0.025 (0.039) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.427 (1.427) Loss: 2.722 ( 2.722) Acc@1: 45.312 ( 45.312) Acc@5: 68.555 ( 68.555) Test: [ 48/48] Time: 0.090 (0.329) Loss: 2.554 ( 2.704) Acc@1: 49.646 ( 44.630) Acc@5: 72.759 ( 69.158) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-67.pth.tar', 44.90000004516602) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-71.pth.tar', 44.62999997558594) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-66.pth.tar', 44.623999967041016) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-68.pth.tar', 44.47999998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-69.pth.tar', 44.45999999633789) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-65.pth.tar', 44.31199996704102) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-52.pth.tar', 44.308000087890626) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-62.pth.tar', 44.192000030517576) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-54.pth.tar', 44.18000001953125) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-56.pth.tar', 44.17) Train: 72 [ 0/156 ( 1%)] Loss: 2.28 (2.28) Time: 1.451s, 705.66/s (1.451s, 705.66/s) LR: 2.126e-01 Data: 1.040 (1.040) Train: 72 [ 50/156 ( 33%)] Loss: 2.42 (2.38) Time: 0.408s, 2508.40/s (0.430s, 2382.20/s) LR: 2.126e-01 Data: 0.027 (0.048) Train: 72 [ 100/156 ( 65%)] Loss: 2.37 (2.40) Time: 0.409s, 2500.89/s (0.419s, 2442.85/s) LR: 2.126e-01 Data: 0.027 (0.038) Train: 72 [ 150/156 ( 97%)] Loss: 2.49 (2.41) Time: 0.404s, 2537.11/s (0.416s, 2463.18/s) LR: 2.126e-01 Data: 0.025 (0.034) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.434 (1.434) Loss: 2.634 ( 2.634) Acc@1: 45.801 ( 45.801) Acc@5: 71.289 ( 71.289) Test: [ 48/48] Time: 0.089 (0.328) Loss: 2.500 ( 2.706) Acc@1: 48.467 ( 45.070) Acc@5: 73.349 ( 68.994) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-72.pth.tar', 45.07000004516602) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-67.pth.tar', 44.90000004516602) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-71.pth.tar', 44.62999997558594) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-66.pth.tar', 44.623999967041016) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-68.pth.tar', 44.47999998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-69.pth.tar', 44.45999999633789) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-65.pth.tar', 44.31199996704102) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-52.pth.tar', 44.308000087890626) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-62.pth.tar', 44.192000030517576) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-54.pth.tar', 44.18000001953125) Train: 73 [ 0/156 ( 1%)] Loss: 2.35 (2.35) Time: 1.752s, 584.46/s (1.752s, 584.46/s) LR: 2.084e-01 Data: 1.094 (1.094) Train: 73 [ 50/156 ( 33%)] Loss: 2.37 (2.36) Time: 0.409s, 2503.91/s (0.433s, 2366.03/s) LR: 2.084e-01 Data: 0.027 (0.048) Train: 73 [ 100/156 ( 65%)] 
Loss: 2.49 (2.38) Time: 0.409s, 2503.34/s (0.421s, 2432.62/s) LR: 2.084e-01 Data: 0.027 (0.038) Train: 73 [ 150/156 ( 97%)] Loss: 2.45 (2.40) Time: 0.406s, 2521.14/s (0.417s, 2455.02/s) LR: 2.084e-01 Data: 0.025 (0.034) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.445 (1.445) Loss: 2.597 ( 2.597) Acc@1: 46.777 ( 46.777) Acc@5: 71.289 ( 71.289) Test: [ 48/48] Time: 0.090 (0.328) Loss: 2.504 ( 2.680) Acc@1: 47.052 ( 45.278) Acc@5: 72.759 ( 69.394) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-73.pth.tar', 45.27800001220703) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-72.pth.tar', 45.07000004516602) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-67.pth.tar', 44.90000004516602) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-71.pth.tar', 44.62999997558594) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-66.pth.tar', 44.623999967041016) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-68.pth.tar', 44.47999998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-69.pth.tar', 44.45999999633789) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-65.pth.tar', 44.31199996704102) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-52.pth.tar', 44.308000087890626) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-62.pth.tar', 44.192000030517576) Train: 74 [ 0/156 ( 1%)] Loss: 2.40 (2.40) Time: 1.804s, 567.49/s (1.804s, 567.49/s) LR: 2.042e-01 Data: 1.429 (1.429) Train: 74 [ 50/156 ( 33%)] Loss: 2.33 (2.34) Time: 0.412s, 2487.79/s (0.441s, 2324.32/s) LR: 2.042e-01 Data: 0.027 (0.060) Train: 74 [ 100/156 ( 65%)] Loss: 2.44 (2.36) Time: 0.406s, 2522.68/s (0.425s, 2411.78/s) LR: 2.042e-01 Data: 0.027 (0.044) Train: 74 [ 150/156 ( 97%)] Loss: 2.40 (2.38) Time: 0.405s, 2526.36/s (0.419s, 2446.35/s) LR: 2.042e-01 Data: 0.025 (0.039) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.422 (1.422) Loss: 2.721 ( 2.721) Acc@1: 43.164 ( 43.164) Acc@5: 67.676 ( 67.676) Test: [ 48/48] Time: 0.089 (0.329) Loss: 2.565 ( 2.819) Acc@1: 47.052 ( 43.614) Acc@5: 70.637 ( 67.348) Train: 75 [ 0/156 ( 1%)] Loss: 2.33 (2.33) Time: 1.694s, 604.42/s (1.694s, 604.42/s) LR: 2.000e-01 Data: 1.319 (1.319) Train: 75 [ 50/156 ( 33%)] Loss: 2.27 (2.32) Time: 0.408s, 2509.06/s (0.433s, 2367.39/s) LR: 2.000e-01 Data: 0.027 (0.052) Train: 75 [ 100/156 ( 65%)] Loss: 2.39 (2.34) Time: 0.411s, 2493.16/s (0.420s, 2435.73/s) LR: 2.000e-01 Data: 0.029 (0.040) Train: 75 [ 150/156 ( 97%)] Loss: 2.33 (2.36) Time: 0.402s, 2549.07/s (0.416s, 2462.19/s) LR: 2.000e-01 Data: 0.026 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.469 (1.469) Loss: 2.636 ( 2.636) Acc@1: 47.363 ( 47.363) Acc@5: 69.922 ( 69.922) Test: [ 48/48] Time: 0.089 (0.330) Loss: 2.435 ( 2.694) Acc@1: 50.354 ( 45.348) Acc@5: 72.524 ( 69.040) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-75.pth.tar', 45.34800002441406) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-73.pth.tar', 45.27800001220703) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-72.pth.tar', 45.07000004516602) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-67.pth.tar', 44.90000004516602) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-71.pth.tar', 44.62999997558594) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-66.pth.tar', 44.623999967041016) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-68.pth.tar', 44.47999998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-69.pth.tar', 44.45999999633789) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-65.pth.tar', 44.31199996704102) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-52.pth.tar', 44.308000087890626) Train: 76 [ 0/156 ( 1%)] Loss: 2.25 (2.25) Time: 1.942s, 527.38/s (1.942s, 527.38/s) LR: 1.958e-01 Data: 1.570 (1.570) Train: 76 [ 50/156 ( 33%)] Loss: 2.37 (2.31) Time: 0.404s, 2535.55/s (0.433s, 2362.69/s) LR: 1.958e-01 Data: 0.027 (0.058) Train: 76 [ 100/156 ( 65%)] Loss: 2.45 (2.33) Time: 0.410s, 2498.85/s (0.420s, 2435.74/s) LR: 1.958e-01 Data: 0.028 (0.043) Train: 76 [ 150/156 ( 97%)] Loss: 2.46 (2.35) Time: 0.406s, 2524.49/s (0.417s, 2456.92/s) LR: 1.958e-01 Data: 0.026 (0.038) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.481 (1.481) Loss: 2.648 ( 2.648) Acc@1: 46.289 ( 46.289) Acc@5: 69.629 ( 69.629) Test: [ 48/48] Time: 0.090 (0.333) Loss: 2.525 ( 2.743) Acc@1: 48.939 ( 44.846) Acc@5: 71.698 ( 68.580) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-75.pth.tar', 45.34800002441406) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-73.pth.tar', 45.27800001220703) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-72.pth.tar', 45.07000004516602) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-67.pth.tar', 44.90000004516602) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-76.pth.tar', 44.84600005615234) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-71.pth.tar', 44.62999997558594) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-66.pth.tar', 44.623999967041016) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-68.pth.tar', 44.47999998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-69.pth.tar', 44.45999999633789) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-65.pth.tar', 44.31199996704102) Train: 77 [ 0/156 ( 1%)] Loss: 2.38 (2.38) Time: 1.604s, 638.60/s (1.604s, 638.60/s) LR: 1.916e-01 Data: 1.144 (1.144) Train: 77 [ 50/156 ( 33%)] Loss: 2.37 (2.31) Time: 0.406s, 2523.95/s (0.432s, 2372.46/s) LR: 1.916e-01 Data: 0.027 (0.049) Train: 77 [ 100/156 ( 65%)] Loss: 2.34 (2.32) Time: 0.403s, 2541.39/s (0.418s, 2450.77/s) LR: 1.916e-01 Data: 0.028 (0.038) Train: 77 [ 150/156 ( 97%)] Loss: 2.40 (2.33) Time: 0.401s, 2556.68/s (0.413s, 2480.22/s) LR: 1.916e-01 Data: 0.026 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.463 (1.463) Loss: 2.728 ( 2.728) Acc@1: 44.336 ( 44.336) Acc@5: 68.164 ( 68.164) Test: [ 48/48] Time: 0.089 (0.329) Loss: 2.570 ( 2.768) Acc@1: 47.642 ( 44.274) Acc@5: 72.288 ( 68.164) Train: 78 [ 0/156 ( 1%)] Loss: 2.24 (2.24) Time: 1.810s, 565.59/s (1.810s, 565.59/s) LR: 1.874e-01 Data: 1.440 (1.440) Train: 78 [ 50/156 ( 33%)] Loss: 2.33 (2.29) Time: 0.414s, 2475.10/s (0.433s, 2366.28/s) LR: 1.874e-01 Data: 0.034 (0.054) Train: 78 [ 100/156 ( 65%)] Loss: 2.29 (2.30) Time: 0.409s, 2502.13/s (0.421s, 2432.35/s) LR: 1.874e-01 Data: 0.026 (0.041) Train: 78 [ 150/156 ( 97%)] Loss: 2.36 (2.31) Time: 0.407s, 2518.13/s (0.417s, 2456.10/s) LR: 1.874e-01 Data: 0.026 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.424 (1.424) Loss: 2.642 ( 2.642) Acc@1: 47.363 ( 47.363) Acc@5: 71.094 ( 71.094) Test: [ 48/48] Time: 0.090 (0.330) Loss: 2.548 ( 2.734) Acc@1: 50.708 ( 44.918) Acc@5: 71.580 ( 68.768) 
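As a rough cross-check of the LR column in these epochs, the logged per-epoch values match a plain cosine decay over the 150 scheduled epochs with an assumed base LR of 0.4; a minimal sketch (the warmup epochs at the start of the run are not modeled):

import math

# Minimal sketch: cosine LR decay, stepped once per epoch, assuming a base LR of 0.4
# over the 150 scheduled epochs reported at the top of this log (warmup not modeled).
BASE_LR = 0.4      # assumed base learning rate
T_EPOCHS = 150     # scheduled epochs

def cosine_lr(epoch: int) -> float:
    """Cosine-annealed learning rate for a given epoch index."""
    return 0.5 * BASE_LR * (1.0 + math.cos(math.pi * epoch / T_EPOCHS))

for e in (75, 100, 115):
    print(f"epoch {e:3d}: LR = {cosine_lr(e):.3e}")
# epoch  75: LR = 2.000e-01   (logged: 2.000e-01)
# epoch 100: LR = 1.000e-01   (logged: 1.000e-01)
# epoch 115: LR = 5.137e-02   (logged: 5.137e-02)

The same formula gives 2.167e-01 at epoch 71 and 4.860e-02 at epoch 116, matching the entries above and below. The "Current checkpoints" lists printed after each evaluation, such as the one that follows, appear to retain the ten best checkpoints ranked by top-1 accuracy, best first.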
Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-75.pth.tar', 45.34800002441406) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-73.pth.tar', 45.27800001220703) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-72.pth.tar', 45.07000004516602) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-78.pth.tar', 44.91800004882813) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-67.pth.tar', 44.90000004516602) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-76.pth.tar', 44.84600005615234) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-71.pth.tar', 44.62999997558594) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-66.pth.tar', 44.623999967041016) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-68.pth.tar', 44.47999998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-69.pth.tar', 44.45999999633789) Train: 79 [ 0/156 ( 1%)] Loss: 2.31 (2.31) Time: 1.681s, 609.19/s (1.681s, 609.19/s) LR: 1.833e-01 Data: 1.209 (1.209) Train: 79 [ 50/156 ( 33%)] Loss: 2.26 (2.27) Time: 0.410s, 2496.30/s (0.438s, 2336.96/s) LR: 1.833e-01 Data: 0.033 (0.056) Train: 79 [ 100/156 ( 65%)] Loss: 2.28 (2.28) Time: 0.403s, 2539.04/s (0.421s, 2432.66/s) LR: 1.833e-01 Data: 0.028 (0.042) Train: 79 [ 150/156 ( 97%)] Loss: 2.35 (2.30) Time: 0.401s, 2555.67/s (0.415s, 2469.07/s) LR: 1.833e-01 Data: 0.026 (0.037) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.446 (1.446) Loss: 2.648 ( 2.648) Acc@1: 46.582 ( 46.582) Acc@5: 70.215 ( 70.215) Test: [ 48/48] Time: 0.089 (0.331) Loss: 2.485 ( 2.697) Acc@1: 47.288 ( 45.428) Acc@5: 72.642 ( 69.350) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-79.pth.tar', 45.42800005004883) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-75.pth.tar', 45.34800002441406) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-73.pth.tar', 45.27800001220703) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-72.pth.tar', 45.07000004516602) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-78.pth.tar', 44.91800004882813) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-67.pth.tar', 44.90000004516602) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-76.pth.tar', 44.84600005615234) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-71.pth.tar', 44.62999997558594) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-66.pth.tar', 44.623999967041016) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-68.pth.tar', 44.47999998535156) Train: 80 [ 0/156 ( 1%)] Loss: 2.25 (2.25) Time: 1.739s, 588.92/s (1.739s, 588.92/s) LR: 1.791e-01 Data: 1.365 (1.365) Train: 80 [ 50/156 ( 33%)] Loss: 2.31 (2.26) Time: 0.407s, 2516.35/s (0.431s, 2376.85/s) LR: 1.791e-01 Data: 0.027 (0.053) Train: 80 [ 100/156 ( 65%)] Loss: 2.32 (2.27) Time: 0.407s, 2513.11/s (0.419s, 2441.57/s) LR: 1.791e-01 Data: 0.027 (0.040) Train: 80 [ 150/156 ( 97%)] Loss: 2.38 (2.28) Time: 0.405s, 2527.02/s (0.416s, 2464.33/s) LR: 1.791e-01 Data: 0.025 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.443 (1.443) Loss: 2.712 ( 2.712) Acc@1: 47.168 ( 47.168) Acc@5: 69.336 ( 69.336) Test: [ 48/48] Time: 0.090 (0.328) Loss: 2.515 ( 2.750) Acc@1: 47.877 ( 44.516) Acc@5: 71.698 ( 68.562) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-79.pth.tar', 45.42800005004883) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-75.pth.tar', 45.34800002441406) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-73.pth.tar', 45.27800001220703) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-72.pth.tar', 45.07000004516602) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-78.pth.tar', 44.91800004882813) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-67.pth.tar', 44.90000004516602) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-76.pth.tar', 44.84600005615234) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-71.pth.tar', 44.62999997558594) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-66.pth.tar', 44.623999967041016) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-80.pth.tar', 44.51600004760742) Train: 81 [ 0/156 ( 1%)] Loss: 2.31 (2.31) Time: 1.485s, 689.78/s (1.485s, 689.78/s) LR: 1.749e-01 Data: 1.110 (1.110) Train: 81 [ 50/156 ( 33%)] Loss: 2.27 (2.25) Time: 0.410s, 2496.66/s (0.429s, 2384.89/s) LR: 1.749e-01 Data: 0.027 (0.048) Train: 81 [ 100/156 ( 65%)] Loss: 2.30 (2.27) Time: 0.410s, 2496.83/s (0.420s, 2440.81/s) LR: 1.749e-01 Data: 0.028 (0.038) Train: 81 [ 150/156 ( 97%)] Loss: 2.32 (2.28) Time: 0.407s, 2517.89/s (0.416s, 2462.17/s) LR: 1.749e-01 Data: 0.026 (0.034) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.435 (1.435) Loss: 2.708 ( 2.708) Acc@1: 45.410 ( 45.410) Acc@5: 70.020 ( 70.020) Test: [ 48/48] Time: 0.089 (0.328) Loss: 2.465 ( 2.708) Acc@1: 49.410 ( 45.430) Acc@5: 72.759 ( 69.060) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-81.pth.tar', 45.4300000024414) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-79.pth.tar', 45.42800005004883) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-75.pth.tar', 45.34800002441406) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-73.pth.tar', 45.27800001220703) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-72.pth.tar', 45.07000004516602) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-78.pth.tar', 44.91800004882813) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-67.pth.tar', 44.90000004516602) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-76.pth.tar', 44.84600005615234) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-71.pth.tar', 44.62999997558594) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-66.pth.tar', 44.623999967041016) Train: 82 [ 0/156 ( 1%)] Loss: 2.15 (2.15) Time: 1.606s, 637.81/s (1.606s, 637.81/s) LR: 1.708e-01 Data: 1.178 (1.178) Train: 82 [ 50/156 ( 33%)] Loss: 2.28 (2.25) Time: 0.410s, 2497.80/s (0.431s, 2373.69/s) LR: 1.708e-01 Data: 0.027 (0.049) Train: 82 [ 100/156 ( 65%)] Loss: 2.29 (2.25) Time: 0.407s, 2514.29/s (0.419s, 2442.00/s) LR: 1.708e-01 Data: 0.027 (0.038) Train: 82 [ 150/156 ( 97%)] Loss: 2.32 (2.26) Time: 0.403s, 2540.27/s (0.414s, 2474.16/s) LR: 1.708e-01 Data: 0.026 (0.034) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.438 (1.438) Loss: 2.621 ( 2.621) Acc@1: 46.191 ( 46.191) Acc@5: 70.215 ( 70.215) Test: [ 48/48] Time: 0.088 (0.332) Loss: 2.445 ( 2.679) Acc@1: 51.061 ( 45.830) Acc@5: 73.113 ( 69.616) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-82.pth.tar', 45.83000000854492) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-81.pth.tar', 45.4300000024414) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-79.pth.tar', 
45.42800005004883) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-75.pth.tar', 45.34800002441406) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-73.pth.tar', 45.27800001220703) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-72.pth.tar', 45.07000004516602) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-78.pth.tar', 44.91800004882813) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-67.pth.tar', 44.90000004516602) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-76.pth.tar', 44.84600005615234) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-71.pth.tar', 44.62999997558594) Train: 83 [ 0/156 ( 1%)] Loss: 2.27 (2.27) Time: 2.005s, 510.60/s (2.005s, 510.60/s) LR: 1.666e-01 Data: 1.192 (1.192) Train: 83 [ 50/156 ( 33%)] Loss: 2.16 (2.22) Time: 0.407s, 2515.81/s (0.435s, 2353.91/s) LR: 1.666e-01 Data: 0.027 (0.050) Train: 83 [ 100/156 ( 65%)] Loss: 2.25 (2.23) Time: 0.410s, 2494.84/s (0.421s, 2430.02/s) LR: 1.666e-01 Data: 0.027 (0.039) Train: 83 [ 150/156 ( 97%)] Loss: 2.30 (2.24) Time: 0.406s, 2520.53/s (0.417s, 2453.95/s) LR: 1.666e-01 Data: 0.026 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.438 (1.438) Loss: 2.671 ( 2.671) Acc@1: 45.410 ( 45.410) Acc@5: 69.629 ( 69.629) Test: [ 48/48] Time: 0.089 (0.330) Loss: 2.452 ( 2.709) Acc@1: 49.646 ( 45.876) Acc@5: 72.524 ( 69.000) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-83.pth.tar', 45.875999975585934) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-82.pth.tar', 45.83000000854492) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-81.pth.tar', 45.4300000024414) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-79.pth.tar', 45.42800005004883) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-75.pth.tar', 45.34800002441406) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-73.pth.tar', 45.27800001220703) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-72.pth.tar', 45.07000004516602) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-78.pth.tar', 44.91800004882813) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-67.pth.tar', 44.90000004516602) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-76.pth.tar', 44.84600005615234) Train: 84 [ 0/156 ( 1%)] Loss: 2.25 (2.25) Time: 1.686s, 607.28/s (1.686s, 607.28/s) LR: 1.625e-01 Data: 1.311 (1.311) Train: 84 [ 50/156 ( 33%)] Loss: 2.10 (2.19) Time: 0.411s, 2489.32/s (0.434s, 2362.12/s) LR: 1.625e-01 Data: 0.028 (0.052) Train: 84 [ 100/156 ( 65%)] Loss: 2.20 (2.21) Time: 0.404s, 2533.13/s (0.421s, 2431.38/s) LR: 1.625e-01 Data: 0.027 (0.040) Train: 84 [ 150/156 ( 97%)] Loss: 2.21 (2.22) Time: 0.402s, 2546.27/s (0.415s, 2464.91/s) LR: 1.625e-01 Data: 0.026 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.452 (1.452) Loss: 2.615 ( 2.615) Acc@1: 46.777 ( 46.777) Acc@5: 70.410 ( 70.410) Test: [ 48/48] Time: 0.089 (0.328) Loss: 2.479 ( 2.675) Acc@1: 49.410 ( 46.244) Acc@5: 73.349 ( 69.728) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-84.pth.tar', 46.2440000024414) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-83.pth.tar', 45.875999975585934) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-82.pth.tar', 45.83000000854492) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-81.pth.tar', 45.4300000024414) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-79.pth.tar', 45.42800005004883) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-75.pth.tar', 45.34800002441406) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-73.pth.tar', 45.27800001220703) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-72.pth.tar', 45.07000004516602) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-78.pth.tar', 44.91800004882813) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-67.pth.tar', 44.90000004516602) Train: 85 [ 0/156 ( 1%)] Loss: 2.22 (2.22) Time: 1.687s, 607.00/s (1.687s, 607.00/s) LR: 1.584e-01 Data: 1.062 (1.062) Train: 85 [ 50/156 ( 33%)] Loss: 2.20 (2.19) Time: 0.406s, 2519.55/s (0.430s, 2383.23/s) LR: 1.584e-01 Data: 0.026 (0.047) Train: 85 [ 100/156 ( 65%)] Loss: 2.24 (2.21) Time: 0.409s, 2501.72/s (0.419s, 2444.07/s) LR: 1.584e-01 Data: 0.027 (0.037) Train: 85 [ 150/156 ( 97%)] Loss: 2.25 (2.22) Time: 0.401s, 2553.43/s (0.415s, 2467.92/s) LR: 1.584e-01 Data: 0.025 (0.034) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.443 (1.443) Loss: 2.593 ( 2.593) Acc@1: 46.875 ( 46.875) Acc@5: 71.582 ( 71.582) Test: [ 48/48] Time: 0.088 (0.328) Loss: 2.469 ( 2.683) Acc@1: 49.528 ( 45.852) Acc@5: 72.170 ( 69.676) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-84.pth.tar', 46.2440000024414) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-83.pth.tar', 45.875999975585934) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-85.pth.tar', 45.85199998901367) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-82.pth.tar', 45.83000000854492) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-81.pth.tar', 45.4300000024414) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-79.pth.tar', 45.42800005004883) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-75.pth.tar', 45.34800002441406) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-73.pth.tar', 45.27800001220703) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-72.pth.tar', 45.07000004516602) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-78.pth.tar', 44.91800004882813) Train: 86 [ 0/156 ( 1%)] Loss: 2.21 (2.21) Time: 1.653s, 619.36/s (1.653s, 619.36/s) LR: 1.543e-01 Data: 1.287 (1.287) Train: 86 [ 50/156 ( 33%)] Loss: 2.19 (2.17) Time: 0.399s, 2565.61/s (0.423s, 2418.61/s) LR: 1.543e-01 Data: 0.028 (0.052) Train: 86 [ 100/156 ( 65%)] Loss: 2.23 (2.19) Time: 0.399s, 2565.38/s (0.412s, 2487.52/s) LR: 1.543e-01 Data: 0.027 (0.040) Train: 86 [ 150/156 ( 97%)] Loss: 2.18 (2.20) Time: 0.401s, 2552.90/s (0.408s, 2508.78/s) LR: 1.543e-01 Data: 0.026 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.438 (1.438) Loss: 2.630 ( 2.630) Acc@1: 47.363 ( 47.363) Acc@5: 69.531 ( 69.531) Test: [ 48/48] Time: 0.089 (0.329) Loss: 2.520 ( 2.709) Acc@1: 47.288 ( 45.914) Acc@5: 72.642 ( 69.130) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-84.pth.tar', 46.2440000024414) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-86.pth.tar', 45.91399998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-83.pth.tar', 45.875999975585934) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-85.pth.tar', 45.85199998901367) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-82.pth.tar', 45.83000000854492) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-81.pth.tar', 
45.4300000024414) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-79.pth.tar', 45.42800005004883) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-75.pth.tar', 45.34800002441406) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-73.pth.tar', 45.27800001220703) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-72.pth.tar', 45.07000004516602) Train: 87 [ 0/156 ( 1%)] Loss: 2.14 (2.14) Time: 1.702s, 601.70/s (1.702s, 601.70/s) LR: 1.503e-01 Data: 1.320 (1.320) Train: 87 [ 50/156 ( 33%)] Loss: 2.17 (2.17) Time: 0.404s, 2532.17/s (0.428s, 2390.13/s) LR: 1.503e-01 Data: 0.027 (0.053) Train: 87 [ 100/156 ( 65%)] Loss: 2.16 (2.18) Time: 0.406s, 2519.60/s (0.417s, 2456.28/s) LR: 1.503e-01 Data: 0.027 (0.040) Train: 87 [ 150/156 ( 97%)] Loss: 2.23 (2.19) Time: 0.405s, 2527.09/s (0.414s, 2473.78/s) LR: 1.503e-01 Data: 0.026 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.455 (1.455) Loss: 2.657 ( 2.657) Acc@1: 45.605 ( 45.605) Acc@5: 69.629 ( 69.629) Test: [ 48/48] Time: 0.089 (0.331) Loss: 2.462 ( 2.710) Acc@1: 48.349 ( 45.836) Acc@5: 72.524 ( 69.342) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-84.pth.tar', 46.2440000024414) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-86.pth.tar', 45.91399998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-83.pth.tar', 45.875999975585934) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-85.pth.tar', 45.85199998901367) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-87.pth.tar', 45.835999993896486) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-82.pth.tar', 45.83000000854492) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-81.pth.tar', 45.4300000024414) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-79.pth.tar', 45.42800005004883) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-75.pth.tar', 45.34800002441406) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-73.pth.tar', 45.27800001220703) Train: 88 [ 0/156 ( 1%)] Loss: 2.14 (2.14) Time: 1.542s, 664.25/s (1.542s, 664.25/s) LR: 1.462e-01 Data: 1.165 (1.165) Train: 88 [ 50/156 ( 33%)] Loss: 2.19 (2.15) Time: 0.410s, 2495.44/s (0.434s, 2361.52/s) LR: 1.462e-01 Data: 0.028 (0.052) Train: 88 [ 100/156 ( 65%)] Loss: 2.15 (2.17) Time: 0.405s, 2526.68/s (0.421s, 2432.79/s) LR: 1.462e-01 Data: 0.027 (0.040) Train: 88 [ 150/156 ( 97%)] Loss: 2.21 (2.18) Time: 0.408s, 2508.89/s (0.417s, 2458.18/s) LR: 1.462e-01 Data: 0.025 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.432 (1.432) Loss: 2.678 ( 2.678) Acc@1: 46.191 ( 46.191) Acc@5: 69.629 ( 69.629) Test: [ 48/48] Time: 0.089 (0.330) Loss: 2.502 ( 2.756) Acc@1: 47.406 ( 45.114) Acc@5: 72.642 ( 68.558) Train: 89 [ 0/156 ( 1%)] Loss: 2.16 (2.16) Time: 1.851s, 553.07/s (1.851s, 553.07/s) LR: 1.422e-01 Data: 1.430 (1.430) Train: 89 [ 50/156 ( 33%)] Loss: 2.20 (2.15) Time: 0.409s, 2504.69/s (0.435s, 2352.15/s) LR: 1.422e-01 Data: 0.029 (0.054) Train: 89 [ 100/156 ( 65%)] Loss: 2.19 (2.16) Time: 0.407s, 2513.12/s (0.422s, 2425.33/s) LR: 1.422e-01 Data: 0.029 (0.041) Train: 89 [ 150/156 ( 97%)] Loss: 2.22 (2.16) Time: 0.401s, 2554.69/s (0.416s, 2460.69/s) LR: 1.422e-01 Data: 0.026 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.427 (1.427) Loss: 2.606 ( 2.606) Acc@1: 46.582 ( 46.582) Acc@5: 72.461 ( 72.461) Test: [ 48/48] Time: 0.089 (0.327) Loss: 2.434 ( 2.658) Acc@1: 49.292 ( 46.612) Acc@5: 
74.057 ( 69.884) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-89.pth.tar', 46.61200001586914) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-84.pth.tar', 46.2440000024414) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-86.pth.tar', 45.91399998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-83.pth.tar', 45.875999975585934) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-85.pth.tar', 45.85199998901367) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-87.pth.tar', 45.835999993896486) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-82.pth.tar', 45.83000000854492) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-81.pth.tar', 45.4300000024414) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-79.pth.tar', 45.42800005004883) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-75.pth.tar', 45.34800002441406) Train: 90 [ 0/156 ( 1%)] Loss: 2.08 (2.08) Time: 1.513s, 676.95/s (1.513s, 676.95/s) LR: 1.382e-01 Data: 1.042 (1.042) Train: 90 [ 50/156 ( 33%)] Loss: 2.14 (2.12) Time: 0.402s, 2546.73/s (0.423s, 2418.45/s) LR: 1.382e-01 Data: 0.027 (0.048) Train: 90 [ 100/156 ( 65%)] Loss: 2.18 (2.14) Time: 0.407s, 2518.16/s (0.414s, 2472.61/s) LR: 1.382e-01 Data: 0.027 (0.038) Train: 90 [ 150/156 ( 97%)] Loss: 2.17 (2.15) Time: 0.408s, 2508.24/s (0.412s, 2484.51/s) LR: 1.382e-01 Data: 0.026 (0.034) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.456 (1.456) Loss: 2.587 ( 2.587) Acc@1: 47.754 ( 47.754) Acc@5: 71.875 ( 71.875) Test: [ 48/48] Time: 0.090 (0.328) Loss: 2.434 ( 2.681) Acc@1: 50.000 ( 46.456) Acc@5: 73.113 ( 69.582) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-89.pth.tar', 46.61200001586914) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-90.pth.tar', 46.456) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-84.pth.tar', 46.2440000024414) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-86.pth.tar', 45.91399998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-83.pth.tar', 45.875999975585934) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-85.pth.tar', 45.85199998901367) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-87.pth.tar', 45.835999993896486) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-82.pth.tar', 45.83000000854492) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-81.pth.tar', 45.4300000024414) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-79.pth.tar', 45.42800005004883) Train: 91 [ 0/156 ( 1%)] Loss: 2.11 (2.11) Time: 1.676s, 611.07/s (1.676s, 611.07/s) LR: 1.342e-01 Data: 1.298 (1.298) Train: 91 [ 50/156 ( 33%)] Loss: 2.14 (2.12) Time: 0.408s, 2508.83/s (0.432s, 2368.39/s) LR: 1.342e-01 Data: 0.029 (0.052) Train: 91 [ 100/156 ( 65%)] Loss: 2.14 (2.13) Time: 0.405s, 2527.21/s (0.419s, 2444.43/s) LR: 1.342e-01 Data: 0.027 (0.040) Train: 91 [ 150/156 ( 97%)] Loss: 2.26 (2.14) Time: 0.406s, 2520.91/s (0.415s, 2467.70/s) LR: 1.342e-01 Data: 0.026 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.446 (1.446) Loss: 2.655 ( 2.655) Acc@1: 45.801 ( 45.801) Acc@5: 70.898 ( 70.898) Test: [ 48/48] Time: 0.090 (0.332) Loss: 2.460 ( 2.713) Acc@1: 50.000 ( 46.306) Acc@5: 73.821 ( 69.318) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-89.pth.tar', 46.61200001586914) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-90.pth.tar', 46.456) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-91.pth.tar', 46.306) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-84.pth.tar', 46.2440000024414) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-86.pth.tar', 45.91399998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-83.pth.tar', 45.875999975585934) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-85.pth.tar', 45.85199998901367) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-87.pth.tar', 45.835999993896486) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-82.pth.tar', 45.83000000854492) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-81.pth.tar', 45.4300000024414) Train: 92 [ 0/156 ( 1%)] Loss: 2.06 (2.06) Time: 2.025s, 505.65/s (2.025s, 505.65/s) LR: 1.303e-01 Data: 1.650 (1.650) Train: 92 [ 50/156 ( 33%)] Loss: 2.05 (2.11) Time: 0.407s, 2514.61/s (0.441s, 2323.64/s) LR: 1.303e-01 Data: 0.027 (0.059) Train: 92 [ 100/156 ( 65%)] Loss: 2.12 (2.12) Time: 0.409s, 2504.93/s (0.425s, 2408.80/s) LR: 1.303e-01 Data: 0.027 (0.044) Train: 92 [ 150/156 ( 97%)] Loss: 2.12 (2.13) Time: 0.409s, 2504.30/s (0.420s, 2437.36/s) LR: 1.303e-01 Data: 0.025 (0.038) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.435 (1.435) Loss: 2.630 ( 2.630) Acc@1: 47.070 ( 47.070) Acc@5: 70.508 ( 70.508) Test: [ 48/48] Time: 0.090 (0.332) Loss: 2.462 ( 2.702) Acc@1: 49.764 ( 46.488) Acc@5: 73.821 ( 69.502) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-89.pth.tar', 46.61200001586914) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-92.pth.tar', 46.48800002685547) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-90.pth.tar', 46.456) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-91.pth.tar', 46.306) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-84.pth.tar', 46.2440000024414) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-86.pth.tar', 45.91399998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-83.pth.tar', 45.875999975585934) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-85.pth.tar', 45.85199998901367) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-87.pth.tar', 45.835999993896486) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-82.pth.tar', 45.83000000854492) Train: 93 [ 0/156 ( 1%)] Loss: 2.03 (2.03) Time: 1.659s, 617.14/s (1.659s, 617.14/s) LR: 1.264e-01 Data: 1.231 (1.231) Train: 93 [ 50/156 ( 33%)] Loss: 2.08 (2.10) Time: 0.411s, 2491.88/s (0.436s, 2350.79/s) LR: 1.264e-01 Data: 0.027 (0.051) Train: 93 [ 100/156 ( 65%)] Loss: 2.12 (2.11) Time: 0.402s, 2546.38/s (0.421s, 2430.11/s) LR: 1.264e-01 Data: 0.027 (0.039) Train: 93 [ 150/156 ( 97%)] Loss: 2.09 (2.12) Time: 0.404s, 2532.72/s (0.416s, 2462.09/s) LR: 1.264e-01 Data: 0.026 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.435 (1.435) Loss: 2.671 ( 2.671) Acc@1: 45.801 ( 45.801) Acc@5: 69.043 ( 69.043) Test: [ 48/48] Time: 0.089 (0.329) Loss: 2.475 ( 2.700) Acc@1: 49.175 ( 46.136) Acc@5: 73.113 ( 69.442) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-89.pth.tar', 46.61200001586914) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-92.pth.tar', 46.48800002685547) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-90.pth.tar', 46.456) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-91.pth.tar', 46.306) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-84.pth.tar', 46.2440000024414) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-93.pth.tar', 46.13600002929687) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-86.pth.tar', 45.91399998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-83.pth.tar', 45.875999975585934) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-85.pth.tar', 45.85199998901367) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-87.pth.tar', 45.835999993896486) Train: 94 [ 0/156 ( 1%)] Loss: 2.03 (2.03) Time: 1.696s, 603.75/s (1.696s, 603.75/s) LR: 1.225e-01 Data: 1.151 (1.151) Train: 94 [ 50/156 ( 33%)] Loss: 2.07 (2.08) Time: 0.406s, 2521.44/s (0.429s, 2387.93/s) LR: 1.225e-01 Data: 0.027 (0.049) Train: 94 [ 100/156 ( 65%)] Loss: 2.09 (2.10) Time: 0.407s, 2517.33/s (0.417s, 2452.72/s) LR: 1.225e-01 Data: 0.027 (0.038) Train: 94 [ 150/156 ( 97%)] Loss: 2.14 (2.11) Time: 0.406s, 2520.45/s (0.415s, 2468.97/s) LR: 1.225e-01 Data: 0.025 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.457 (1.457) Loss: 2.593 ( 2.593) Acc@1: 45.508 ( 45.508) Acc@5: 69.922 ( 69.922) Test: [ 48/48] Time: 0.090 (0.331) Loss: 2.478 ( 2.682) Acc@1: 49.882 ( 46.522) Acc@5: 72.288 ( 69.892) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-89.pth.tar', 46.61200001586914) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-94.pth.tar', 46.522000013427736) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-92.pth.tar', 46.48800002685547) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-90.pth.tar', 46.456) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-91.pth.tar', 46.306) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-84.pth.tar', 46.2440000024414) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-93.pth.tar', 46.13600002929687) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-86.pth.tar', 45.91399998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-83.pth.tar', 45.875999975585934) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-85.pth.tar', 45.85199998901367) Train: 95 [ 0/156 ( 1%)] Loss: 2.11 (2.11) Time: 1.711s, 598.54/s (1.711s, 598.54/s) LR: 1.187e-01 Data: 1.330 (1.330) Train: 95 [ 50/156 ( 33%)] Loss: 2.11 (2.07) Time: 0.405s, 2529.46/s (0.434s, 2361.30/s) LR: 1.187e-01 Data: 0.027 (0.053) Train: 95 [ 100/156 ( 65%)] Loss: 2.04 (2.08) Time: 0.401s, 2555.98/s (0.419s, 2446.00/s) LR: 1.187e-01 Data: 0.027 (0.040) Train: 95 [ 150/156 ( 97%)] Loss: 2.15 (2.09) Time: 0.398s, 2571.70/s (0.413s, 2481.23/s) LR: 1.187e-01 Data: 0.025 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.425 (1.425) Loss: 2.610 ( 2.610) Acc@1: 47.656 ( 47.656) Acc@5: 71.484 ( 71.484) Test: [ 48/48] Time: 0.089 (0.331) Loss: 2.512 ( 2.688) Acc@1: 48.703 ( 46.432) Acc@5: 72.995 ( 69.726) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-89.pth.tar', 46.61200001586914) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-94.pth.tar', 46.522000013427736) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-92.pth.tar', 46.48800002685547) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-90.pth.tar', 46.456) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-95.pth.tar', 46.432000018310546) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-91.pth.tar', 46.306) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-84.pth.tar', 46.2440000024414) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-93.pth.tar', 46.13600002929687) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-86.pth.tar', 45.91399998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-83.pth.tar', 45.875999975585934) Train: 96 [ 0/156 ( 1%)] Loss: 2.08 (2.08) Time: 1.502s, 681.73/s (1.502s, 681.73/s) LR: 1.148e-01 Data: 1.132 (1.132) Train: 96 [ 50/156 ( 33%)] Loss: 2.06 (2.06) Time: 0.407s, 2514.69/s (0.426s, 2404.49/s) LR: 1.148e-01 Data: 0.028 (0.049) Train: 96 [ 100/156 ( 65%)] Loss: 2.04 (2.08) Time: 0.409s, 2506.70/s (0.417s, 2457.54/s) LR: 1.148e-01 Data: 0.026 (0.038) Train: 96 [ 150/156 ( 97%)] Loss: 2.11 (2.09) Time: 0.408s, 2507.92/s (0.414s, 2470.72/s) LR: 1.148e-01 Data: 0.026 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.448 (1.448) Loss: 2.639 ( 2.639) Acc@1: 47.168 ( 47.168) Acc@5: 70.801 ( 70.801) Test: [ 48/48] Time: 0.089 (0.329) Loss: 2.533 ( 2.698) Acc@1: 47.288 ( 46.696) Acc@5: 72.877 ( 69.752) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-96.pth.tar', 46.69599998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-89.pth.tar', 46.61200001586914) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-94.pth.tar', 46.522000013427736) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-92.pth.tar', 46.48800002685547) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-90.pth.tar', 46.456) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-95.pth.tar', 46.432000018310546) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-91.pth.tar', 46.306) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-84.pth.tar', 46.2440000024414) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-93.pth.tar', 46.13600002929687) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-86.pth.tar', 45.91399998535156) Train: 97 [ 0/156 ( 1%)] Loss: 2.05 (2.05) Time: 1.714s, 597.52/s (1.714s, 597.52/s) LR: 1.111e-01 Data: 1.336 (1.336) Train: 97 [ 50/156 ( 33%)] Loss: 2.03 (2.05) Time: 0.407s, 2515.92/s (0.434s, 2359.22/s) LR: 1.111e-01 Data: 0.028 (0.053) Train: 97 [ 100/156 ( 65%)] Loss: 2.15 (2.07) Time: 0.409s, 2506.58/s (0.421s, 2432.72/s) LR: 1.111e-01 Data: 0.029 (0.040) Train: 97 [ 150/156 ( 97%)] Loss: 2.09 (2.07) Time: 0.406s, 2519.66/s (0.417s, 2457.70/s) LR: 1.111e-01 Data: 0.024 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.430 (1.430) Loss: 2.661 ( 2.661) Acc@1: 46.973 ( 46.973) Acc@5: 69.922 ( 69.922) Test: [ 48/48] Time: 0.090 (0.328) Loss: 2.478 ( 2.709) Acc@1: 48.821 ( 46.468) Acc@5: 73.231 ( 69.478) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-96.pth.tar', 46.69599998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-89.pth.tar', 46.61200001586914) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-94.pth.tar', 46.522000013427736) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-92.pth.tar', 46.48800002685547) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-97.pth.tar', 46.46800000488281) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-90.pth.tar', 46.456) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-95.pth.tar', 46.432000018310546) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-91.pth.tar', 46.306) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-84.pth.tar', 46.2440000024414) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-93.pth.tar', 46.13600002929687) Train: 98 [ 0/156 ( 1%)] Loss: 2.04 (2.04) Time: 1.611s, 635.58/s (1.611s, 635.58/s) LR: 1.073e-01 Data: 1.235 (1.235) Train: 98 [ 50/156 ( 33%)] Loss: 2.02 (2.04) Time: 0.406s, 2522.62/s (0.432s, 2368.18/s) LR: 1.073e-01 Data: 0.026 (0.051) Train: 98 [ 100/156 ( 65%)] Loss: 2.08 (2.05) Time: 0.410s, 2495.23/s (0.421s, 2434.98/s) LR: 1.073e-01 Data: 0.027 (0.039) Train: 98 [ 150/156 ( 97%)] Loss: 2.04 (2.06) Time: 0.407s, 2518.87/s (0.417s, 2457.27/s) LR: 1.073e-01 Data: 0.025 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.458 (1.458) Loss: 2.639 ( 2.639) Acc@1: 45.898 ( 45.898) Acc@5: 70.898 ( 70.898) Test: [ 48/48] Time: 0.089 (0.331) Loss: 2.448 ( 2.660) Acc@1: 49.646 ( 47.176) Acc@5: 74.882 ( 70.136) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-98.pth.tar', 47.17599997558594) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-96.pth.tar', 46.69599998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-89.pth.tar', 46.61200001586914) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-94.pth.tar', 46.522000013427736) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-92.pth.tar', 46.48800002685547) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-97.pth.tar', 46.46800000488281) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-90.pth.tar', 46.456) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-95.pth.tar', 46.432000018310546) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-91.pth.tar', 46.306) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-84.pth.tar', 46.2440000024414) Train: 99 [ 0/156 ( 1%)] Loss: 1.99 (1.99) Time: 1.583s, 646.84/s (1.583s, 646.84/s) LR: 1.036e-01 Data: 1.042 (1.042) Train: 99 [ 50/156 ( 33%)] Loss: 2.12 (2.03) Time: 0.409s, 2503.15/s (0.431s, 2378.36/s) LR: 1.036e-01 Data: 0.028 (0.047) Train: 99 [ 100/156 ( 65%)] Loss: 2.07 (2.04) Time: 0.405s, 2529.58/s (0.418s, 2450.54/s) LR: 1.036e-01 Data: 0.029 (0.038) Train: 99 [ 150/156 ( 97%)] Loss: 2.07 (2.05) Time: 0.400s, 2557.80/s (0.412s, 2483.78/s) LR: 1.036e-01 Data: 0.026 (0.034) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.422 (1.422) Loss: 2.624 ( 2.624) Acc@1: 48.438 ( 48.438) Acc@5: 71.582 ( 71.582) Test: [ 48/48] Time: 0.089 (0.328) Loss: 2.405 ( 2.674) Acc@1: 49.764 ( 47.020) Acc@5: 74.528 ( 69.956) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-98.pth.tar', 47.17599997558594) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-99.pth.tar', 47.02000002685547) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-96.pth.tar', 46.69599998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-89.pth.tar', 46.61200001586914) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-94.pth.tar', 46.522000013427736) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-92.pth.tar', 46.48800002685547) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-97.pth.tar', 46.46800000488281) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-90.pth.tar', 46.456) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-95.pth.tar', 46.432000018310546) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-91.pth.tar', 46.306) Train: 100 [ 0/156 ( 1%)] Loss: 2.06 (2.06) Time: 1.611s, 635.61/s (1.611s, 635.61/s) LR: 1.000e-01 Data: 1.024 (1.024) Train: 100 [ 50/156 ( 33%)] Loss: 1.98 (2.03) Time: 0.402s, 2545.14/s (0.425s, 2408.15/s) LR: 1.000e-01 Data: 0.027 (0.047) Train: 100 [ 100/156 ( 65%)] Loss: 2.13 (2.03) Time: 0.409s, 2506.45/s (0.415s, 2465.45/s) LR: 1.000e-01 Data: 0.029 (0.037) Train: 100 [ 150/156 ( 97%)] Loss: 2.12 (2.04) Time: 0.410s, 2500.35/s (0.413s, 2479.89/s) LR: 1.000e-01 Data: 0.025 (0.034) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.458 (1.458) Loss: 2.659 ( 2.659) Acc@1: 46.289 ( 46.289) Acc@5: 69.336 ( 69.336) Test: [ 48/48] Time: 0.089 (0.329) Loss: 2.468 ( 2.704) Acc@1: 49.175 ( 46.590) Acc@5: 73.821 ( 69.614) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-98.pth.tar', 47.17599997558594) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-99.pth.tar', 47.02000002685547) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-96.pth.tar', 46.69599998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-89.pth.tar', 46.61200001586914) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-100.pth.tar', 46.590000029296874) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-94.pth.tar', 46.522000013427736) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-92.pth.tar', 46.48800002685547) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-97.pth.tar', 46.46800000488281) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-90.pth.tar', 46.456) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-95.pth.tar', 46.432000018310546) Train: 101 [ 0/156 ( 1%)] Loss: 2.05 (2.05) Time: 1.798s, 569.50/s (1.798s, 569.50/s) LR: 9.639e-02 Data: 1.423 (1.423) Train: 101 [ 50/156 ( 33%)] Loss: 1.99 (2.01) Time: 0.411s, 2492.90/s (0.436s, 2348.40/s) LR: 9.639e-02 Data: 0.029 (0.055) Train: 101 [ 100/156 ( 65%)] Loss: 2.00 (2.02) Time: 0.410s, 2499.16/s (0.423s, 2419.48/s) LR: 9.639e-02 Data: 0.028 (0.041) Train: 101 [ 150/156 ( 97%)] Loss: 2.02 (2.02) Time: 0.408s, 2510.64/s (0.419s, 2445.13/s) LR: 9.639e-02 Data: 0.026 (0.037) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.427 (1.427) Loss: 2.653 ( 2.653) Acc@1: 47.559 ( 47.559) Acc@5: 70.898 ( 70.898) Test: [ 48/48] Time: 0.090 (0.330) Loss: 2.501 ( 2.694) Acc@1: 49.882 ( 46.996) Acc@5: 73.703 ( 69.916) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-98.pth.tar', 47.17599997558594) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-99.pth.tar', 47.02000002685547) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-101.pth.tar', 46.99600001342773) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-96.pth.tar', 46.69599998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-89.pth.tar', 46.61200001586914) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-100.pth.tar', 46.590000029296874) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-94.pth.tar', 46.522000013427736) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-92.pth.tar', 46.48800002685547) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-97.pth.tar', 46.46800000488281) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-90.pth.tar', 46.456) Train: 102 [ 0/156 ( 1%)] Loss: 2.00 (2.00) Time: 1.738s, 589.12/s (1.738s, 589.12/s) LR: 
9.283e-02 Data: 1.363 (1.363) Train: 102 [ 50/156 ( 33%)] Loss: 2.04 (2.00) Time: 0.410s, 2499.65/s (0.435s, 2353.91/s) LR: 9.283e-02 Data: 0.028 (0.053) Train: 102 [ 100/156 ( 65%)] Loss: 2.01 (2.02) Time: 0.405s, 2527.05/s (0.421s, 2430.48/s) LR: 9.283e-02 Data: 0.026 (0.040) Train: 102 [ 150/156 ( 97%)] Loss: 2.06 (2.02) Time: 0.407s, 2517.43/s (0.417s, 2455.96/s) LR: 9.283e-02 Data: 0.026 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.434 (1.434) Loss: 2.631 ( 2.631) Acc@1: 47.070 ( 47.070) Acc@5: 70.996 ( 70.996) Test: [ 48/48] Time: 0.090 (0.328) Loss: 2.526 ( 2.725) Acc@1: 50.708 ( 46.392) Acc@5: 72.052 ( 69.462) Train: 103 [ 0/156 ( 1%)] Loss: 2.00 (2.00) Time: 1.684s, 608.23/s (1.684s, 608.23/s) LR: 8.932e-02 Data: 1.197 (1.197) Train: 103 [ 50/156 ( 33%)] Loss: 2.02 (2.00) Time: 0.408s, 2511.48/s (0.433s, 2366.82/s) LR: 8.932e-02 Data: 0.027 (0.050) Train: 103 [ 100/156 ( 65%)] Loss: 2.02 (2.00) Time: 0.401s, 2550.89/s (0.419s, 2442.06/s) LR: 8.932e-02 Data: 0.028 (0.039) Train: 103 [ 150/156 ( 97%)] Loss: 1.98 (2.01) Time: 0.398s, 2575.52/s (0.413s, 2479.73/s) LR: 8.932e-02 Data: 0.026 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.450 (1.450) Loss: 2.563 ( 2.563) Acc@1: 47.461 ( 47.461) Acc@5: 72.070 ( 72.070) Test: [ 48/48] Time: 0.088 (0.331) Loss: 2.458 ( 2.648) Acc@1: 50.236 ( 47.652) Acc@5: 73.585 ( 70.524) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-103.pth.tar', 47.651999973144534) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-98.pth.tar', 47.17599997558594) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-99.pth.tar', 47.02000002685547) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-101.pth.tar', 46.99600001342773) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-96.pth.tar', 46.69599998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-89.pth.tar', 46.61200001586914) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-100.pth.tar', 46.590000029296874) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-94.pth.tar', 46.522000013427736) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-92.pth.tar', 46.48800002685547) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-97.pth.tar', 46.46800000488281) Train: 104 [ 0/156 ( 1%)] Loss: 1.95 (1.95) Time: 1.773s, 577.64/s (1.773s, 577.64/s) LR: 8.586e-02 Data: 1.405 (1.405) Train: 104 [ 50/156 ( 33%)] Loss: 2.00 (1.98) Time: 0.400s, 2559.68/s (0.426s, 2404.01/s) LR: 8.586e-02 Data: 0.028 (0.055) Train: 104 [ 100/156 ( 65%)] Loss: 2.04 (1.99) Time: 0.400s, 2561.73/s (0.414s, 2476.29/s) LR: 8.586e-02 Data: 0.026 (0.042) Train: 104 [ 150/156 ( 97%)] Loss: 2.04 (2.00) Time: 0.404s, 2533.13/s (0.410s, 2494.82/s) LR: 8.586e-02 Data: 0.026 (0.037) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.420 (1.420) Loss: 2.641 ( 2.641) Acc@1: 47.852 ( 47.852) Acc@5: 71.094 ( 71.094) Test: [ 48/48] Time: 0.089 (0.328) Loss: 2.474 ( 2.729) Acc@1: 49.057 ( 46.508) Acc@5: 73.467 ( 69.436) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-103.pth.tar', 47.651999973144534) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-98.pth.tar', 47.17599997558594) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-99.pth.tar', 47.02000002685547) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-101.pth.tar', 46.99600001342773) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-96.pth.tar', 46.69599998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-89.pth.tar', 46.61200001586914) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-100.pth.tar', 46.590000029296874) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-94.pth.tar', 46.522000013427736) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-104.pth.tar', 46.50799997802734) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-92.pth.tar', 46.48800002685547) Train: 105 [ 0/156 ( 1%)] Loss: 1.97 (1.97) Time: 1.675s, 611.44/s (1.675s, 611.44/s) LR: 8.244e-02 Data: 1.301 (1.301) Train: 105 [ 50/156 ( 33%)] Loss: 1.98 (1.97) Time: 0.407s, 2518.16/s (0.432s, 2372.52/s) LR: 8.244e-02 Data: 0.027 (0.052) Train: 105 [ 100/156 ( 65%)] Loss: 2.04 (1.98) Time: 0.403s, 2541.90/s (0.418s, 2450.65/s) LR: 8.244e-02 Data: 0.028 (0.040) Train: 105 [ 150/156 ( 97%)] Loss: 2.00 (1.99) Time: 0.398s, 2571.55/s (0.412s, 2484.15/s) LR: 8.244e-02 Data: 0.026 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.471 (1.471) Loss: 2.611 ( 2.611) Acc@1: 47.070 ( 47.070) Acc@5: 71.582 ( 71.582) Test: [ 48/48] Time: 0.090 (0.331) Loss: 2.427 ( 2.671) Acc@1: 51.415 ( 47.416) Acc@5: 75.354 ( 70.174) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-103.pth.tar', 47.651999973144534) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-105.pth.tar', 47.41600003295898) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-98.pth.tar', 47.17599997558594) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-99.pth.tar', 47.02000002685547) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-101.pth.tar', 46.99600001342773) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-96.pth.tar', 46.69599998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-89.pth.tar', 46.61200001586914) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-100.pth.tar', 46.590000029296874) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-94.pth.tar', 46.522000013427736) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-104.pth.tar', 46.50799997802734) Train: 106 [ 0/156 ( 1%)] Loss: 1.95 (1.95) Time: 1.472s, 695.54/s (1.472s, 695.54/s) LR: 7.908e-02 Data: 1.081 (1.081) Train: 106 [ 50/156 ( 33%)] Loss: 1.93 (1.96) Time: 0.405s, 2526.86/s (0.424s, 2415.62/s) LR: 7.908e-02 Data: 0.027 (0.048) Train: 106 [ 100/156 ( 65%)] Loss: 2.03 (1.96) Time: 0.404s, 2536.14/s (0.414s, 2470.61/s) LR: 7.908e-02 Data: 0.027 (0.038) Train: 106 [ 150/156 ( 97%)] Loss: 1.97 (1.97) Time: 0.407s, 2517.26/s (0.412s, 2486.64/s) LR: 7.908e-02 Data: 0.025 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.450 (1.450) Loss: 2.559 ( 2.559) Acc@1: 47.949 ( 47.949) Acc@5: 71.191 ( 71.191) Test: [ 48/48] Time: 0.090 (0.329) Loss: 2.429 ( 2.629) Acc@1: 51.769 ( 47.848) Acc@5: 75.354 ( 70.918) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-106.pth.tar', 47.84799999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-103.pth.tar', 47.651999973144534) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-105.pth.tar', 47.41600003295898) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-98.pth.tar', 47.17599997558594) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-99.pth.tar', 47.02000002685547) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-101.pth.tar', 46.99600001342773) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-96.pth.tar', 46.69599998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-89.pth.tar', 46.61200001586914) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-100.pth.tar', 46.590000029296874) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-94.pth.tar', 46.522000013427736) Train: 107 [ 0/156 ( 1%)] Loss: 2.01 (2.01) Time: 1.789s, 572.53/s (1.789s, 572.53/s) LR: 7.577e-02 Data: 1.255 (1.255) Train: 107 [ 50/156 ( 33%)] Loss: 1.94 (1.95) Time: 0.407s, 2517.85/s (0.437s, 2342.54/s) LR: 7.577e-02 Data: 0.026 (0.052) Train: 107 [ 100/156 ( 65%)] Loss: 1.98 (1.96) Time: 0.404s, 2533.43/s (0.421s, 2434.08/s) LR: 7.577e-02 Data: 0.027 (0.040) Train: 107 [ 150/156 ( 97%)] Loss: 2.04 (1.97) Time: 0.400s, 2557.30/s (0.415s, 2468.94/s) LR: 7.577e-02 Data: 0.025 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.434 (1.434) Loss: 2.594 ( 2.594) Acc@1: 48.730 ( 48.730) Acc@5: 70.996 ( 70.996) Test: [ 48/48] Time: 0.089 (0.329) Loss: 2.433 ( 2.664) Acc@1: 51.887 ( 47.584) Acc@5: 73.703 ( 70.300) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-106.pth.tar', 47.84799999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-103.pth.tar', 47.651999973144534) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-107.pth.tar', 47.58400004394531) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-105.pth.tar', 47.41600003295898) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-98.pth.tar', 47.17599997558594) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-99.pth.tar', 47.02000002685547) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-101.pth.tar', 46.99600001342773) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-96.pth.tar', 46.69599998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-89.pth.tar', 46.61200001586914) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-100.pth.tar', 46.590000029296874) Train: 108 [ 0/156 ( 1%)] Loss: 2.00 (2.00) Time: 1.676s, 611.10/s (1.676s, 611.10/s) LR: 7.252e-02 Data: 1.161 (1.161) Train: 108 [ 50/156 ( 33%)] Loss: 1.94 (1.95) Time: 0.407s, 2518.20/s (0.429s, 2389.24/s) LR: 7.252e-02 Data: 0.032 (0.049) Train: 108 [ 100/156 ( 65%)] Loss: 1.94 (1.95) Time: 0.403s, 2540.13/s (0.415s, 2465.38/s) LR: 7.252e-02 Data: 0.027 (0.039) Train: 108 [ 150/156 ( 97%)] Loss: 1.91 (1.96) Time: 0.400s, 2561.98/s (0.411s, 2491.58/s) LR: 7.252e-02 Data: 0.026 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.435 (1.435) Loss: 2.575 ( 2.575) Acc@1: 48.730 ( 48.730) Acc@5: 72.363 ( 72.363) Test: [ 48/48] Time: 0.089 (0.329) Loss: 2.434 ( 2.667) Acc@1: 51.179 ( 47.704) Acc@5: 74.882 ( 70.448) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-106.pth.tar', 47.84799999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-108.pth.tar', 47.704000059814454) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-103.pth.tar', 47.651999973144534) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-107.pth.tar', 47.58400004394531) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-105.pth.tar', 47.41600003295898) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-98.pth.tar', 47.17599997558594) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-99.pth.tar', 47.02000002685547) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-101.pth.tar', 46.99600001342773) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-96.pth.tar', 46.69599998535156) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-89.pth.tar', 46.61200001586914) Train: 109 [ 0/156 ( 1%)] Loss: 1.98 (1.98) Time: 1.493s, 685.68/s (1.493s, 685.68/s) LR: 6.932e-02 Data: 1.124 (1.124) Train: 109 [ 50/156 ( 33%)] Loss: 1.97 (1.93) Time: 0.404s, 2534.53/s (0.424s, 2412.53/s) LR: 6.932e-02 Data: 0.027 (0.049) Train: 109 [ 100/156 ( 65%)] Loss: 1.93 (1.94) Time: 0.411s, 2493.10/s (0.416s, 2463.68/s) LR: 6.932e-02 Data: 0.029 (0.038) Train: 109 [ 150/156 ( 97%)] Loss: 1.97 (1.94) Time: 0.408s, 2508.28/s (0.413s, 2476.55/s) LR: 6.932e-02 Data: 0.026 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.440 (1.440) Loss: 2.537 ( 2.537) Acc@1: 47.754 ( 47.754) Acc@5: 72.656 ( 72.656) Test: [ 48/48] Time: 0.089 (0.333) Loss: 2.421 ( 2.629) Acc@1: 52.712 ( 48.264) Acc@5: 73.939 ( 70.894) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-109.pth.tar', 48.26400001464844) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-106.pth.tar', 47.84799999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-108.pth.tar', 47.704000059814454) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-103.pth.tar', 47.651999973144534) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-107.pth.tar', 47.58400004394531) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-105.pth.tar', 47.41600003295898) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-98.pth.tar', 47.17599997558594) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-99.pth.tar', 47.02000002685547) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-101.pth.tar', 46.99600001342773) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-96.pth.tar', 46.69599998535156) Train: 110 [ 0/156 ( 1%)] Loss: 1.91 (1.91) Time: 1.594s, 642.24/s (1.594s, 642.24/s) LR: 6.617e-02 Data: 1.089 (1.089) Train: 110 [ 50/156 ( 33%)] Loss: 1.96 (1.94) Time: 0.409s, 2504.57/s (0.431s, 2375.71/s) LR: 6.617e-02 Data: 0.027 (0.049) Train: 110 [ 100/156 ( 65%)] Loss: 2.00 (1.94) Time: 0.405s, 2526.03/s (0.421s, 2434.85/s) LR: 6.617e-02 Data: 0.027 (0.038) Train: 110 [ 150/156 ( 97%)] Loss: 1.97 (1.95) Time: 0.402s, 2549.14/s (0.415s, 2465.68/s) LR: 6.617e-02 Data: 0.026 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.461 (1.461) Loss: 2.600 ( 2.600) Acc@1: 48.047 ( 48.047) Acc@5: 71.680 ( 71.680) Test: [ 48/48] Time: 0.089 (0.330) Loss: 2.418 ( 2.661) Acc@1: 50.825 ( 47.768) Acc@5: 74.882 ( 70.326) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-109.pth.tar', 48.26400001464844) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-106.pth.tar', 47.84799999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-110.pth.tar', 47.76799997070312) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-108.pth.tar', 47.704000059814454) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-103.pth.tar', 47.651999973144534) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-107.pth.tar', 47.58400004394531) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-105.pth.tar', 47.41600003295898) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-98.pth.tar', 47.17599997558594) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-99.pth.tar', 47.02000002685547) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-101.pth.tar', 46.99600001342773) Train: 111 [ 0/156 ( 1%)] Loss: 1.99 (1.99) Time: 1.625s, 630.28/s (1.625s, 630.28/s) LR: 6.309e-02 Data: 1.255 (1.255) Train: 111 [ 50/156 ( 33%)] Loss: 1.91 (1.93) Time: 0.408s, 2510.62/s (0.428s, 2394.06/s) LR: 6.309e-02 Data: 0.027 (0.051) Train: 111 [ 100/156 ( 65%)] Loss: 1.98 (1.94) Time: 0.410s, 2497.19/s (0.418s, 2450.66/s) LR: 6.309e-02 Data: 0.027 (0.039) Train: 111 [ 150/156 ( 97%)] Loss: 1.90 (1.94) Time: 0.407s, 2516.12/s (0.415s, 2467.04/s) LR: 6.309e-02 Data: 0.026 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.443 (1.443) Loss: 2.559 ( 2.559) Acc@1: 46.582 ( 46.582) Acc@5: 71.387 ( 71.387) Test: [ 48/48] Time: 0.089 (0.328) Loss: 2.399 ( 2.651) Acc@1: 52.594 ( 47.826) Acc@5: 75.354 ( 70.692) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-109.pth.tar', 48.26400001464844) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-106.pth.tar', 47.84799999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-111.pth.tar', 47.82600002807617) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-110.pth.tar', 47.76799997070312) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-108.pth.tar', 47.704000059814454) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-103.pth.tar', 47.651999973144534) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-107.pth.tar', 47.58400004394531) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-105.pth.tar', 47.41600003295898) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-98.pth.tar', 47.17599997558594) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-99.pth.tar', 47.02000002685547) Train: 112 [ 0/156 ( 1%)] Loss: 1.85 (1.85) Time: 1.600s, 640.01/s (1.600s, 640.01/s) LR: 6.007e-02 Data: 1.229 (1.229) Train: 112 [ 50/156 ( 33%)] Loss: 1.97 (1.92) Time: 0.406s, 2521.95/s (0.428s, 2393.33/s) LR: 6.007e-02 Data: 0.028 (0.051) Train: 112 [ 100/156 ( 65%)] Loss: 1.95 (1.92) Time: 0.411s, 2493.17/s (0.417s, 2454.83/s) LR: 6.007e-02 Data: 0.028 (0.039) Train: 112 [ 150/156 ( 97%)] Loss: 1.93 (1.93) Time: 0.409s, 2502.14/s (0.415s, 2470.09/s) LR: 6.007e-02 Data: 0.025 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.449 (1.449) Loss: 2.545 ( 2.545) Acc@1: 48.047 ( 48.047) Acc@5: 72.266 ( 72.266) Test: [ 48/48] Time: 0.089 (0.329) Loss: 2.425 ( 2.636) Acc@1: 50.236 ( 48.314) Acc@5: 74.882 ( 70.678) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-112.pth.tar', 48.31399997314453) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-109.pth.tar', 48.26400001464844) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-106.pth.tar', 47.84799999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-111.pth.tar', 47.82600002807617) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-110.pth.tar', 47.76799997070312) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-108.pth.tar', 47.704000059814454) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-103.pth.tar', 47.651999973144534) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-107.pth.tar', 47.58400004394531) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-105.pth.tar', 47.41600003295898) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-98.pth.tar', 47.17599997558594) Train: 113 [ 0/156 ( 1%)] Loss: 1.85 (1.85) Time: 1.710s, 598.76/s (1.710s, 598.76/s) LR: 5.711e-02 Data: 1.291 (1.291) Train: 113 [ 50/156 ( 33%)] Loss: 1.93 (1.90) Time: 0.400s, 2558.98/s (0.426s, 2402.77/s) LR: 5.711e-02 Data: 0.026 (0.052) Train: 113 [ 100/156 ( 65%)] Loss: 1.95 (1.91) Time: 0.401s, 2556.46/s (0.414s, 2472.65/s) LR: 5.711e-02 Data: 0.025 (0.040) Train: 113 [ 150/156 ( 97%)] Loss: 1.86 (1.91) Time: 0.407s, 2518.21/s (0.411s, 2490.26/s) LR: 5.711e-02 Data: 0.026 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.426 (1.426) Loss: 2.553 ( 2.553) Acc@1: 48.926 ( 48.926) Acc@5: 71.875 ( 71.875) Test: [ 48/48] Time: 0.090 (0.329) Loss: 2.408 ( 2.645) Acc@1: 50.943 ( 48.094) Acc@5: 74.646 ( 70.644) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-112.pth.tar', 48.31399997314453) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-109.pth.tar', 48.26400001464844) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-113.pth.tar', 48.094000021972654) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-106.pth.tar', 47.84799999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-111.pth.tar', 47.82600002807617) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-110.pth.tar', 47.76799997070312) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-108.pth.tar', 47.704000059814454) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-103.pth.tar', 47.651999973144534) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-107.pth.tar', 47.58400004394531) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-105.pth.tar', 47.41600003295898) Train: 114 [ 0/156 ( 1%)] Loss: 1.93 (1.93) Time: 1.763s, 580.84/s (1.763s, 580.84/s) LR: 5.421e-02 Data: 1.141 (1.141) Train: 114 [ 50/156 ( 33%)] Loss: 1.94 (1.90) Time: 0.408s, 2512.17/s (0.437s, 2342.47/s) LR: 5.421e-02 Data: 0.027 (0.049) Train: 114 [ 100/156 ( 65%)] Loss: 1.91 (1.90) Time: 0.411s, 2492.80/s (0.424s, 2416.73/s) LR: 5.421e-02 Data: 0.027 (0.038) Train: 114 [ 150/156 ( 97%)] Loss: 1.92 (1.91) Time: 0.408s, 2510.73/s (0.419s, 2443.59/s) LR: 5.421e-02 Data: 0.026 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.442 (1.442) Loss: 2.565 ( 2.565) Acc@1: 49.023 ( 49.023) Acc@5: 72.461 ( 72.461) Test: [ 48/48] Time: 0.090 (0.331) Loss: 2.445 ( 2.666) Acc@1: 52.005 ( 47.712) Acc@5: 75.118 ( 70.378) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-112.pth.tar', 48.31399997314453) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-109.pth.tar', 48.26400001464844) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-113.pth.tar', 48.094000021972654) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-106.pth.tar', 47.84799999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-111.pth.tar', 47.82600002807617) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-110.pth.tar', 47.76799997070312) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-114.pth.tar', 47.71200003051758) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-108.pth.tar', 47.704000059814454) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-103.pth.tar', 47.651999973144534) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-107.pth.tar', 47.58400004394531) Train: 115 [ 0/156 ( 1%)] Loss: 1.89 (1.89) Time: 2.013s, 508.74/s (2.013s, 508.74/s) LR: 5.137e-02 Data: 1.558 (1.558) Train: 115 [ 50/156 ( 33%)] Loss: 1.92 (1.91) Time: 0.408s, 2512.48/s (0.438s, 2339.45/s) LR: 5.137e-02 Data: 0.028 (0.057) Train: 115 [ 100/156 ( 65%)] Loss: 1.88 (1.91) Time: 0.408s, 2508.70/s (0.424s, 2417.58/s) LR: 5.137e-02 Data: 0.027 (0.042) Train: 115 [ 150/156 ( 97%)] Loss: 1.95 (1.91) Time: 0.408s, 2508.20/s (0.419s, 2444.27/s) LR: 5.137e-02 Data: 0.025 (0.037) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.450 (1.450) Loss: 2.511 ( 2.511) Acc@1: 49.902 ( 49.902) Acc@5: 73.535 ( 73.535) Test: [ 48/48] Time: 0.089 (0.329) Loss: 2.386 ( 2.614) Acc@1: 52.476 ( 48.742) Acc@5: 75.943 ( 71.136) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-115.pth.tar', 48.74200004150391) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-112.pth.tar', 48.31399997314453) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-109.pth.tar', 48.26400001464844) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-113.pth.tar', 48.094000021972654) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-106.pth.tar', 47.84799999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-111.pth.tar', 47.82600002807617) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-110.pth.tar', 47.76799997070312) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-114.pth.tar', 47.71200003051758) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-108.pth.tar', 47.704000059814454) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-103.pth.tar', 47.651999973144534) Train: 116 [ 0/156 ( 1%)] Loss: 1.86 (1.86) Time: 2.155s, 475.22/s (2.155s, 475.22/s) LR: 4.860e-02 Data: 1.103 (1.103) Train: 116 [ 50/156 ( 33%)] Loss: 1.86 (1.88) Time: 0.408s, 2510.34/s (0.441s, 2319.44/s) LR: 4.860e-02 Data: 0.027 (0.049) Train: 116 [ 100/156 ( 65%)] Loss: 1.89 (1.89) Time: 0.409s, 2504.43/s (0.426s, 2405.60/s) LR: 4.860e-02 Data: 0.026 (0.038) Train: 116 [ 150/156 ( 97%)] Loss: 1.93 (1.89) Time: 0.407s, 2517.80/s (0.420s, 2438.06/s) LR: 4.860e-02 Data: 0.026 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.437 (1.437) Loss: 2.541 ( 2.541) Acc@1: 49.707 ( 49.707) Acc@5: 71.973 ( 71.973) Test: [ 48/48] Time: 0.089 (0.329) Loss: 2.396 ( 2.627) Acc@1: 52.476 ( 48.632) Acc@5: 74.882 ( 70.906) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-115.pth.tar', 48.74200004150391) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-116.pth.tar', 48.63200004150391) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-112.pth.tar', 48.31399997314453) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-109.pth.tar', 48.26400001464844) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-113.pth.tar', 48.094000021972654) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-106.pth.tar', 47.84799999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-111.pth.tar', 47.82600002807617) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-110.pth.tar', 47.76799997070312) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-114.pth.tar', 47.71200003051758) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-108.pth.tar', 47.704000059814454) Train: 117 [ 0/156 ( 1%)] Loss: 1.85 
(1.85) Time: 1.598s, 640.82/s (1.598s, 640.82/s) LR: 4.590e-02 Data: 1.226 (1.226) Train: 117 [ 50/156 ( 33%)] Loss: 1.85 (1.88) Time: 0.408s, 2510.63/s (0.431s, 2378.60/s) LR: 4.590e-02 Data: 0.026 (0.051) Train: 117 [ 100/156 ( 65%)] Loss: 1.85 (1.89) Time: 0.407s, 2518.33/s (0.420s, 2440.07/s) LR: 4.590e-02 Data: 0.029 (0.039) Train: 117 [ 150/156 ( 97%)] Loss: 1.90 (1.89) Time: 0.405s, 2525.64/s (0.415s, 2465.53/s) LR: 4.590e-02 Data: 0.025 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.435 (1.435) Loss: 2.546 ( 2.546) Acc@1: 49.414 ( 49.414) Acc@5: 71.875 ( 71.875) Test: [ 48/48] Time: 0.090 (0.329) Loss: 2.393 ( 2.629) Acc@1: 51.415 ( 48.620) Acc@5: 75.354 ( 70.876) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-115.pth.tar', 48.74200004150391) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-116.pth.tar', 48.63200004150391) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-117.pth.tar', 48.620000032958984) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-112.pth.tar', 48.31399997314453) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-109.pth.tar', 48.26400001464844) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-113.pth.tar', 48.094000021972654) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-106.pth.tar', 47.84799999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-111.pth.tar', 47.82600002807617) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-110.pth.tar', 47.76799997070312) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-114.pth.tar', 47.71200003051758) Train: 118 [ 0/156 ( 1%)] Loss: 1.86 (1.86) Time: 1.721s, 595.16/s (1.721s, 595.16/s) LR: 4.326e-02 Data: 1.346 (1.346) Train: 118 [ 50/156 ( 33%)] Loss: 1.89 (1.88) Time: 0.408s, 2512.05/s (0.435s, 2353.42/s) LR: 4.326e-02 Data: 0.027 (0.053) Train: 118 [ 100/156 ( 65%)] Loss: 1.90 (1.88) Time: 0.406s, 2522.97/s (0.422s, 2428.88/s) LR: 4.326e-02 Data: 0.026 (0.040) Train: 118 [ 150/156 ( 97%)] Loss: 1.86 (1.89) Time: 0.403s, 2538.85/s (0.417s, 2456.80/s) LR: 4.326e-02 Data: 0.024 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.418 (1.418) Loss: 2.543 ( 2.543) Acc@1: 48.340 ( 48.340) Acc@5: 73.242 ( 73.242) Test: [ 48/48] Time: 0.090 (0.329) Loss: 2.386 ( 2.619) Acc@1: 53.066 ( 48.528) Acc@5: 75.708 ( 71.218) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-115.pth.tar', 48.74200004150391) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-116.pth.tar', 48.63200004150391) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-117.pth.tar', 48.620000032958984) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-118.pth.tar', 48.5280000390625) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-112.pth.tar', 48.31399997314453) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-109.pth.tar', 48.26400001464844) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-113.pth.tar', 48.094000021972654) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-106.pth.tar', 47.84799999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-111.pth.tar', 47.82600002807617) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-110.pth.tar', 47.76799997070312) Train: 119 [ 0/156 ( 1%)] Loss: 1.85 (1.85) Time: 1.821s, 562.29/s (1.821s, 562.29/s) LR: 4.069e-02 Data: 1.444 (1.444) Train: 119 [ 50/156 ( 33%)] Loss: 1.87 (1.87) Time: 
0.408s, 2509.41/s (0.438s, 2339.95/s) LR: 4.069e-02 Data: 0.026 (0.055) Train: 119 [ 100/156 ( 65%)] Loss: 1.84 (1.87) Time: 0.412s, 2486.82/s (0.424s, 2416.93/s) LR: 4.069e-02 Data: 0.027 (0.041) Train: 119 [ 150/156 ( 97%)] Loss: 1.84 (1.87) Time: 0.408s, 2507.52/s (0.419s, 2443.69/s) LR: 4.069e-02 Data: 0.026 (0.037) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.447 (1.447) Loss: 2.575 ( 2.575) Acc@1: 48.438 ( 48.438) Acc@5: 71.777 ( 71.777) Test: [ 48/48] Time: 0.090 (0.329) Loss: 2.414 ( 2.653) Acc@1: 52.594 ( 48.346) Acc@5: 74.646 ( 70.700) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-115.pth.tar', 48.74200004150391) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-116.pth.tar', 48.63200004150391) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-117.pth.tar', 48.620000032958984) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-118.pth.tar', 48.5280000390625) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-119.pth.tar', 48.34600002807617) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-112.pth.tar', 48.31399997314453) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-109.pth.tar', 48.26400001464844) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-113.pth.tar', 48.094000021972654) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-106.pth.tar', 47.84799999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-111.pth.tar', 47.82600002807617) Train: 120 [ 0/156 ( 1%)] Loss: 1.93 (1.93) Time: 1.704s, 601.08/s (1.704s, 601.08/s) LR: 3.820e-02 Data: 1.327 (1.327) Train: 120 [ 50/156 ( 33%)] Loss: 1.83 (1.86) Time: 0.408s, 2512.61/s (0.434s, 2358.59/s) LR: 3.820e-02 Data: 0.027 (0.053) Train: 120 [ 100/156 ( 65%)] Loss: 1.90 (1.86) Time: 0.403s, 2542.12/s (0.420s, 2440.35/s) LR: 3.820e-02 Data: 0.027 (0.040) Train: 120 [ 150/156 ( 97%)] Loss: 1.87 (1.87) Time: 0.402s, 2544.18/s (0.415s, 2469.50/s) LR: 3.820e-02 Data: 0.025 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.420 (1.420) Loss: 2.569 ( 2.569) Acc@1: 49.121 ( 49.121) Acc@5: 72.656 ( 72.656) Test: [ 48/48] Time: 0.089 (0.328) Loss: 2.398 ( 2.635) Acc@1: 53.066 ( 48.512) Acc@5: 75.118 ( 70.936) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-115.pth.tar', 48.74200004150391) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-116.pth.tar', 48.63200004150391) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-117.pth.tar', 48.620000032958984) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-118.pth.tar', 48.5280000390625) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-120.pth.tar', 48.5120000390625) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-119.pth.tar', 48.34600002807617) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-112.pth.tar', 48.31399997314453) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-109.pth.tar', 48.26400001464844) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-113.pth.tar', 48.094000021972654) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-106.pth.tar', 47.84799999267578) Train: 121 [ 0/156 ( 1%)] Loss: 1.80 (1.80) Time: 1.833s, 558.67/s (1.833s, 558.67/s) LR: 3.577e-02 Data: 1.374 (1.374) Train: 121 [ 50/156 ( 33%)] Loss: 1.89 (1.86) Time: 0.409s, 2505.96/s (0.436s, 2349.90/s) LR: 3.577e-02 Data: 0.028 (0.054) Train: 121 [ 100/156 ( 65%)] Loss: 1.85 (1.86) Time: 0.403s, 
2538.06/s (0.421s, 2432.40/s) LR: 3.577e-02 Data: 0.028 (0.041) Train: 121 [ 150/156 ( 97%)] Loss: 1.92 (1.86) Time: 0.399s, 2564.76/s (0.415s, 2468.72/s) LR: 3.577e-02 Data: 0.025 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.444 (1.444) Loss: 2.513 ( 2.513) Acc@1: 49.414 ( 49.414) Acc@5: 72.363 ( 72.363) Test: [ 48/48] Time: 0.089 (0.329) Loss: 2.403 ( 2.614) Acc@1: 52.005 ( 48.766) Acc@5: 75.825 ( 71.296) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-121.pth.tar', 48.76599996582031) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-115.pth.tar', 48.74200004150391) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-116.pth.tar', 48.63200004150391) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-117.pth.tar', 48.620000032958984) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-118.pth.tar', 48.5280000390625) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-120.pth.tar', 48.5120000390625) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-119.pth.tar', 48.34600002807617) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-112.pth.tar', 48.31399997314453) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-109.pth.tar', 48.26400001464844) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-113.pth.tar', 48.094000021972654) Train: 122 [ 0/156 ( 1%)] Loss: 1.83 (1.83) Time: 1.893s, 540.97/s (1.893s, 540.97/s) LR: 3.342e-02 Data: 1.205 (1.205) Train: 122 [ 50/156 ( 33%)] Loss: 1.81 (1.85) Time: 0.404s, 2533.62/s (0.433s, 2364.66/s) LR: 3.342e-02 Data: 0.028 (0.050) Train: 122 [ 100/156 ( 65%)] Loss: 1.84 (1.86) Time: 0.409s, 2504.93/s (0.420s, 2439.53/s) LR: 3.342e-02 Data: 0.029 (0.039) Train: 122 [ 150/156 ( 97%)] Loss: 1.90 (1.86) Time: 0.407s, 2513.12/s (0.416s, 2459.11/s) LR: 3.342e-02 Data: 0.026 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.474 (1.474) Loss: 2.538 ( 2.538) Acc@1: 49.219 ( 49.219) Acc@5: 71.973 ( 71.973) Test: [ 48/48] Time: 0.089 (0.330) Loss: 2.401 ( 2.620) Acc@1: 51.887 ( 48.738) Acc@5: 76.415 ( 71.152) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-121.pth.tar', 48.76599996582031) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-115.pth.tar', 48.74200004150391) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-122.pth.tar', 48.737999979248045) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-116.pth.tar', 48.63200004150391) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-117.pth.tar', 48.620000032958984) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-118.pth.tar', 48.5280000390625) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-120.pth.tar', 48.5120000390625) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-119.pth.tar', 48.34600002807617) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-112.pth.tar', 48.31399997314453) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-109.pth.tar', 48.26400001464844) Train: 123 [ 0/156 ( 1%)] Loss: 1.89 (1.89) Time: 1.717s, 596.45/s (1.717s, 596.45/s) LR: 3.113e-02 Data: 1.346 (1.346) Train: 123 [ 50/156 ( 33%)] Loss: 1.84 (1.84) Time: 0.401s, 2553.86/s (0.427s, 2400.79/s) LR: 3.113e-02 Data: 0.027 (0.053) Train: 123 [ 100/156 ( 65%)] Loss: 1.88 (1.85) Time: 0.401s, 2551.95/s (0.414s, 2471.16/s) LR: 3.113e-02 Data: 0.027 (0.040) Train: 123 [ 150/156 ( 97%)] Loss: 1.94 (1.85) Time: 0.406s, 2521.15/s 
(0.411s, 2489.59/s) LR: 3.113e-02 Data: 0.026 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.459 (1.459) Loss: 2.545 ( 2.545) Acc@1: 49.609 ( 49.609) Acc@5: 72.168 ( 72.168) Test: [ 48/48] Time: 0.090 (0.329) Loss: 2.414 ( 2.635) Acc@1: 50.825 ( 48.654) Acc@5: 74.175 ( 71.040) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-121.pth.tar', 48.76599996582031) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-115.pth.tar', 48.74200004150391) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-122.pth.tar', 48.737999979248045) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-123.pth.tar', 48.653999970703126) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-116.pth.tar', 48.63200004150391) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-117.pth.tar', 48.620000032958984) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-118.pth.tar', 48.5280000390625) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-120.pth.tar', 48.5120000390625) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-119.pth.tar', 48.34600002807617) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-112.pth.tar', 48.31399997314453) Train: 124 [ 0/156 ( 1%)] Loss: 1.87 (1.87) Time: 1.664s, 615.32/s (1.664s, 615.32/s) LR: 2.893e-02 Data: 1.288 (1.288) Train: 124 [ 50/156 ( 33%)] Loss: 1.87 (1.83) Time: 0.410s, 2495.63/s (0.436s, 2350.35/s) LR: 2.893e-02 Data: 0.027 (0.053) Train: 124 [ 100/156 ( 65%)] Loss: 1.82 (1.84) Time: 0.401s, 2550.93/s (0.421s, 2433.70/s) LR: 2.893e-02 Data: 0.027 (0.041) Train: 124 [ 150/156 ( 97%)] Loss: 1.82 (1.84) Time: 0.398s, 2572.45/s (0.414s, 2472.17/s) LR: 2.893e-02 Data: 0.026 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.441 (1.441) Loss: 2.566 ( 2.566) Acc@1: 48.242 ( 48.242) Acc@5: 71.777 ( 71.777) Test: [ 48/48] Time: 0.088 (0.331) Loss: 2.392 ( 2.634) Acc@1: 52.948 ( 48.810) Acc@5: 75.590 ( 70.990) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-124.pth.tar', 48.80999998779297) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-121.pth.tar', 48.76599996582031) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-115.pth.tar', 48.74200004150391) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-122.pth.tar', 48.737999979248045) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-123.pth.tar', 48.653999970703126) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-116.pth.tar', 48.63200004150391) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-117.pth.tar', 48.620000032958984) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-118.pth.tar', 48.5280000390625) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-120.pth.tar', 48.5120000390625) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-119.pth.tar', 48.34600002807617) Train: 125 [ 0/156 ( 1%)] Loss: 1.77 (1.77) Time: 1.577s, 649.20/s (1.577s, 649.20/s) LR: 2.679e-02 Data: 1.176 (1.176) Train: 125 [ 50/156 ( 33%)] Loss: 1.82 (1.83) Time: 0.399s, 2568.96/s (0.422s, 2424.87/s) LR: 2.679e-02 Data: 0.026 (0.050) Train: 125 [ 100/156 ( 65%)] Loss: 1.86 (1.84) Time: 0.402s, 2544.37/s (0.412s, 2487.19/s) LR: 2.679e-02 Data: 0.026 (0.038) Train: 125 [ 150/156 ( 97%)] Loss: 1.80 (1.84) Time: 0.403s, 2543.15/s (0.409s, 2502.25/s) LR: 2.679e-02 Data: 0.025 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.442 (1.442) 
Loss: 2.549 ( 2.549) Acc@1: 49.805 ( 49.805) Acc@5: 72.559 ( 72.559) Test: [ 48/48] Time: 0.089 (0.327) Loss: 2.412 ( 2.628) Acc@1: 51.769 ( 49.086) Acc@5: 74.882 ( 71.226) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-125.pth.tar', 49.08599999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-124.pth.tar', 48.80999998779297) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-121.pth.tar', 48.76599996582031) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-115.pth.tar', 48.74200004150391) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-122.pth.tar', 48.737999979248045) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-123.pth.tar', 48.653999970703126) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-116.pth.tar', 48.63200004150391) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-117.pth.tar', 48.620000032958984) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-118.pth.tar', 48.5280000390625) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-120.pth.tar', 48.5120000390625) Train: 126 [ 0/156 ( 1%)] Loss: 1.85 (1.85) Time: 2.154s, 475.30/s (2.154s, 475.30/s) LR: 2.474e-02 Data: 1.080 (1.080) Train: 126 [ 50/156 ( 33%)] Loss: 1.82 (1.83) Time: 0.411s, 2492.92/s (0.443s, 2311.11/s) LR: 2.474e-02 Data: 0.026 (0.048) Train: 126 [ 100/156 ( 65%)] Loss: 1.78 (1.83) Time: 0.408s, 2507.21/s (0.427s, 2400.05/s) LR: 2.474e-02 Data: 0.027 (0.038) Train: 126 [ 150/156 ( 97%)] Loss: 1.84 (1.83) Time: 0.407s, 2515.95/s (0.421s, 2433.27/s) LR: 2.474e-02 Data: 0.026 (0.034) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.436 (1.436) Loss: 2.533 ( 2.533) Acc@1: 49.609 ( 49.609) Acc@5: 73.145 ( 73.145) Test: [ 48/48] Time: 0.090 (0.330) Loss: 2.378 ( 2.615) Acc@1: 52.476 ( 49.094) Acc@5: 75.354 ( 71.240) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-126.pth.tar', 49.094000041503904) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-125.pth.tar', 49.08599999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-124.pth.tar', 48.80999998779297) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-121.pth.tar', 48.76599996582031) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-115.pth.tar', 48.74200004150391) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-122.pth.tar', 48.737999979248045) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-123.pth.tar', 48.653999970703126) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-116.pth.tar', 48.63200004150391) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-117.pth.tar', 48.620000032958984) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-118.pth.tar', 48.5280000390625) Train: 127 [ 0/156 ( 1%)] Loss: 1.78 (1.78) Time: 1.858s, 550.99/s (1.858s, 550.99/s) LR: 2.276e-02 Data: 1.163 (1.163) Train: 127 [ 50/156 ( 33%)] Loss: 1.76 (1.82) Time: 0.405s, 2527.71/s (0.435s, 2353.88/s) LR: 2.276e-02 Data: 0.027 (0.049) Train: 127 [ 100/156 ( 65%)] Loss: 1.78 (1.83) Time: 0.404s, 2534.77/s (0.420s, 2440.14/s) LR: 2.276e-02 Data: 0.027 (0.038) Train: 127 [ 150/156 ( 97%)] Loss: 1.86 (1.83) Time: 0.405s, 2528.78/s (0.415s, 2468.22/s) LR: 2.276e-02 Data: 0.026 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.439 (1.439) Loss: 2.569 ( 2.569) Acc@1: 49.219 ( 49.219) Acc@5: 73.145 ( 73.145) Test: [ 48/48] Time: 0.090 (0.329) Loss: 2.410 ( 2.641) Acc@1: 
51.887 ( 48.856) Acc@5: 76.061 ( 71.004) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-126.pth.tar', 49.094000041503904) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-125.pth.tar', 49.08599999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-127.pth.tar', 48.856000043945315) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-124.pth.tar', 48.80999998779297) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-121.pth.tar', 48.76599996582031) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-115.pth.tar', 48.74200004150391) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-122.pth.tar', 48.737999979248045) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-123.pth.tar', 48.653999970703126) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-116.pth.tar', 48.63200004150391) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-117.pth.tar', 48.620000032958984) Train: 128 [ 0/156 ( 1%)] Loss: 1.79 (1.79) Time: 1.547s, 661.95/s (1.547s, 661.95/s) LR: 2.086e-02 Data: 1.129 (1.129) Train: 128 [ 50/156 ( 33%)] Loss: 1.81 (1.82) Time: 0.408s, 2512.33/s (0.434s, 2360.52/s) LR: 2.086e-02 Data: 0.028 (0.048) Train: 128 [ 100/156 ( 65%)] Loss: 1.84 (1.82) Time: 0.409s, 2506.05/s (0.421s, 2429.93/s) LR: 2.086e-02 Data: 0.029 (0.038) Train: 128 [ 150/156 ( 97%)] Loss: 1.85 (1.83) Time: 0.408s, 2509.58/s (0.417s, 2453.46/s) LR: 2.086e-02 Data: 0.023 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.420 (1.420) Loss: 2.549 ( 2.549) Acc@1: 49.805 ( 49.805) Acc@5: 72.559 ( 72.559) Test: [ 48/48] Time: 0.089 (0.331) Loss: 2.386 ( 2.628) Acc@1: 52.948 ( 49.034) Acc@5: 75.943 ( 71.188) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-126.pth.tar', 49.094000041503904) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-125.pth.tar', 49.08599999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-128.pth.tar', 49.034000052490235) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-127.pth.tar', 48.856000043945315) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-124.pth.tar', 48.80999998779297) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-121.pth.tar', 48.76599996582031) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-115.pth.tar', 48.74200004150391) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-122.pth.tar', 48.737999979248045) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-123.pth.tar', 48.653999970703126) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-116.pth.tar', 48.63200004150391) Train: 129 [ 0/156 ( 1%)] Loss: 1.87 (1.87) Time: 1.543s, 663.75/s (1.543s, 663.75/s) LR: 1.903e-02 Data: 1.174 (1.174) Train: 129 [ 50/156 ( 33%)] Loss: 1.91 (1.83) Time: 0.398s, 2570.05/s (0.421s, 2431.38/s) LR: 1.903e-02 Data: 0.027 (0.050) Train: 129 [ 100/156 ( 65%)] Loss: 1.88 (1.83) Time: 0.398s, 2570.95/s (0.410s, 2499.45/s) LR: 1.903e-02 Data: 0.027 (0.039) Train: 129 [ 150/156 ( 97%)] Loss: 1.81 (1.82) Time: 0.397s, 2576.84/s (0.406s, 2521.43/s) LR: 1.903e-02 Data: 0.025 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.423 (1.423) Loss: 2.538 ( 2.538) Acc@1: 50.195 ( 50.195) Acc@5: 73.730 ( 73.730) Test: [ 48/48] Time: 0.089 (0.330) Loss: 2.404 ( 2.637) Acc@1: 52.241 ( 49.100) Acc@5: 75.354 ( 71.130) Current checkpoints: 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-129.pth.tar', 49.10000000366211) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-126.pth.tar', 49.094000041503904) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-125.pth.tar', 49.08599999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-128.pth.tar', 49.034000052490235) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-127.pth.tar', 48.856000043945315) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-124.pth.tar', 48.80999998779297) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-121.pth.tar', 48.76599996582031) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-115.pth.tar', 48.74200004150391) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-122.pth.tar', 48.737999979248045) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-123.pth.tar', 48.653999970703126) Train: 130 [ 0/156 ( 1%)] Loss: 1.81 (1.81) Time: 1.782s, 574.60/s (1.782s, 574.60/s) LR: 1.729e-02 Data: 1.381 (1.381) Train: 130 [ 50/156 ( 33%)] Loss: 1.83 (1.82) Time: 0.406s, 2522.82/s (0.430s, 2379.80/s) LR: 1.729e-02 Data: 0.026 (0.054) Train: 130 [ 100/156 ( 65%)] Loss: 1.83 (1.82) Time: 0.405s, 2531.00/s (0.418s, 2451.45/s) LR: 1.729e-02 Data: 0.027 (0.041) Train: 130 [ 150/156 ( 97%)] Loss: 1.84 (1.82) Time: 0.407s, 2515.30/s (0.414s, 2471.73/s) LR: 1.729e-02 Data: 0.026 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.451 (1.451) Loss: 2.537 ( 2.537) Acc@1: 48.438 ( 48.438) Acc@5: 73.828 ( 73.828) Test: [ 48/48] Time: 0.089 (0.329) Loss: 2.395 ( 2.633) Acc@1: 52.712 ( 48.980) Acc@5: 75.708 ( 71.176) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-129.pth.tar', 49.10000000366211) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-126.pth.tar', 49.094000041503904) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-125.pth.tar', 49.08599999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-128.pth.tar', 49.034000052490235) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-130.pth.tar', 48.98000001464844) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-127.pth.tar', 48.856000043945315) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-124.pth.tar', 48.80999998779297) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-121.pth.tar', 48.76599996582031) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-115.pth.tar', 48.74200004150391) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-122.pth.tar', 48.737999979248045) Train: 131 [ 0/156 ( 1%)] Loss: 1.78 (1.78) Time: 1.555s, 658.40/s (1.555s, 658.40/s) LR: 1.563e-02 Data: 1.177 (1.177) Train: 131 [ 50/156 ( 33%)] Loss: 1.77 (1.81) Time: 0.407s, 2513.31/s (0.431s, 2378.31/s) LR: 1.563e-02 Data: 0.026 (0.050) Train: 131 [ 100/156 ( 65%)] Loss: 1.83 (1.81) Time: 0.405s, 2527.91/s (0.420s, 2440.60/s) LR: 1.563e-02 Data: 0.028 (0.039) Train: 131 [ 150/156 ( 97%)] Loss: 1.86 (1.81) Time: 0.403s, 2541.09/s (0.414s, 2470.94/s) LR: 1.563e-02 Data: 0.025 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.451 (1.451) Loss: 2.548 ( 2.548) Acc@1: 49.512 ( 49.512) Acc@5: 73.535 ( 73.535) Test: [ 48/48] Time: 0.088 (0.329) Loss: 2.417 ( 2.627) Acc@1: 52.948 ( 49.218) Acc@5: 75.000 ( 71.254) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-131.pth.tar', 49.21800005249023) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-129.pth.tar', 49.10000000366211) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-126.pth.tar', 49.094000041503904) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-125.pth.tar', 49.08599999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-128.pth.tar', 49.034000052490235) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-130.pth.tar', 48.98000001464844) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-127.pth.tar', 48.856000043945315) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-124.pth.tar', 48.80999998779297) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-121.pth.tar', 48.76599996582031) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-115.pth.tar', 48.74200004150391) Train: 132 [ 0/156 ( 1%)] Loss: 1.75 (1.75) Time: 1.749s, 585.57/s (1.749s, 585.57/s) LR: 1.404e-02 Data: 1.139 (1.139) Train: 132 [ 50/156 ( 33%)] Loss: 1.80 (1.81) Time: 0.407s, 2517.55/s (0.429s, 2384.74/s) LR: 1.404e-02 Data: 0.028 (0.049) Train: 132 [ 100/156 ( 65%)] Loss: 1.83 (1.81) Time: 0.411s, 2493.03/s (0.419s, 2446.06/s) LR: 1.404e-02 Data: 0.029 (0.038) Train: 132 [ 150/156 ( 97%)] Loss: 1.85 (1.81) Time: 0.403s, 2543.98/s (0.415s, 2468.06/s) LR: 1.404e-02 Data: 0.025 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.428 (1.428) Loss: 2.547 ( 2.547) Acc@1: 49.902 ( 49.902) Acc@5: 73.047 ( 73.047) Test: [ 48/48] Time: 0.089 (0.331) Loss: 2.403 ( 2.628) Acc@1: 52.948 ( 49.132) Acc@5: 75.590 ( 71.338) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-131.pth.tar', 49.21800005249023) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-132.pth.tar', 49.131999987792966) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-129.pth.tar', 49.10000000366211) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-126.pth.tar', 49.094000041503904) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-125.pth.tar', 49.08599999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-128.pth.tar', 49.034000052490235) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-130.pth.tar', 48.98000001464844) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-127.pth.tar', 48.856000043945315) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-124.pth.tar', 48.80999998779297) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-121.pth.tar', 48.76599996582031) Train: 133 [ 0/156 ( 1%)] Loss: 1.81 (1.81) Time: 1.595s, 642.09/s (1.595s, 642.09/s) LR: 1.254e-02 Data: 1.208 (1.208) Train: 133 [ 50/156 ( 33%)] Loss: 1.79 (1.80) Time: 0.404s, 2534.22/s (0.425s, 2408.25/s) LR: 1.254e-02 Data: 0.030 (0.051) Train: 133 [ 100/156 ( 65%)] Loss: 1.87 (1.80) Time: 0.409s, 2503.07/s (0.415s, 2468.68/s) LR: 1.254e-02 Data: 0.029 (0.039) Train: 133 [ 150/156 ( 97%)] Loss: 1.82 (1.80) Time: 0.405s, 2531.03/s (0.412s, 2483.18/s) LR: 1.254e-02 Data: 0.025 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.451 (1.451) Loss: 2.524 ( 2.524) Acc@1: 50.000 ( 50.000) Acc@5: 73.535 ( 73.535) Test: [ 48/48] Time: 0.089 (0.329) Loss: 2.392 ( 2.613) Acc@1: 52.358 ( 49.220) Acc@5: 75.943 ( 71.462) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-133.pth.tar', 49.22000005493164) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-131.pth.tar', 49.21800005249023) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-132.pth.tar', 49.131999987792966) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-129.pth.tar', 49.10000000366211) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-126.pth.tar', 49.094000041503904) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-125.pth.tar', 49.08599999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-128.pth.tar', 49.034000052490235) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-130.pth.tar', 48.98000001464844) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-127.pth.tar', 48.856000043945315) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-124.pth.tar', 48.80999998779297) Train: 134 [ 0/156 ( 1%)] Loss: 1.83 (1.83) Time: 1.644s, 623.01/s (1.644s, 623.01/s) LR: 1.112e-02 Data: 1.273 (1.273) Train: 134 [ 50/156 ( 33%)] Loss: 1.88 (1.80) Time: 0.399s, 2563.29/s (0.428s, 2394.22/s) LR: 1.112e-02 Data: 0.027 (0.054) Train: 134 [ 100/156 ( 65%)] Loss: 1.76 (1.80) Time: 0.401s, 2551.86/s (0.414s, 2473.57/s) LR: 1.112e-02 Data: 0.028 (0.041) Train: 134 [ 150/156 ( 97%)] Loss: 1.81 (1.80) Time: 0.399s, 2566.24/s (0.410s, 2499.22/s) LR: 1.112e-02 Data: 0.026 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.438 (1.438) Loss: 2.533 ( 2.533) Acc@1: 49.121 ( 49.121) Acc@5: 73.828 ( 73.828) Test: [ 48/48] Time: 0.090 (0.329) Loss: 2.391 ( 2.615) Acc@1: 53.066 ( 49.418) Acc@5: 76.415 ( 71.554) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-134.pth.tar', 49.41799997436524) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-133.pth.tar', 49.22000005493164) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-131.pth.tar', 49.21800005249023) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-132.pth.tar', 49.131999987792966) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-129.pth.tar', 49.10000000366211) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-126.pth.tar', 49.094000041503904) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-125.pth.tar', 49.08599999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-128.pth.tar', 49.034000052490235) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-130.pth.tar', 48.98000001464844) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-127.pth.tar', 48.856000043945315) Train: 135 [ 0/156 ( 1%)] Loss: 1.85 (1.85) Time: 1.653s, 619.62/s (1.653s, 619.62/s) LR: 9.789e-03 Data: 1.280 (1.280) Train: 135 [ 50/156 ( 33%)] Loss: 1.79 (1.79) Time: 0.406s, 2523.07/s (0.430s, 2383.77/s) LR: 9.789e-03 Data: 0.027 (0.052) Train: 135 [ 100/156 ( 65%)] Loss: 1.78 (1.80) Time: 0.409s, 2504.51/s (0.419s, 2443.32/s) LR: 9.789e-03 Data: 0.028 (0.040) Train: 135 [ 150/156 ( 97%)] Loss: 1.82 (1.80) Time: 0.406s, 2519.31/s (0.416s, 2464.06/s) LR: 9.789e-03 Data: 0.026 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.446 (1.446) Loss: 2.533 ( 2.533) Acc@1: 49.023 ( 49.023) Acc@5: 73.828 ( 73.828) Test: [ 48/48] Time: 0.089 (0.330) Loss: 2.388 ( 2.627) Acc@1: 52.830 ( 49.136) Acc@5: 76.297 ( 71.370) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-134.pth.tar', 49.41799997436524) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-133.pth.tar', 49.22000005493164) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-131.pth.tar', 49.21800005249023) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-135.pth.tar', 49.1360000012207) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-132.pth.tar', 49.131999987792966) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-129.pth.tar', 49.10000000366211) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-126.pth.tar', 49.094000041503904) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-125.pth.tar', 49.08599999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-128.pth.tar', 49.034000052490235) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-130.pth.tar', 48.98000001464844) Train: 136 [ 0/156 ( 1%)] Loss: 1.79 (1.79) Time: 1.921s, 533.05/s (1.921s, 533.05/s) LR: 8.536e-03 Data: 1.116 (1.116) Train: 136 [ 50/156 ( 33%)] Loss: 1.81 (1.79) Time: 0.413s, 2478.71/s (0.438s, 2340.30/s) LR: 8.536e-03 Data: 0.034 (0.049) Train: 136 [ 100/156 ( 65%)] Loss: 1.71 (1.79) Time: 0.408s, 2508.08/s (0.423s, 2420.85/s) LR: 8.536e-03 Data: 0.026 (0.038) Train: 136 [ 150/156 ( 97%)] Loss: 1.80 (1.79) Time: 0.406s, 2523.73/s (0.418s, 2448.48/s) LR: 8.536e-03 Data: 0.025 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.443 (1.443) Loss: 2.520 ( 2.520) Acc@1: 49.316 ( 49.316) Acc@5: 73.535 ( 73.535) Test: [ 48/48] Time: 0.090 (0.327) Loss: 2.381 ( 2.611) Acc@1: 53.184 ( 49.446) Acc@5: 76.297 ( 71.564) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-136.pth.tar', 49.44600002563477) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-134.pth.tar', 49.41799997436524) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-133.pth.tar', 49.22000005493164) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-131.pth.tar', 49.21800005249023) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-135.pth.tar', 49.1360000012207) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-132.pth.tar', 49.131999987792966) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-129.pth.tar', 49.10000000366211) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-126.pth.tar', 49.094000041503904) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-125.pth.tar', 49.08599999267578) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-128.pth.tar', 49.034000052490235) Train: 137 [ 0/156 ( 1%)] Loss: 1.84 (1.84) Time: 1.681s, 609.12/s (1.681s, 609.12/s) LR: 7.367e-03 Data: 1.106 (1.106) Train: 137 [ 50/156 ( 33%)] Loss: 1.80 (1.79) Time: 0.407s, 2516.09/s (0.433s, 2364.23/s) LR: 7.367e-03 Data: 0.027 (0.048) Train: 137 [ 100/156 ( 65%)] Loss: 1.80 (1.79) Time: 0.409s, 2503.16/s (0.420s, 2435.25/s) LR: 7.367e-03 Data: 0.029 (0.038) Train: 137 [ 150/156 ( 97%)] Loss: 1.79 (1.79) Time: 0.407s, 2518.20/s (0.416s, 2461.36/s) LR: 7.367e-03 Data: 0.025 (0.034) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.454 (1.454) Loss: 2.519 ( 2.519) Acc@1: 49.414 ( 49.414) Acc@5: 72.559 ( 72.559) Test: [ 48/48] Time: 0.089 (0.331) Loss: 2.381 ( 2.608) Acc@1: 52.948 ( 49.544) Acc@5: 75.943 ( 71.662) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-137.pth.tar', 49.54399998779297) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-136.pth.tar', 49.44600002563477) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-134.pth.tar', 49.41799997436524) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-133.pth.tar', 49.22000005493164) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-131.pth.tar', 49.21800005249023) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-135.pth.tar', 49.1360000012207) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-132.pth.tar', 49.131999987792966) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-129.pth.tar', 49.10000000366211) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-126.pth.tar', 49.094000041503904) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-125.pth.tar', 49.08599999267578) Train: 138 [ 0/156 ( 1%)] Loss: 1.75 (1.75) Time: 1.777s, 576.13/s (1.777s, 576.13/s) LR: 6.283e-03 Data: 1.404 (1.404) Train: 138 [ 50/156 ( 33%)] Loss: 1.79 (1.78) Time: 0.405s, 2529.84/s (0.433s, 2366.37/s) LR: 6.283e-03 Data: 0.026 (0.055) Train: 138 [ 100/156 ( 65%)] Loss: 1.83 (1.78) Time: 0.409s, 2504.60/s (0.420s, 2435.36/s) LR: 6.283e-03 Data: 0.028 (0.041) Train: 138 [ 150/156 ( 97%)] Loss: 1.79 (1.79) Time: 0.408s, 2509.80/s (0.417s, 2456.73/s) LR: 6.283e-03 Data: 0.026 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.436 (1.436) Loss: 2.525 ( 2.525) Acc@1: 49.219 ( 49.219) Acc@5: 73.145 ( 73.145) Test: [ 48/48] Time: 0.090 (0.330) Loss: 2.387 ( 2.612) Acc@1: 52.594 ( 49.518) Acc@5: 75.472 ( 71.552) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-137.pth.tar', 49.54399998779297) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-138.pth.tar', 49.51800002807617) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-136.pth.tar', 49.44600002563477) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-134.pth.tar', 49.41799997436524) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-133.pth.tar', 49.22000005493164) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-131.pth.tar', 49.21800005249023) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-135.pth.tar', 49.1360000012207) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-132.pth.tar', 49.131999987792966) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-129.pth.tar', 49.10000000366211) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-126.pth.tar', 49.094000041503904) Train: 139 [ 0/156 ( 1%)] Loss: 1.83 (1.83) Time: 1.583s, 647.06/s (1.583s, 647.06/s) LR: 5.284e-03 Data: 1.207 (1.207) Train: 139 [ 50/156 ( 33%)] Loss: 1.85 (1.79) Time: 0.409s, 2504.33/s (0.432s, 2368.13/s) LR: 5.284e-03 Data: 0.027 (0.051) Train: 139 [ 100/156 ( 65%)] Loss: 1.82 (1.79) Time: 0.405s, 2530.81/s (0.420s, 2439.27/s) LR: 5.284e-03 Data: 0.027 (0.039) Train: 139 [ 150/156 ( 97%)] Loss: 1.82 (1.79) Time: 0.407s, 2515.58/s (0.416s, 2463.69/s) LR: 5.284e-03 Data: 0.026 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.433 (1.433) Loss: 2.526 ( 2.526) Acc@1: 49.316 ( 49.316) Acc@5: 72.754 ( 72.754) Test: [ 48/48] Time: 0.090 (0.329) Loss: 2.385 ( 2.613) Acc@1: 53.184 ( 49.530) Acc@5: 75.943 ( 71.530) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-137.pth.tar', 49.54399998779297) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-139.pth.tar', 49.53000002563476) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-138.pth.tar', 49.51800002807617) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-136.pth.tar', 49.44600002563477) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-134.pth.tar', 49.41799997436524) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-133.pth.tar', 49.22000005493164) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-131.pth.tar', 49.21800005249023) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-135.pth.tar', 49.1360000012207) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-132.pth.tar', 49.131999987792966) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-129.pth.tar', 49.10000000366211) Train: 140 [ 0/156 ( 1%)] Loss: 1.78 (1.78) Time: 1.716s, 596.71/s (1.716s, 596.71/s) LR: 4.370e-03 Data: 1.212 (1.212) Train: 140 [ 50/156 ( 33%)] Loss: 1.77 (1.79) Time: 0.407s, 2513.29/s (0.438s, 2335.99/s) LR: 4.370e-03 Data: 0.026 (0.051) Train: 140 [ 100/156 ( 65%)] Loss: 1.78 (1.79) Time: 0.403s, 2539.26/s (0.422s, 2425.08/s) LR: 4.370e-03 Data: 0.027 (0.039) Train: 140 [ 150/156 ( 97%)] Loss: 1.78 (1.79) Time: 0.402s, 2547.45/s (0.416s, 2459.57/s) LR: 4.370e-03 Data: 0.025 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.437 (1.437) Loss: 2.534 ( 2.534) Acc@1: 49.023 ( 49.023) Acc@5: 73.242 ( 73.242) Test: [ 48/48] Time: 0.089 (0.330) Loss: 2.393 ( 2.619) Acc@1: 52.241 ( 49.332) Acc@5: 76.179 ( 71.552) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-137.pth.tar', 49.54399998779297) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-139.pth.tar', 49.53000002563476) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-138.pth.tar', 49.51800002807617) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-136.pth.tar', 49.44600002563477) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-134.pth.tar', 49.41799997436524) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-140.pth.tar', 49.33200000366211) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-133.pth.tar', 49.22000005493164) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-131.pth.tar', 49.21800005249023) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-135.pth.tar', 49.1360000012207) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-132.pth.tar', 49.131999987792966) Train: 141 [ 0/156 ( 1%)] Loss: 1.77 (1.77) Time: 1.506s, 679.74/s (1.506s, 679.74/s) LR: 3.543e-03 Data: 1.134 (1.134) Train: 141 [ 50/156 ( 33%)] Loss: 1.81 (1.79) Time: 0.402s, 2546.45/s (0.425s, 2408.80/s) LR: 3.543e-03 Data: 0.025 (0.049) Train: 141 [ 100/156 ( 65%)] Loss: 1.79 (1.79) Time: 0.407s, 2517.82/s (0.415s, 2467.97/s) LR: 3.543e-03 Data: 0.027 (0.038) Train: 141 [ 150/156 ( 97%)] Loss: 1.82 (1.79) Time: 0.408s, 2511.83/s (0.413s, 2482.21/s) LR: 3.543e-03 Data: 0.026 (0.034) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.427 (1.427) Loss: 2.536 ( 2.536) Acc@1: 49.023 ( 49.023) Acc@5: 73.242 ( 73.242) Test: [ 48/48] Time: 0.089 (0.330) Loss: 2.391 ( 2.620) Acc@1: 53.184 ( 49.506) Acc@5: 76.297 ( 71.468) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-137.pth.tar', 49.54399998779297) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-139.pth.tar', 49.53000002563476) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-138.pth.tar', 49.51800002807617) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-141.pth.tar', 49.50600002563476) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-136.pth.tar', 49.44600002563477) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-134.pth.tar', 49.41799997436524) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-140.pth.tar', 49.33200000366211) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-133.pth.tar', 49.22000005493164) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-131.pth.tar', 49.21800005249023) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-135.pth.tar', 49.1360000012207) Train: 142 [ 0/156 ( 1%)] Loss: 1.79 (1.79) Time: 1.691s, 605.55/s (1.691s, 605.55/s) LR: 2.801e-03 Data: 1.316 (1.316) Train: 142 [ 50/156 ( 33%)] Loss: 1.75 (1.79) Time: 0.406s, 2523.12/s (0.433s, 2364.43/s) LR: 2.801e-03 Data: 0.026 (0.052) Train: 142 [ 100/156 ( 65%)] Loss: 1.77 (1.79) Time: 0.407s, 2519.02/s (0.420s, 2437.54/s) LR: 2.801e-03 Data: 0.029 (0.040) Train: 142 [ 150/156 ( 97%)] Loss: 1.76 (1.79) Time: 0.407s, 2513.66/s (0.416s, 2464.22/s) LR: 2.801e-03 Data: 0.026 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.444 (1.444) Loss: 2.537 ( 2.537) Acc@1: 49.219 ( 49.219) Acc@5: 72.949 ( 72.949) Test: [ 48/48] Time: 0.090 (0.329) Loss: 2.395 ( 2.621) Acc@1: 52.476 ( 49.454) Acc@5: 76.297 ( 71.462) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-137.pth.tar', 49.54399998779297) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-139.pth.tar', 49.53000002563476) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-138.pth.tar', 49.51800002807617) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-141.pth.tar', 49.50600002563476) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-142.pth.tar', 49.4540000415039) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-136.pth.tar', 49.44600002563477) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-134.pth.tar', 49.41799997436524) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-140.pth.tar', 49.33200000366211) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-133.pth.tar', 49.22000005493164) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-131.pth.tar', 49.21800005249023) Train: 143 [ 0/156 ( 1%)] Loss: 1.80 (1.80) Time: 1.631s, 627.84/s (1.631s, 627.84/s) LR: 2.146e-03 Data: 1.103 (1.103) Train: 143 [ 50/156 ( 33%)] Loss: 1.77 (1.78) Time: 0.405s, 2529.47/s (0.432s, 2370.11/s) LR: 2.146e-03 Data: 0.027 (0.049) Train: 143 [ 100/156 ( 65%)] Loss: 1.82 (1.78) Time: 0.402s, 2549.98/s (0.417s, 2456.26/s) LR: 2.146e-03 Data: 0.026 (0.038) Train: 143 [ 150/156 ( 97%)] Loss: 1.86 (1.78) Time: 0.396s, 2582.63/s (0.411s, 2491.05/s) LR: 2.146e-03 Data: 0.025 (0.034) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.426 (1.426) Loss: 2.525 ( 2.525) Acc@1: 49.316 ( 49.316) Acc@5: 72.754 ( 72.754) Test: [ 48/48] Time: 0.088 (0.331) Loss: 2.381 ( 2.611) Acc@1: 53.302 ( 49.566) Acc@5: 76.297 ( 71.558) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-143.pth.tar', 49.566000012207034) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-137.pth.tar', 49.54399998779297) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-139.pth.tar', 49.53000002563476) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-138.pth.tar', 49.51800002807617) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-141.pth.tar', 49.50600002563476) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-142.pth.tar', 49.4540000415039) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-136.pth.tar', 49.44600002563477) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-134.pth.tar', 49.41799997436524) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-140.pth.tar', 49.33200000366211) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-133.pth.tar', 49.22000005493164) Train: 144 [ 0/156 ( 1%)] Loss: 1.77 (1.77) Time: 1.500s, 682.44/s (1.500s, 682.44/s) LR: 1.577e-03 Data: 1.134 (1.134) Train: 144 [ 50/156 ( 33%)] Loss: 1.84 (1.78) Time: 0.404s, 2533.14/s (0.420s, 2437.04/s) LR: 1.577e-03 Data: 0.029 (0.049) Train: 144 [ 100/156 ( 65%)] Loss: 1.81 (1.78) Time: 0.402s, 2545.63/s (0.411s, 2492.80/s) LR: 1.577e-03 Data: 0.027 (0.038) Train: 144 [ 150/156 ( 97%)] Loss: 1.80 (1.78) Time: 0.403s, 2539.27/s (0.409s, 2505.10/s) LR: 1.577e-03 Data: 0.025 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.425 (1.425) Loss: 2.531 ( 2.531) Acc@1: 49.316 ( 49.316) Acc@5: 73.340 ( 73.340) Test: [ 48/48] Time: 0.089 (0.329) Loss: 2.390 ( 2.618) Acc@1: 52.948 ( 49.454) Acc@5: 75.825 ( 71.486) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-143.pth.tar', 49.566000012207034) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-137.pth.tar', 49.54399998779297) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-139.pth.tar', 49.53000002563476) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-138.pth.tar', 49.51800002807617) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-141.pth.tar', 49.50600002563476) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-144.pth.tar', 49.45400005249024) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-142.pth.tar', 49.4540000415039) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-136.pth.tar', 49.44600002563477) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-134.pth.tar', 49.41799997436524) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-140.pth.tar', 49.33200000366211) Train: 145 [ 0/156 ( 1%)] Loss: 1.79 (1.79) Time: 2.083s, 491.49/s (2.083s, 491.49/s) LR: 1.096e-03 Data: 1.221 (1.221) Train: 145 [ 50/156 ( 33%)] Loss: 1.78 (1.78) Time: 0.410s, 2497.53/s (0.442s, 2319.15/s) LR: 1.096e-03 Data: 0.027 (0.051) Train: 145 [ 100/156 ( 65%)] Loss: 1.79 (1.77) Time: 0.406s, 2521.56/s (0.424s, 2413.46/s) LR: 1.096e-03 Data: 0.027 (0.039) Train: 145 [ 150/156 ( 97%)] Loss: 1.76 (1.78) Time: 0.406s, 2520.86/s (0.418s, 2446.90/s) LR: 1.096e-03 Data: 0.024 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.442 (1.442) Loss: 2.531 ( 2.531) Acc@1: 49.023 ( 49.023) Acc@5: 73.340 ( 73.340) Test: [ 48/48] Time: 0.090 (0.330) Loss: 2.389 ( 2.617) Acc@1: 53.302 ( 49.500) Acc@5: 75.943 ( 71.544) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-143.pth.tar', 49.566000012207034) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-137.pth.tar', 49.54399998779297) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-139.pth.tar', 49.53000002563476) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-138.pth.tar', 49.51800002807617) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-141.pth.tar', 49.50600002563476) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-145.pth.tar', 49.50000001220703) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-144.pth.tar', 49.45400005249024) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-142.pth.tar', 49.4540000415039) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-136.pth.tar', 49.44600002563477) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-134.pth.tar', 49.41799997436524) Train: 146 [ 0/156 ( 1%)] Loss: 1.82 (1.82) Time: 1.550s, 660.56/s (1.550s, 660.56/s) LR: 7.014e-04 Data: 1.174 (1.174) Train: 146 [ 50/156 ( 33%)] Loss: 1.78 (1.78) Time: 0.413s, 2477.67/s (0.433s, 2367.24/s) LR: 7.014e-04 Data: 0.028 (0.050) Train: 146 [ 100/156 ( 65%)] Loss: 1.80 (1.78) Time: 0.411s, 2492.46/s (0.422s, 2429.26/s) LR: 7.014e-04 Data: 0.027 (0.039) Train: 146 [ 150/156 ( 97%)] Loss: 1.77 (1.78) Time: 0.410s, 2498.56/s (0.418s, 2451.90/s) LR: 7.014e-04 Data: 0.025 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.451 (1.451) Loss: 2.526 ( 2.526) Acc@1: 49.316 ( 49.316) Acc@5: 73.340 ( 73.340) Test: [ 48/48] Time: 0.089 (0.331) Loss: 2.386 ( 2.615) Acc@1: 52.830 ( 49.570) Acc@5: 76.061 ( 71.578) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-146.pth.tar', 49.57000006591797) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-143.pth.tar', 49.566000012207034) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-137.pth.tar', 49.54399998779297) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-139.pth.tar', 49.53000002563476) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-138.pth.tar', 49.51800002807617) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-141.pth.tar', 49.50600002563476) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-145.pth.tar', 49.50000001220703) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-144.pth.tar', 49.45400005249024) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-142.pth.tar', 49.4540000415039) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-136.pth.tar', 49.44600002563477) Train: 147 [ 0/156 ( 1%)] Loss: 1.77 (1.77) Time: 1.519s, 674.17/s (1.519s, 674.17/s) LR: 3.947e-04 Data: 1.037 (1.037) Train: 147 [ 50/156 ( 33%)] Loss: 1.76 (1.78) Time: 0.401s, 2554.72/s (0.422s, 2425.89/s) LR: 3.947e-04 Data: 0.027 (0.047) Train: 147 [ 100/156 ( 65%)] Loss: 1.71 (1.78) Time: 0.399s, 2567.71/s (0.411s, 2488.49/s) LR: 3.947e-04 Data: 0.027 (0.037) Train: 147 [ 150/156 ( 97%)] Loss: 1.81 (1.78) Time: 0.402s, 2549.77/s (0.408s, 2507.15/s) LR: 3.947e-04 Data: 0.025 (0.034) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.436 (1.436) Loss: 2.525 ( 2.525) Acc@1: 49.121 ( 49.121) Acc@5: 73.535 ( 73.535) Test: [ 48/48] Time: 0.089 (0.330) Loss: 2.381 ( 2.612) Acc@1: 52.594 ( 49.586) Acc@5: 75.943 ( 71.552) Current checkpoints: ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-147.pth.tar', 49.58600002807617) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-146.pth.tar', 49.57000006591797) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-143.pth.tar', 49.566000012207034) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-137.pth.tar', 49.54399998779297) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-139.pth.tar', 49.53000002563476) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-138.pth.tar', 49.51800002807617) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-141.pth.tar', 49.50600002563476) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-145.pth.tar', 49.50000001220703) ('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-144.pth.tar', 49.45400005249024) 
('./output/train/ImageNetTraining30.0-frac-1over8/checkpoint-142.pth.tar', 49.4540000415039) Train: 148 [ 0/156 ( 1%)] Loss: 1.78 (1.78) Time: 1.661s, 616.38/s (1.661s, 616.38/s) LR: 1.754e-04 Data: 1.288 (1.288) Train: 148 [ 50/156 ( 33%)] Loss: 1.76 (1.77) Time: 0.411s, 2492.99/s (0.431s, 2374.22/s) LR: 1.754e-04 Data: 0.027 (0.052) Train: 148 [ 100/156 ( 65%)] Loss: 1.76 (1.77) Time: 0.410s, 2498.41/s (0.421s, 2434.48/s) LR: 1.754e-04 Data: 0.027 (0.040) Train: 148 [ 150/156 ( 97%)] Loss: 1.77 (1.78) Time: 0.409s, 2504.45/s (0.417s, 2455.18/s) LR: 1.754e-04 Data: 0.026 (0.036) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.431 (1.431) Loss: 2.540 ( 2.540) Acc@1: 48.926 ( 48.926) Acc@5: 73.047 ( 73.047) Test: [ 48/48] Time: 0.089 (0.330) Loss: 2.395 ( 2.624) Acc@1: 52.830 ( 49.420) Acc@5: 76.179 ( 71.456) Train: 149 [ 0/156 ( 1%)] Loss: 1.78 (1.78) Time: 1.723s, 594.24/s (1.723s, 594.24/s) LR: 4.386e-05 Data: 1.198 (1.198) Train: 149 [ 50/156 ( 33%)] Loss: 1.76 (1.78) Time: 0.399s, 2566.59/s (0.427s, 2399.51/s) LR: 4.386e-05 Data: 0.027 (0.050) Train: 149 [ 100/156 ( 65%)] Loss: 1.72 (1.78) Time: 0.402s, 2547.98/s (0.414s, 2473.69/s) LR: 4.386e-05 Data: 0.027 (0.039) Train: 149 [ 150/156 ( 97%)] Loss: 1.79 (1.78) Time: 0.403s, 2540.02/s (0.411s, 2494.44/s) LR: 4.386e-05 Data: 0.026 (0.035) Distributing BatchNorm running means and vars Test: [ 0/48] Time: 1.434 (1.434) Loss: 2.535 ( 2.535) Acc@1: 48.828 ( 48.828) Acc@5: 73.047 ( 73.047) Test: [ 48/48] Time: 0.090 (0.332) Loss: 2.394 ( 2.624) Acc@1: 53.184 ( 49.446) Acc@5: 75.943 ( 71.480) *** Best metric: 49.58600002807617 (epoch 147)
--result
[
    { "epoch": 142, "train": { "loss": 1.786149024963379 }, "validation": { "loss": 2.6212486039733887, "top1": 49.4540000415039, "top5": 71.46200017578126 } },
    { "epoch": 144, "train": { "loss": 1.7819733619689941 }, "validation": { "loss": 2.6176555850219727, "top1": 49.45400005249024, "top5": 71.48599997070312 } },
    { "epoch": 145, "train": { "loss": 1.7763924598693848 }, "validation": { "loss": 2.6170238582611085, "top1": 49.50000001220703, "top5": 71.54400002197265 } },
    { "epoch": 141, "train": { "loss": 1.7903223037719727 }, "validation": { "loss": 2.619572492599487, "top1": 49.50600002563476, "top5": 71.46800004638672 } },
    { "epoch": 138, "train": { "loss": 1.7871060371398926 }, "validation": { "loss": 2.612201716156006, "top1": 49.51800002807617, "top5": 71.55199994628906 } },
    { "epoch": 139, "train": { "loss": 1.7861570119857788 }, "validation": { "loss": 2.612953329772949, "top1": 49.53000002563476, "top5": 71.53000002197265 } },
    { "epoch": 137, "train": { "loss": 1.7929143905639648 }, "validation": { "loss": 2.6078895418548584, "top1": 49.54399998779297, "top5": 71.66200002197266 } },
    { "epoch": 143, "train": { "loss": 1.779794454574585 }, "validation": { "loss": 2.610794797821045, "top1": 49.566000012207034, "top5": 71.55800004638672 } },
    { "epoch": 146, "train": { "loss": 1.7819963693618774 }, "validation": { "loss": 2.6145646882629396, "top1": 49.57000006591797, "top5": 71.5780000732422 } },
    { "epoch": 147, "train": { "loss": 1.7816953659057617 }, "validation": { "loss": 2.6121715985107423, "top1": 49.58600002807617, "top5": 71.55200002197266 } }
]
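The run therefore tops out at 49.586% top-1 accuracy at epoch 147, and the --result block above lists the ten best epochs with their train loss and validation loss/top-1/top-5. Below is a minimal post-processing sketch, not part of the training script itself: it assumes the printed JSON array has been copied into a hypothetical results.json file and that the saved checkpoints are ordinary torch-serialized dictionaries following the checkpoint-<epoch>.pth.tar naming seen in the log.

import json
import torch

# Output directory used by this run (taken from the log above).
CHECKPOINT_DIR = "./output/train/ImageNetTraining30.0-frac-1over8"

# Hypothetical file holding the JSON array printed after "--result".
with open("results.json") as f:
    results = json.load(f)

# Each entry carries per-epoch train loss and validation loss/top1/top5;
# pick the epoch with the highest validation top-1.
best = max(results, key=lambda r: r["validation"]["top1"])
print(f"best epoch: {best['epoch']}  top-1: {best['validation']['top1']:.3f}")

# Load the matching checkpoint file for further inspection or evaluation.
ckpt = torch.load(f"{CHECKPOINT_DIR}/checkpoint-{best['epoch']}.pth.tar", map_location="cpu")
print(type(ckpt), list(ckpt.keys()) if isinstance(ckpt, dict) else "")

For this log the selected entry is epoch 147, matching the "*** Best metric" line, so the file loaded would be checkpoint-147.pth.tar from the output directory.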