[2023-03-02 09:58:26,873][08626] Saving configuration to /home/gpu/train_dir/default_experiment/config.json...
[2023-03-02 09:58:26,873][08626] Rollout worker 0 uses device cpu
[2023-03-02 09:58:26,873][08626] Rollout worker 1 uses device cpu
[2023-03-02 09:58:26,874][08626] Rollout worker 2 uses device cpu
[2023-03-02 09:58:26,874][08626] Rollout worker 3 uses device cpu
[2023-03-02 09:58:26,874][08626] Rollout worker 4 uses device cpu
[2023-03-02 09:58:26,874][08626] Rollout worker 5 uses device cpu
[2023-03-02 09:58:26,874][08626] Rollout worker 6 uses device cpu
[2023-03-02 09:58:26,874][08626] Rollout worker 7 uses device cpu
[2023-03-02 09:58:26,902][08626] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-02 09:58:26,902][08626] InferenceWorker_p0-w0: min num requests: 2
[2023-03-02 09:58:26,918][08626] Starting all processes...
[2023-03-02 09:58:26,919][08626] Starting process learner_proc0
[2023-03-02 09:58:27,574][08626] Starting all processes...
[2023-03-02 09:58:27,577][08626] Starting process inference_proc0-0
[2023-03-02 09:58:27,577][08626] Starting process rollout_proc0
[2023-03-02 09:58:27,577][08684] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-02 09:58:27,578][08684] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2023-03-02 09:58:27,578][08626] Starting process rollout_proc1
[2023-03-02 09:58:27,578][08626] Starting process rollout_proc2
[2023-03-02 09:58:27,578][08626] Starting process rollout_proc3
[2023-03-02 09:58:27,580][08626] Starting process rollout_proc4
[2023-03-02 09:58:27,582][08626] Starting process rollout_proc5
[2023-03-02 09:58:27,582][08626] Starting process rollout_proc6
[2023-03-02 09:58:27,583][08626] Starting process rollout_proc7
[2023-03-02 09:58:27,612][08684] Num visible devices: 1
[2023-03-02 09:58:27,637][08684] Starting seed is not provided
[2023-03-02 09:58:27,637][08684] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-02 09:58:27,638][08684] Initializing actor-critic model on device cuda:0
[2023-03-02 09:58:27,638][08684] RunningMeanStd input shape: (3, 72, 128)
[2023-03-02 09:58:27,639][08684] RunningMeanStd input shape: (1,)
[2023-03-02 09:58:27,651][08684] ConvEncoder: input_channels=3
[2023-03-02 09:58:27,870][08684] Conv encoder output size: 512
[2023-03-02 09:58:27,870][08684] Policy head output size: 512
[2023-03-02 09:58:27,881][08684] Created Actor Critic model with architecture:
[2023-03-02 09:58:27,881][08684] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
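The encoder output size reported below the architecture can be checked with simple shape arithmetic. A sketch assuming Sample Factory's default conv_head hyperparameters (three unpadded Conv2d layers with kernel/stride 8/4, 4/2, 3/2 and 32/64/128 output channels; the log does not print these values, so they are assumptions here):

```python
def conv2d_out(size, kernel, stride):
    # Output extent of a valid (unpadded) convolution along one axis.
    return (size - kernel) // stride + 1

# Observation shape from the log: (3, 72, 128) = (channels, height, width).
h, w = 72, 128
channels = 3
# (kernel, stride, out_channels) per layer -- assumed defaults, not logged.
layers = [(8, 4, 32), (4, 2, 64), (3, 2, 128)]

for kernel, stride, out_ch in layers:
    h, w = conv2d_out(h, kernel, stride), conv2d_out(w, kernel, stride)
    channels = out_ch

flat = channels * h * w  # flattened features fed to the mlp_layers Linear
print(h, w, channels, flat)  # → 3 6 128 2304
```

Under these assumed defaults the mlp_layers Linear would map the 2304 flattened features to 512 dimensions, consistent with the "Conv encoder output size: 512" line in the log.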
[2023-03-02 09:58:28,670][08715] Worker 2 uses CPU cores [4, 5]
[2023-03-02 09:58:28,692][08713] Worker 0 uses CPU cores [0, 1]
[2023-03-02 09:58:28,721][08717] Worker 4 uses CPU cores [8, 9]
[2023-03-02 09:58:28,722][08720] Worker 7 uses CPU cores [14, 15]
[2023-03-02 09:58:28,727][08719] Worker 6 uses CPU cores [12, 13]
[2023-03-02 09:58:28,750][08718] Worker 5 uses CPU cores [10, 11]
[2023-03-02 09:58:28,752][08716] Worker 3 uses CPU cores [6, 7]
[2023-03-02 09:58:28,759][08714] Worker 1 uses CPU cores [2, 3]
[2023-03-02 09:58:28,776][08712] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-02 09:58:28,777][08712] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2023-03-02 09:58:28,819][08712] Num visible devices: 1
[2023-03-02 09:58:29,994][08684] Using optimizer <class 'torch.optim.adam.Adam'>
[2023-03-02 09:58:29,996][08684] No checkpoints found
[2023-03-02 09:58:29,996][08684] Did not load from checkpoint, starting from scratch!
[2023-03-02 09:58:29,996][08684] Initialized policy 0 weights for model version 0
[2023-03-02 09:58:29,999][08684] LearnerWorker_p0 finished initialization!
[2023-03-02 09:58:29,999][08684] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-02 09:58:30,101][08712] Unhandled exception CUDA error: invalid resource handle
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1. in evt loop inference_proc0-0_evt_loop
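The exception above is reported asynchronously, so the stack trace may point at the wrong call. A minimal sketch (not part of this run) of the debugging step the message suggests, which makes kernel launches synchronous so the error surfaces at the failing call:

```python
import os

# CUDA_LAUNCH_BLOCKING must be set before the first CUDA context is created
# (i.e. before importing torch and touching the GPU), or it has no effect.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

# With this set, each kernel launch blocks until completion, so an error such
# as "invalid resource handle" is raised at the offending call site.
print(os.environ["CUDA_LAUNCH_BLOCKING"])  # → 1
```

Equivalently, the variable can be exported in the shell before launching the training script.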
[2023-03-02 09:58:30,375][08626] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 09:58:35,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 09:58:40,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 09:58:45,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 09:58:46,897][08626] Heartbeat connected on Batcher_0
[2023-03-02 09:58:46,899][08626] Heartbeat connected on LearnerWorker_p0
[2023-03-02 09:58:46,905][08626] Heartbeat connected on RolloutWorker_w0
[2023-03-02 09:58:46,906][08626] Heartbeat connected on RolloutWorker_w1
[2023-03-02 09:58:46,908][08626] Heartbeat connected on RolloutWorker_w2
[2023-03-02 09:58:46,910][08626] Heartbeat connected on RolloutWorker_w3
[2023-03-02 09:58:46,912][08626] Heartbeat connected on RolloutWorker_w4
[2023-03-02 09:58:46,914][08626] Heartbeat connected on RolloutWorker_w5
[2023-03-02 09:58:46,916][08626] Heartbeat connected on RolloutWorker_w6
[2023-03-02 09:58:46,918][08626] Heartbeat connected on RolloutWorker_w7
[2023-03-02 09:58:50,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 09:58:55,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 09:59:00,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 09:59:05,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 09:59:10,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 09:59:15,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 09:59:20,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 09:59:25,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 09:59:30,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 09:59:35,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 09:59:40,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 09:59:45,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 09:59:50,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 09:59:55,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:00:00,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:00:05,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:00:10,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:00:15,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:00:20,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:00:25,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:00:25,377][08684] Saving /home/gpu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-03-02 10:00:30,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:00:35,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:00:40,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:00:45,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:00:50,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:00:55,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:01:00,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:01:05,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:01:10,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:01:15,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:01:20,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:01:25,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:01:30,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:01:35,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:01:40,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:01:45,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:01:50,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:01:55,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:02:00,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:02:05,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:02:10,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:02:15,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:02:20,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:02:25,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:02:25,376][08684] Saving /home/gpu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-03-02 10:02:30,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:02:35,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:02:40,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:02:45,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:02:50,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:02:55,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:03:00,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:03:05,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:03:10,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:03:15,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:03:20,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:03:25,375][08626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:03:27,918][08626] Keyboard interrupt detected in the event loop EvtLoop [Runner_EvtLoop, process=main process 8626], exiting...
[2023-03-02 10:03:27,919][08717] Stopping RolloutWorker_w4...
[2023-03-02 10:03:27,919][08714] Stopping RolloutWorker_w1...
[2023-03-02 10:03:27,919][08626] Runner profile tree view:
main_loop: 301.0005
[2023-03-02 10:03:27,919][08718] Stopping RolloutWorker_w5...
[2023-03-02 10:03:27,919][08715] Stopping RolloutWorker_w2...
[2023-03-02 10:03:27,919][08717] Loop rollout_proc4_evt_loop terminating...
[2023-03-02 10:03:27,919][08626] Collected {0: 0}, FPS: 0.0
[2023-03-02 10:03:27,919][08719] Stopping RolloutWorker_w6...
[2023-03-02 10:03:27,919][08713] Stopping RolloutWorker_w0...
[2023-03-02 10:03:27,919][08720] Stopping RolloutWorker_w7...
[2023-03-02 10:03:27,919][08714] Loop rollout_proc1_evt_loop terminating...
[2023-03-02 10:03:27,919][08716] Stopping RolloutWorker_w3...
[2023-03-02 10:03:27,919][08684] Stopping Batcher_0...
[2023-03-02 10:03:27,919][08718] Loop rollout_proc5_evt_loop terminating...
[2023-03-02 10:03:27,919][08713] Loop rollout_proc0_evt_loop terminating...
[2023-03-02 10:03:27,919][08715] Loop rollout_proc2_evt_loop terminating...
[2023-03-02 10:03:27,919][08719] Loop rollout_proc6_evt_loop terminating...
[2023-03-02 10:03:27,919][08720] Loop rollout_proc7_evt_loop terminating...
[2023-03-02 10:03:27,919][08684] Loop batcher_evt_loop terminating...
[2023-03-02 10:03:27,919][08716] Loop rollout_proc3_evt_loop terminating...
[2023-03-02 10:03:27,920][08684] Saving /home/gpu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-03-02 10:03:27,945][08684] Stopping LearnerWorker_p0...
[2023-03-02 10:03:27,945][08684] Loop learner_proc0_evt_loop terminating...
[2023-03-02 10:03:28,008][08626] Loading existing experiment configuration from /home/gpu/train_dir/default_experiment/config.json
[2023-03-02 10:03:28,009][08626] Overriding arg 'num_workers' with value 1 passed from command line
[2023-03-02 10:03:28,009][08626] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-03-02 10:03:28,009][08626] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-03-02 10:03:28,009][08626] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-03-02 10:03:28,009][08626] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-03-02 10:03:28,009][08626] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
[2023-03-02 10:03:28,009][08626] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-03-02 10:03:28,009][08626] Adding new argument 'push_to_hub'=False that is not in the saved config file!
[2023-03-02 10:03:28,010][08626] Adding new argument 'hf_repository'=None that is not in the saved config file!
[2023-03-02 10:03:28,010][08626] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-03-02 10:03:28,010][08626] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-03-02 10:03:28,010][08626] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-03-02 10:03:28,010][08626] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-03-02 10:03:28,010][08626] Using frameskip 1 and render_action_repeat=4 for evaluation
[2023-03-02 10:03:28,020][08626] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-03-02 10:03:28,021][08626] RunningMeanStd input shape: (3, 72, 128)
[2023-03-02 10:03:28,023][08626] RunningMeanStd input shape: (1,)
[2023-03-02 10:03:28,035][08626] ConvEncoder: input_channels=3
[2023-03-02 10:03:28,222][08626] Conv encoder output size: 512
[2023-03-02 10:03:28,223][08626] Policy head output size: 512
[2023-03-02 10:03:29,855][08626] Loading state from checkpoint /home/gpu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-03-02 10:03:30,602][08626] Num frames 100...
[2023-03-02 10:03:30,696][08626] Num frames 200...
[2023-03-02 10:03:30,791][08626] Num frames 300...
[2023-03-02 10:03:30,922][08626] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
[2023-03-02 10:03:30,922][08626] Avg episode reward: 3.840, avg true_objective: 3.840
[2023-03-02 10:03:30,940][08626] Num frames 400...
[2023-03-02 10:03:31,034][08626] Num frames 500...
[2023-03-02 10:03:31,127][08626] Num frames 600...
[2023-03-02 10:03:31,222][08626] Num frames 700...
[2023-03-02 10:03:31,338][08626] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
[2023-03-02 10:03:31,338][08626] Avg episode reward: 3.840, avg true_objective: 3.840
[2023-03-02 10:03:31,370][08626] Num frames 800...
[2023-03-02 10:03:31,465][08626] Num frames 900...
[2023-03-02 10:03:31,560][08626] Num frames 1000...
[2023-03-02 10:03:31,654][08626] Num frames 1100...
[2023-03-02 10:03:31,748][08626] Num frames 1200...
[2023-03-02 10:03:31,816][08626] Avg episode rewards: #0: 4.387, true rewards: #0: 4.053
[2023-03-02 10:03:31,816][08626] Avg episode reward: 4.387, avg true_objective: 4.053
[2023-03-02 10:03:31,895][08626] Num frames 1300...
[2023-03-02 10:03:31,993][08626] Num frames 1400...
[2023-03-02 10:03:32,088][08626] Num frames 1500...
[2023-03-02 10:03:32,185][08626] Num frames 1600...
[2023-03-02 10:03:32,236][08626] Avg episode rewards: #0: 4.250, true rewards: #0: 4.000
[2023-03-02 10:03:32,236][08626] Avg episode reward: 4.250, avg true_objective: 4.000
[2023-03-02 10:03:32,332][08626] Num frames 1700...
[2023-03-02 10:03:32,427][08626] Num frames 1800...
[2023-03-02 10:03:32,520][08626] Num frames 1900...
[2023-03-02 10:03:32,614][08626] Num frames 2000...
[2023-03-02 10:03:32,711][08626] Avg episode rewards: #0: 4.496, true rewards: #0: 4.096
[2023-03-02 10:03:32,711][08626] Avg episode reward: 4.496, avg true_objective: 4.096
[2023-03-02 10:03:32,760][08626] Num frames 2100...
[2023-03-02 10:03:32,853][08626] Num frames 2200...
[2023-03-02 10:03:32,947][08626] Num frames 2300...
[2023-03-02 10:03:33,041][08626] Num frames 2400...
[2023-03-02 10:03:33,123][08626] Avg episode rewards: #0: 4.387, true rewards: #0: 4.053
[2023-03-02 10:03:33,124][08626] Avg episode reward: 4.387, avg true_objective: 4.053
[2023-03-02 10:03:33,188][08626] Num frames 2500...
[2023-03-02 10:03:33,281][08626] Num frames 2600...
[2023-03-02 10:03:33,377][08626] Num frames 2700...
[2023-03-02 10:03:33,471][08626] Num frames 2800...
[2023-03-02 10:03:33,538][08626] Avg episode rewards: #0: 4.309, true rewards: #0: 4.023
[2023-03-02 10:03:33,539][08626] Avg episode reward: 4.309, avg true_objective: 4.023
[2023-03-02 10:03:33,618][08626] Num frames 2900...
[2023-03-02 10:03:33,715][08626] Num frames 3000...
[2023-03-02 10:03:33,810][08626] Num frames 3100...
[2023-03-02 10:03:33,906][08626] Num frames 3200...
[2023-03-02 10:03:34,022][08626] Avg episode rewards: #0: 4.455, true rewards: #0: 4.080
[2023-03-02 10:03:34,022][08626] Avg episode reward: 4.455, avg true_objective: 4.080
[2023-03-02 10:03:34,061][08626] Num frames 3300...
[2023-03-02 10:03:34,168][08626] Num frames 3400...
[2023-03-02 10:03:34,271][08626] Num frames 3500...
[2023-03-02 10:03:34,366][08626] Num frames 3600...
[2023-03-02 10:03:34,465][08626] Avg episode rewards: #0: 4.387, true rewards: #0: 4.053
[2023-03-02 10:03:34,465][08626] Avg episode reward: 4.387, avg true_objective: 4.053
[2023-03-02 10:03:34,520][08626] Num frames 3700...
[2023-03-02 10:03:34,622][08626] Num frames 3800...
[2023-03-02 10:03:34,726][08626] Num frames 3900...
[2023-03-02 10:03:34,829][08626] Num frames 4000...
[2023-03-02 10:03:34,915][08626] Avg episode rewards: #0: 4.332, true rewards: #0: 4.032
[2023-03-02 10:03:34,915][08626] Avg episode reward: 4.332, avg true_objective: 4.032
[2023-03-02 10:03:38,820][08626] Replay video saved to /home/gpu/train_dir/default_experiment/replay.mp4!
[2023-03-02 10:46:22,903][08626] Loading existing experiment configuration from /home/gpu/train_dir/default_experiment/config.json
[2023-03-02 10:46:22,903][08626] Overriding arg 'num_workers' with value 1 passed from command line
[2023-03-02 10:46:22,903][08626] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-03-02 10:46:22,904][08626] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-03-02 10:46:22,904][08626] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-03-02 10:46:22,904][08626] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-03-02 10:46:22,904][08626] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2023-03-02 10:46:22,904][08626] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-03-02 10:46:22,904][08626] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2023-03-02 10:46:22,904][08626] Adding new argument 'hf_repository'='nhiro3303/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2023-03-02 10:46:22,904][08626] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-03-02 10:46:22,904][08626] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-03-02 10:46:22,904][08626] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-03-02 10:46:22,904][08626] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-03-02 10:46:22,905][08626] Using frameskip 1 and render_action_repeat=4 for evaluation
[2023-03-02 10:46:22,908][08626] RunningMeanStd input shape: (3, 72, 128)
[2023-03-02 10:46:22,908][08626] RunningMeanStd input shape: (1,)
[2023-03-02 10:46:22,914][08626] ConvEncoder: input_channels=3
[2023-03-02 10:46:22,938][08626] Conv encoder output size: 512
[2023-03-02 10:46:22,938][08626] Policy head output size: 512
[2023-03-02 10:46:22,958][08626] Loading state from checkpoint /home/gpu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-03-02 10:46:23,267][08626] Num frames 100...
[2023-03-02 10:46:23,410][08626] Num frames 200...
[2023-03-02 10:46:23,551][08626] Num frames 300...
[2023-03-02 10:46:23,718][08626] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
[2023-03-02 10:46:23,719][08626] Avg episode reward: 3.840, avg true_objective: 3.840
[2023-03-02 10:46:23,745][08626] Num frames 400...
[2023-03-02 10:46:23,884][08626] Num frames 500...
[2023-03-02 10:46:24,021][08626] Num frames 600...
[2023-03-02 10:46:24,177][08626] Num frames 700...
[2023-03-02 10:46:24,330][08626] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
[2023-03-02 10:46:24,330][08626] Avg episode reward: 3.840, avg true_objective: 3.840
[2023-03-02 10:46:24,372][08626] Num frames 800...
[2023-03-02 10:46:24,513][08626] Num frames 900...
[2023-03-02 10:46:24,647][08626] Num frames 1000...
[2023-03-02 10:46:24,777][08626] Num frames 1100...
[2023-03-02 10:46:24,901][08626] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
[2023-03-02 10:46:24,901][08626] Avg episode reward: 3.840, avg true_objective: 3.840
[2023-03-02 10:46:24,973][08626] Num frames 1200...
[2023-03-02 10:46:25,108][08626] Num frames 1300...
[2023-03-02 10:46:25,248][08626] Num frames 1400...
[2023-03-02 10:46:25,393][08626] Num frames 1500...
[2023-03-02 10:46:25,525][08626] Num frames 1600...
[2023-03-02 10:46:25,575][08626] Avg episode rewards: #0: 4.250, true rewards: #0: 4.000
[2023-03-02 10:46:25,576][08626] Avg episode reward: 4.250, avg true_objective: 4.000
[2023-03-02 10:46:25,707][08626] Num frames 1700...
[2023-03-02 10:46:25,842][08626] Num frames 1800...
[2023-03-02 10:46:25,977][08626] Num frames 1900...
[2023-03-02 10:46:26,145][08626] Avg episode rewards: #0: 4.168, true rewards: #0: 3.968
[2023-03-02 10:46:26,145][08626] Avg episode reward: 4.168, avg true_objective: 3.968
[2023-03-02 10:46:26,166][08626] Num frames 2000...
[2023-03-02 10:46:26,299][08626] Num frames 2100...
[2023-03-02 10:46:26,432][08626] Num frames 2200...
[2023-03-02 10:46:26,557][08626] Num frames 2300...
[2023-03-02 10:46:26,708][08626] Avg episode rewards: #0: 4.113, true rewards: #0: 3.947
[2023-03-02 10:46:26,708][08626] Avg episode reward: 4.113, avg true_objective: 3.947
[2023-03-02 10:46:26,755][08626] Num frames 2400...
[2023-03-02 10:46:26,883][08626] Num frames 2500...
[2023-03-02 10:46:27,016][08626] Num frames 2600...
[2023-03-02 10:46:27,157][08626] Num frames 2700...
[2023-03-02 10:46:27,284][08626] Avg episode rewards: #0: 4.074, true rewards: #0: 3.931
[2023-03-02 10:46:27,284][08626] Avg episode reward: 4.074, avg true_objective: 3.931
[2023-03-02 10:46:27,354][08626] Num frames 2800...
[2023-03-02 10:46:27,487][08626] Num frames 2900...
[2023-03-02 10:46:27,620][08626] Num frames 3000...
[2023-03-02 10:46:27,761][08626] Num frames 3100...
[2023-03-02 10:46:27,862][08626] Avg episode rewards: #0: 4.045, true rewards: #0: 3.920
[2023-03-02 10:46:27,862][08626] Avg episode reward: 4.045, avg true_objective: 3.920
[2023-03-02 10:46:27,952][08626] Num frames 3200...
[2023-03-02 10:46:28,092][08626] Num frames 3300...
[2023-03-02 10:46:28,231][08626] Num frames 3400...
[2023-03-02 10:46:28,363][08626] Num frames 3500...
[2023-03-02 10:46:28,530][08626] Avg episode rewards: #0: 4.204, true rewards: #0: 3.982
[2023-03-02 10:46:28,531][08626] Avg episode reward: 4.204, avg true_objective: 3.982
[2023-03-02 10:46:28,558][08626] Num frames 3600...
[2023-03-02 10:46:28,700][08626] Num frames 3700...
[2023-03-02 10:46:28,836][08626] Num frames 3800...
[2023-03-02 10:46:28,968][08626] Num frames 3900...
[2023-03-02 10:46:29,103][08626] Num frames 4000...
[2023-03-02 10:46:29,203][08626] Avg episode rewards: #0: 4.332, true rewards: #0: 4.032
[2023-03-02 10:46:29,203][08626] Avg episode reward: 4.332, avg true_objective: 4.032
[2023-03-02 10:46:33,125][08626] Replay video saved to /home/gpu/train_dir/default_experiment/replay.mp4!
[2023-03-02 10:46:42,501][08626] The model has been pushed to https://huggingface.co/nhiro3303/rl_course_vizdoom_health_gathering_supreme
[2023-03-02 10:54:01,490][09136] Saving configuration to /home/gpu/train_dir/default_experiment/config.json...
[2023-03-02 10:54:01,491][09136] Rollout worker 0 uses device cpu
[2023-03-02 10:54:01,491][09136] Rollout worker 1 uses device cpu
[2023-03-02 10:54:01,491][09136] Rollout worker 2 uses device cpu
[2023-03-02 10:54:01,491][09136] Rollout worker 3 uses device cpu
[2023-03-02 10:54:01,491][09136] Rollout worker 4 uses device cpu
[2023-03-02 10:54:01,491][09136] Rollout worker 5 uses device cpu
[2023-03-02 10:54:01,491][09136] Rollout worker 6 uses device cpu
[2023-03-02 10:54:01,491][09136] Rollout worker 7 uses device cpu
[2023-03-02 10:54:01,520][09136] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-02 10:54:01,520][09136] InferenceWorker_p0-w0: min num requests: 2
[2023-03-02 10:54:01,536][09136] Starting all processes...
[2023-03-02 10:54:01,536][09136] Starting process learner_proc0
[2023-03-02 10:54:02,191][09136] Starting all processes...
[2023-03-02 10:54:02,194][09136] Starting process inference_proc0-0
[2023-03-02 10:54:02,194][09136] Starting process rollout_proc0
[2023-03-02 10:54:02,195][09194] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-02 10:54:02,195][09194] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2023-03-02 10:54:02,194][09136] Starting process rollout_proc1
[2023-03-02 10:54:02,195][09136] Starting process rollout_proc2
[2023-03-02 10:54:02,197][09136] Starting process rollout_proc3
[2023-03-02 10:54:02,197][09136] Starting process rollout_proc4
[2023-03-02 10:54:02,199][09136] Starting process rollout_proc5
[2023-03-02 10:54:02,199][09136] Starting process rollout_proc6
[2023-03-02 10:54:02,200][09136] Starting process rollout_proc7
[2023-03-02 10:54:02,234][09194] Num visible devices: 1
[2023-03-02 10:54:02,264][09194] Starting seed is not provided
[2023-03-02 10:54:02,265][09194] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-02 10:54:02,265][09194] Initializing actor-critic model on device cuda:0
[2023-03-02 10:54:02,265][09194] RunningMeanStd input shape: (3, 72, 128)
[2023-03-02 10:54:02,265][09194] RunningMeanStd input shape: (1,)
[2023-03-02 10:54:02,275][09194] ConvEncoder: input_channels=3
[2023-03-02 10:54:02,483][09194] Conv encoder output size: 512
[2023-03-02 10:54:02,483][09194] Policy head output size: 512
[2023-03-02 10:54:02,496][09194] Created Actor Critic model with architecture:
[2023-03-02 10:54:02,496][09194] ActorCriticSharedWeights(
(obs_normalizer): ObservationNormalizer(
(running_mean_std): RunningMeanStdDictInPlace(
(running_mean_std): ModuleDict(
(obs): RunningMeanStdInPlace()
)
)
)
(returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
(encoder): VizdoomEncoder(
(basic_encoder): ConvEncoder(
(enc): RecursiveScriptModule(
original_name=ConvEncoderImpl
(conv_head): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Conv2d)
(1): RecursiveScriptModule(original_name=ELU)
(2): RecursiveScriptModule(original_name=Conv2d)
(3): RecursiveScriptModule(original_name=ELU)
(4): RecursiveScriptModule(original_name=Conv2d)
(5): RecursiveScriptModule(original_name=ELU)
)
(mlp_layers): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Linear)
(1): RecursiveScriptModule(original_name=ELU)
)
)
)
)
(core): ModelCoreRNN(
(core): GRU(512, 512)
)
(decoder): MlpDecoder(
(mlp): Identity()
)
(critic_linear): Linear(in_features=512, out_features=1, bias=True)
(action_parameterization): ActionParameterizationDefault(
(distribution_linear): Linear(in_features=512, out_features=5, bias=True)
)
)
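The architecture dump above shows a three-layer conv head followed by an MLP that produces the reported 512-dimensional encoder output from (3, 72, 128) observations. The log does not print kernel sizes or strides, so the values below are an assumption based on Sample Factory's default `convnet_simple` configuration ([32, 8, 4], [64, 4, 2], [128, 3, 2]); this sketch just walks the standard Conv2d output-size formula to show how the flattened feature count feeding the final Linear layer comes about:

```python
def conv_out(size: int, kernel: int, stride: int) -> int:
    # Conv2d output size with no padding: floor((size - kernel) / stride) + 1
    return (size - kernel) // stride + 1

# Assumed conv head (out_channels, kernel, stride) -- default convnet_simple,
# not read from this log.
layers = [(32, 8, 4), (64, 4, 2), (128, 3, 2)]

h, w = 72, 128  # observation height/width from "RunningMeanStd input shape: (3, 72, 128)"
channels = 3
for channels, kernel, stride in layers:
    h, w = conv_out(h, kernel, stride), conv_out(w, kernel, stride)

flat = channels * h * w  # flattened conv features fed into mlp_layers
print(channels, h, w, flat)  # 128 3 6 2304

# The mlp_layers Linear then maps this down to the printed encoder size:
encoder_out = 512
print(encoder_out)  # matches "Conv encoder output size: 512"
```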
[2023-03-02 10:54:03,222][09224] Worker 2 uses CPU cores [4, 5]
[2023-03-02 10:54:03,262][09230] Worker 7 uses CPU cores [14, 15]
[2023-03-02 10:54:03,280][09228] Worker 6 uses CPU cores [12, 13]
[2023-03-02 10:54:03,308][09226] Worker 4 uses CPU cores [8, 9]
[2023-03-02 10:54:03,329][09223] Worker 0 uses CPU cores [0, 1]
[2023-03-02 10:54:03,344][09229] Worker 1 uses CPU cores [2, 3]
[2023-03-02 10:54:03,353][09225] Worker 3 uses CPU cores [6, 7]
[2023-03-02 10:54:03,361][09222] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-02 10:54:03,362][09222] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2023-03-02 10:54:03,375][09222] Num visible devices: 1
[2023-03-02 10:54:03,385][09227] Worker 5 uses CPU cores [10, 11]
[2023-03-02 10:54:04,555][09194] Using optimizer <class 'torch.optim.adam.Adam'>
[2023-03-02 10:54:04,556][09194] Loading state from checkpoint /home/gpu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-03-02 10:54:04,562][09194] Loading model from checkpoint
[2023-03-02 10:54:04,564][09194] Loaded experiment state at self.train_step=0, self.env_steps=0
[2023-03-02 10:54:04,564][09194] Initialized policy 0 weights for model version 0
[2023-03-02 10:54:04,567][09194] LearnerWorker_p0 finished initialization!
[2023-03-02 10:54:04,567][09194] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-02 10:54:04,676][09222] Unhandled exception CUDA error: invalid resource handle in evt loop inference_proc0-0_evt_loop
[2023-03-02 10:54:04,992][09136] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:54:09,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:54:14,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:54:19,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:54:21,515][09136] Heartbeat connected on Batcher_0
[2023-03-02 10:54:21,517][09136] Heartbeat connected on LearnerWorker_p0
[2023-03-02 10:54:21,522][09136] Heartbeat connected on RolloutWorker_w0
[2023-03-02 10:54:21,524][09136] Heartbeat connected on RolloutWorker_w1
[2023-03-02 10:54:21,526][09136] Heartbeat connected on RolloutWorker_w2
[2023-03-02 10:54:21,528][09136] Heartbeat connected on RolloutWorker_w3
[2023-03-02 10:54:21,530][09136] Heartbeat connected on RolloutWorker_w4
[2023-03-02 10:54:21,532][09136] Heartbeat connected on RolloutWorker_w5
[2023-03-02 10:54:21,534][09136] Heartbeat connected on RolloutWorker_w6
[2023-03-02 10:54:21,535][09136] Heartbeat connected on RolloutWorker_w7
[2023-03-02 10:54:24,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:54:29,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:54:34,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:54:39,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:54:44,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:54:49,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:54:54,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:54:59,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:55:04,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:55:09,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:55:14,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:55:19,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:55:24,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:55:29,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:55:34,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:55:39,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:55:44,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:55:49,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:55:54,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:55:59,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:55:59,993][09194] Saving /home/gpu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-03-02 10:56:04,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:56:09,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:56:14,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:56:19,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:56:24,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:56:29,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:56:34,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:56:39,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:56:44,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:56:49,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:56:54,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:56:59,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:57:04,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:57:09,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:57:14,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:57:19,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:57:24,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:57:29,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:57:34,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:57:39,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:57:44,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:57:49,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:57:54,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:57:59,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:57:59,993][09194] Saving /home/gpu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-03-02 10:58:04,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:58:09,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:58:14,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:58:19,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:58:24,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:58:29,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:58:34,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:58:39,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:58:44,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:58:49,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:58:54,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:58:59,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:59:04,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:59:09,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:59:14,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:59:19,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:59:24,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:59:29,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:59:34,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:59:39,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:59:44,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:59:49,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:59:54,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:59:59,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 10:59:59,993][09194] Saving /home/gpu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-03-02 11:00:04,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:00:09,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:00:14,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:00:19,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:00:24,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:00:29,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:00:34,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:00:39,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:00:44,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:00:49,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:00:54,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:00:59,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:01:04,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:01:09,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:01:14,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:01:19,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:01:24,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:01:29,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:01:34,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:01:39,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:01:44,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:01:49,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:01:54,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:01:59,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:01:59,993][09194] Saving /home/gpu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-03-02 11:02:04,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:02:09,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:02:14,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:02:19,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:02:24,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:02:29,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:02:34,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:02:39,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:02:44,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:02:49,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:02:54,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:02:59,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:03:04,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:03:09,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:03:14,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:03:19,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:03:24,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:03:29,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:03:34,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:03:39,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:03:44,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:03:49,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:03:54,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:03:59,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:03:59,993][09194] Saving /home/gpu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-03-02 11:03:59,993][09136] Components not started: InferenceWorker_p0-w0, wait_time=600.0 seconds
[2023-03-02 11:04:04,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:04:09,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:04:14,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:04:19,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:04:24,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:04:29,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:04:34,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:04:39,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:04:44,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:04:49,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:04:54,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:04:59,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:05:04,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:05:09,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:05:14,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:05:19,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:05:24,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:05:29,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:05:34,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:05:39,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:05:44,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:05:49,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:05:54,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:05:59,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:05:59,993][09194] Saving /home/gpu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-03-02 11:06:04,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:06:09,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:06:14,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:06:19,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:06:24,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:06:29,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:06:34,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:06:39,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:06:44,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:06:49,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:06:54,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:06:59,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:07:04,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:07:09,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:07:14,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:07:19,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:07:24,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:07:29,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:07:34,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:07:39,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:07:44,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:07:49,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:07:54,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:07:59,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:07:59,993][09194] Saving /home/gpu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-03-02 11:08:04,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:08:09,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:08:14,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:08:19,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:08:24,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:08:29,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:08:34,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:08:39,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:08:44,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:08:49,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:08:54,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:08:59,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:09:04,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:09:09,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:09:14,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:09:19,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:09:24,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:09:29,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:09:34,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:09:39,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:09:44,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:09:49,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:09:54,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:09:59,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:09:59,993][09194] Saving /home/gpu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-03-02 11:10:04,992][09136] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:10:09,285][09136] Keyboard interrupt detected in the event loop EvtLoop [Runner_EvtLoop, process=main process 9136], exiting...
[2023-03-02 11:10:09,285][09229] Stopping RolloutWorker_w1...
[2023-03-02 11:10:09,285][09136] Runner profile tree view:
main_loop: 967.7497
[2023-03-02 11:10:09,285][09226] Stopping RolloutWorker_w4...
[2023-03-02 11:10:09,285][09227] Stopping RolloutWorker_w5...
[2023-03-02 11:10:09,285][09224] Stopping RolloutWorker_w2...
[2023-03-02 11:10:09,285][09223] Stopping RolloutWorker_w0...
[2023-03-02 11:10:09,285][09228] Stopping RolloutWorker_w6...
[2023-03-02 11:10:09,286][09136] Collected {0: 0}, FPS: 0.0
[2023-03-02 11:10:09,285][09225] Stopping RolloutWorker_w3...
[2023-03-02 11:10:09,286][09229] Loop rollout_proc1_evt_loop terminating...
[2023-03-02 11:10:09,286][09227] Loop rollout_proc5_evt_loop terminating...
[2023-03-02 11:10:09,286][09230] Stopping RolloutWorker_w7...
[2023-03-02 11:10:09,286][09226] Loop rollout_proc4_evt_loop terminating...
[2023-03-02 11:10:09,286][09194] Stopping Batcher_0...
[2023-03-02 11:10:09,286][09224] Loop rollout_proc2_evt_loop terminating...
[2023-03-02 11:10:09,286][09225] Loop rollout_proc3_evt_loop terminating...
[2023-03-02 11:10:09,286][09223] Loop rollout_proc0_evt_loop terminating...
[2023-03-02 11:10:09,286][09228] Loop rollout_proc6_evt_loop terminating...
[2023-03-02 11:10:09,286][09230] Loop rollout_proc7_evt_loop terminating...
[2023-03-02 11:10:09,286][09194] Loop batcher_evt_loop terminating...
[2023-03-02 11:10:09,287][09194] Saving /home/gpu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-03-02 11:10:09,311][09194] Stopping LearnerWorker_p0...
[2023-03-02 11:10:09,311][09194] Loop learner_proc0_evt_loop terminating...
[2023-03-02 11:10:09,380][09136] Loading existing experiment configuration from /home/gpu/train_dir/default_experiment/config.json
[2023-03-02 11:10:09,380][09136] Overriding arg 'num_workers' with value 1 passed from command line
[2023-03-02 11:10:09,380][09136] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-03-02 11:10:09,380][09136] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-03-02 11:10:09,381][09136] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-03-02 11:10:09,381][09136] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-03-02 11:10:09,381][09136] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
[2023-03-02 11:10:09,381][09136] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-03-02 11:10:09,381][09136] Adding new argument 'push_to_hub'=False that is not in the saved config file!
[2023-03-02 11:10:09,381][09136] Adding new argument 'hf_repository'=None that is not in the saved config file!
[2023-03-02 11:10:09,381][09136] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-03-02 11:10:09,381][09136] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-03-02 11:10:09,381][09136] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-03-02 11:10:09,381][09136] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-03-02 11:10:09,382][09136] Using frameskip 1 and render_action_repeat=4 for evaluation
[2023-03-02 11:10:09,392][09136] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-03-02 11:10:09,393][09136] RunningMeanStd input shape: (3, 72, 128)
[2023-03-02 11:10:09,395][09136] RunningMeanStd input shape: (1,)
[2023-03-02 11:10:09,405][09136] ConvEncoder: input_channels=3
[2023-03-02 11:10:09,588][09136] Conv encoder output size: 512
[2023-03-02 11:10:09,588][09136] Policy head output size: 512
[2023-03-02 11:10:11,311][09136] Loading state from checkpoint /home/gpu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-03-02 11:10:12,026][09136] Num frames 100...
[2023-03-02 11:10:12,149][09136] Num frames 200...
[2023-03-02 11:10:12,271][09136] Num frames 300...
[2023-03-02 11:10:12,391][09136] Num frames 400...
[2023-03-02 11:10:12,541][09136] Avg episode rewards: #0: 6.800, true rewards: #0: 4.800
[2023-03-02 11:10:12,541][09136] Avg episode reward: 6.800, avg true_objective: 4.800
[2023-03-02 11:10:12,566][09136] Num frames 500...
[2023-03-02 11:10:12,688][09136] Num frames 600...
[2023-03-02 11:10:12,810][09136] Num frames 700...
[2023-03-02 11:10:12,932][09136] Num frames 800...
[2023-03-02 11:10:13,063][09136] Avg episode rewards: #0: 5.320, true rewards: #0: 4.320
[2023-03-02 11:10:13,063][09136] Avg episode reward: 5.320, avg true_objective: 4.320
[2023-03-02 11:10:13,107][09136] Num frames 900...
[2023-03-02 11:10:13,231][09136] Num frames 1000...
[2023-03-02 11:10:13,352][09136] Num frames 1100...
[2023-03-02 11:10:13,474][09136] Num frames 1200...
[2023-03-02 11:10:13,586][09136] Avg episode rewards: #0: 4.827, true rewards: #0: 4.160
[2023-03-02 11:10:13,587][09136] Avg episode reward: 4.827, avg true_objective: 4.160
[2023-03-02 11:10:13,651][09136] Num frames 1300...
[2023-03-02 11:10:13,773][09136] Num frames 1400...
[2023-03-02 11:10:13,894][09136] Num frames 1500...
[2023-03-02 11:10:14,016][09136] Num frames 1600...
[2023-03-02 11:10:14,112][09136] Avg episode rewards: #0: 4.580, true rewards: #0: 4.080
[2023-03-02 11:10:14,112][09136] Avg episode reward: 4.580, avg true_objective: 4.080
[2023-03-02 11:10:14,207][09136] Num frames 1700...
[2023-03-02 11:10:14,332][09136] Num frames 1800...
[2023-03-02 11:10:14,455][09136] Num frames 1900...
[2023-03-02 11:10:14,576][09136] Num frames 2000...
[2023-03-02 11:10:14,648][09136] Avg episode rewards: #0: 4.432, true rewards: #0: 4.032
[2023-03-02 11:10:14,648][09136] Avg episode reward: 4.432, avg true_objective: 4.032
[2023-03-02 11:10:14,751][09136] Num frames 2100...
[2023-03-02 11:10:14,874][09136] Num frames 2200...
[2023-03-02 11:10:14,995][09136] Num frames 2300...
[2023-03-02 11:10:15,116][09136] Num frames 2400...
[2023-03-02 11:10:15,211][09136] Avg episode rewards: #0: 4.553, true rewards: #0: 4.053
[2023-03-02 11:10:15,211][09136] Avg episode reward: 4.553, avg true_objective: 4.053
[2023-03-02 11:10:15,302][09136] Num frames 2500...
[2023-03-02 11:10:15,435][09136] Num frames 2600...
[2023-03-02 11:10:15,558][09136] Num frames 2700...
[2023-03-02 11:10:15,683][09136] Num frames 2800...
[2023-03-02 11:10:15,755][09136] Avg episode rewards: #0: 4.451, true rewards: #0: 4.023
[2023-03-02 11:10:15,755][09136] Avg episode reward: 4.451, avg true_objective: 4.023
[2023-03-02 11:10:15,861][09136] Num frames 2900...
[2023-03-02 11:10:15,982][09136] Num frames 3000...
[2023-03-02 11:10:16,103][09136] Num frames 3100...
[2023-03-02 11:10:16,229][09136] Num frames 3200...
[2023-03-02 11:10:16,279][09136] Avg episode rewards: #0: 4.375, true rewards: #0: 4.000
[2023-03-02 11:10:16,280][09136] Avg episode reward: 4.375, avg true_objective: 4.000
[2023-03-02 11:10:16,400][09136] Num frames 3300...
[2023-03-02 11:10:16,521][09136] Num frames 3400...
[2023-03-02 11:10:16,645][09136] Num frames 3500...
[2023-03-02 11:10:16,814][09136] Avg episode rewards: #0: 4.316, true rewards: #0: 3.982
[2023-03-02 11:10:16,814][09136] Avg episode reward: 4.316, avg true_objective: 3.982
[2023-03-02 11:10:16,839][09136] Num frames 3600...
[2023-03-02 11:10:16,983][09136] Num frames 3700...
[2023-03-02 11:10:17,126][09136] Num frames 3800...
[2023-03-02 11:10:17,269][09136] Num frames 3900...
[2023-03-02 11:10:17,420][09136] Avg episode rewards: #0: 4.268, true rewards: #0: 3.968
[2023-03-02 11:10:17,421][09136] Avg episode reward: 4.268, avg true_objective: 3.968
[2023-03-02 11:10:21,290][09136] Replay video saved to /home/gpu/train_dir/default_experiment/replay.mp4!
[2023-03-02 11:10:50,666][09136] Loading existing experiment configuration from /home/gpu/train_dir/default_experiment/config.json
[2023-03-02 11:10:50,667][09136] Overriding arg 'num_workers' with value 1 passed from command line
[2023-03-02 11:10:50,667][09136] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-03-02 11:10:50,667][09136] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-03-02 11:10:50,667][09136] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-03-02 11:10:50,667][09136] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-03-02 11:10:50,667][09136] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2023-03-02 11:10:50,667][09136] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-03-02 11:10:50,667][09136] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2023-03-02 11:10:50,667][09136] Adding new argument 'hf_repository'='nhiro3303/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2023-03-02 11:10:50,668][09136] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-03-02 11:10:50,668][09136] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-03-02 11:10:50,668][09136] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-03-02 11:10:50,668][09136] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-03-02 11:10:50,668][09136] Using frameskip 1 and render_action_repeat=4 for evaluation
[2023-03-02 11:10:50,671][09136] RunningMeanStd input shape: (3, 72, 128)
[2023-03-02 11:10:50,671][09136] RunningMeanStd input shape: (1,)
[2023-03-02 11:10:50,677][09136] ConvEncoder: input_channels=3
[2023-03-02 11:10:50,701][09136] Conv encoder output size: 512
[2023-03-02 11:10:50,701][09136] Policy head output size: 512
[2023-03-02 11:10:50,723][09136] Loading state from checkpoint /home/gpu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-03-02 11:10:51,059][09136] Num frames 100...
[2023-03-02 11:10:51,268][09136] Num frames 200...
[2023-03-02 11:10:51,456][09136] Num frames 300...
[2023-03-02 11:10:51,668][09136] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
[2023-03-02 11:10:51,668][09136] Avg episode reward: 3.840, avg true_objective: 3.840
[2023-03-02 11:10:51,703][09136] Num frames 400...
[2023-03-02 11:10:51,892][09136] Num frames 500...
[2023-03-02 11:10:52,084][09136] Num frames 600...
[2023-03-02 11:10:52,294][09136] Num frames 700...
[2023-03-02 11:10:52,483][09136] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
[2023-03-02 11:10:52,483][09136] Avg episode reward: 3.840, avg true_objective: 3.840
[2023-03-02 11:10:52,545][09136] Num frames 800...
[2023-03-02 11:10:52,734][09136] Num frames 900...
[2023-03-02 11:10:52,922][09136] Num frames 1000...
[2023-03-02 11:10:53,112][09136] Num frames 1100...
[2023-03-02 11:10:53,281][09136] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
[2023-03-02 11:10:53,281][09136] Avg episode reward: 3.840, avg true_objective: 3.840
[2023-03-02 11:10:53,381][09136] Num frames 1200...
[2023-03-02 11:10:53,575][09136] Num frames 1300...
[2023-03-02 11:10:53,774][09136] Num frames 1400...
[2023-03-02 11:10:53,965][09136] Num frames 1500...
[2023-03-02 11:10:54,174][09136] Num frames 1600...
[2023-03-02 11:10:54,225][09136] Avg episode rewards: #0: 4.250, true rewards: #0: 4.000
[2023-03-02 11:10:54,226][09136] Avg episode reward: 4.250, avg true_objective: 4.000
[2023-03-02 11:10:54,429][09136] Num frames 1700...
[2023-03-02 11:10:54,619][09136] Num frames 1800...
[2023-03-02 11:10:54,808][09136] Num frames 1900...
[2023-03-02 11:10:55,001][09136] Num frames 2000...
[2023-03-02 11:10:55,154][09136] Avg episode rewards: #0: 4.496, true rewards: #0: 4.096
[2023-03-02 11:10:55,154][09136] Avg episode reward: 4.496, avg true_objective: 4.096
[2023-03-02 11:10:55,266][09136] Num frames 2100...
[2023-03-02 11:10:55,467][09136] Num frames 2200...
[2023-03-02 11:10:55,664][09136] Num frames 2300...
[2023-03-02 11:10:55,853][09136] Num frames 2400...
[2023-03-02 11:10:55,973][09136] Avg episode rewards: #0: 4.387, true rewards: #0: 4.053
[2023-03-02 11:10:55,974][09136] Avg episode reward: 4.387, avg true_objective: 4.053
[2023-03-02 11:10:56,107][09136] Num frames 2500...
[2023-03-02 11:10:56,310][09136] Num frames 2600...
[2023-03-02 11:10:56,504][09136] Num frames 2700...
[2023-03-02 11:10:56,699][09136] Num frames 2800...
[2023-03-02 11:10:56,856][09136] Avg episode rewards: #0: 4.497, true rewards: #0: 4.069
[2023-03-02 11:10:56,856][09136] Avg episode reward: 4.497, avg true_objective: 4.069
[2023-03-02 11:10:56,960][09136] Num frames 2900...
[2023-03-02 11:10:57,156][09136] Num frames 3000...
[2023-03-02 11:10:57,352][09136] Num frames 3100...
[2023-03-02 11:10:57,544][09136] Num frames 3200...
[2023-03-02 11:10:57,665][09136] Avg episode rewards: #0: 4.415, true rewards: #0: 4.040
[2023-03-02 11:10:57,665][09136] Avg episode reward: 4.415, avg true_objective: 4.040
[2023-03-02 11:10:57,799][09136] Num frames 3300...
[2023-03-02 11:10:57,992][09136] Num frames 3400...
[2023-03-02 11:10:58,191][09136] Num frames 3500...
[2023-03-02 11:10:58,380][09136] Num frames 3600...
[2023-03-02 11:10:58,583][09136] Avg episode rewards: #0: 4.533, true rewards: #0: 4.089
[2023-03-02 11:10:58,584][09136] Avg episode reward: 4.533, avg true_objective: 4.089
[2023-03-02 11:10:58,630][09136] Num frames 3700...
[2023-03-02 11:10:58,827][09136] Num frames 3800...
[2023-03-02 11:10:59,015][09136] Num frames 3900...
[2023-03-02 11:10:59,232][09136] Num frames 4000...
[2023-03-02 11:10:59,408][09136] Avg episode rewards: #0: 4.464, true rewards: #0: 4.064
[2023-03-02 11:10:59,408][09136] Avg episode reward: 4.464, avg true_objective: 4.064
[2023-03-02 11:11:03,282][09136] Replay video saved to /home/gpu/train_dir/default_experiment/replay.mp4!
[2023-03-02 11:11:22,494][09136] The model has been pushed to https://huggingface.co/nhiro3303/rl_course_vizdoom_health_gathering_supreme
[2023-03-02 11:41:48,937][09917] Saving configuration to /home/gpu/train_dir/default_experiment/config.json...
[2023-03-02 11:41:48,938][09917] Rollout worker 0 uses device cpu
[2023-03-02 11:41:48,938][09917] Rollout worker 1 uses device cpu
[2023-03-02 11:41:48,938][09917] Rollout worker 2 uses device cpu
[2023-03-02 11:41:48,938][09917] Rollout worker 3 uses device cpu
[2023-03-02 11:41:48,938][09917] Rollout worker 4 uses device cpu
[2023-03-02 11:41:48,938][09917] Rollout worker 5 uses device cpu
[2023-03-02 11:41:48,938][09917] Rollout worker 6 uses device cpu
[2023-03-02 11:41:48,938][09917] Rollout worker 7 uses device cpu
[2023-03-02 11:41:48,966][09917] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-02 11:41:48,966][09917] InferenceWorker_p0-w0: min num requests: 2
[2023-03-02 11:41:48,983][09917] Starting all processes...
[2023-03-02 11:41:48,983][09917] Starting process learner_proc0
[2023-03-02 11:41:49,683][09917] Starting all processes...
[2023-03-02 11:41:49,686][09975] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-02 11:41:49,686][09975] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2023-03-02 11:41:49,686][09917] Starting process inference_proc0-0
[2023-03-02 11:41:49,686][09917] Starting process rollout_proc0
[2023-03-02 11:41:49,686][09917] Starting process rollout_proc1
[2023-03-02 11:41:49,688][09917] Starting process rollout_proc2
[2023-03-02 11:41:49,692][09917] Starting process rollout_proc3
[2023-03-02 11:41:49,694][09917] Starting process rollout_proc4
[2023-03-02 11:41:49,694][09917] Starting process rollout_proc5
[2023-03-02 11:41:49,694][09917] Starting process rollout_proc6
[2023-03-02 11:41:49,694][09917] Starting process rollout_proc7
[2023-03-02 11:41:49,721][09975] Num visible devices: 1
[2023-03-02 11:41:49,748][09975] Starting seed is not provided
[2023-03-02 11:41:49,748][09975] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-02 11:41:49,748][09975] Initializing actor-critic model on device cuda:0
[2023-03-02 11:41:49,749][09975] RunningMeanStd input shape: (3, 72, 128)
[2023-03-02 11:41:49,750][09975] RunningMeanStd input shape: (1,)
[2023-03-02 11:41:49,762][09975] ConvEncoder: input_channels=3
[2023-03-02 11:41:50,005][09975] Conv encoder output size: 512
[2023-03-02 11:41:50,005][09975] Policy head output size: 512
[2023-03-02 11:41:50,018][09975] Created Actor Critic model with architecture:
[2023-03-02 11:41:50,019][09975] ActorCriticSharedWeights(
(obs_normalizer): ObservationNormalizer(
(running_mean_std): RunningMeanStdDictInPlace(
(running_mean_std): ModuleDict(
(obs): RunningMeanStdInPlace()
)
)
)
(returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
(encoder): VizdoomEncoder(
(basic_encoder): ConvEncoder(
(enc): RecursiveScriptModule(
original_name=ConvEncoderImpl
(conv_head): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Conv2d)
(1): RecursiveScriptModule(original_name=ELU)
(2): RecursiveScriptModule(original_name=Conv2d)
(3): RecursiveScriptModule(original_name=ELU)
(4): RecursiveScriptModule(original_name=Conv2d)
(5): RecursiveScriptModule(original_name=ELU)
)
(mlp_layers): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Linear)
(1): RecursiveScriptModule(original_name=ELU)
)
)
)
)
(core): ModelCoreRNN(
(core): GRU(512, 512)
)
(decoder): MlpDecoder(
(mlp): Identity()
)
(critic_linear): Linear(in_features=512, out_features=1, bias=True)
(action_parameterization): ActionParameterizationDefault(
(distribution_linear): Linear(in_features=512, out_features=5, bias=True)
)
)
[2023-03-02 11:41:50,847][10004] Worker 0 uses CPU cores [0, 1]
[2023-03-02 11:41:50,857][10011] Worker 7 uses CPU cores [14, 15]
[2023-03-02 11:41:50,857][10005] Worker 1 uses CPU cores [2, 3]
[2023-03-02 11:41:50,857][10009] Worker 5 uses CPU cores [10, 11]
[2023-03-02 11:41:50,888][10006] Worker 2 uses CPU cores [4, 5]
[2023-03-02 11:41:50,890][10007] Worker 3 uses CPU cores [6, 7]
[2023-03-02 11:41:50,891][10010] Worker 6 uses CPU cores [12, 13]
[2023-03-02 11:41:50,908][10003] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-02 11:41:50,908][10003] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2023-03-02 11:41:50,921][10003] Num visible devices: 1
[2023-03-02 11:41:50,924][10008] Worker 4 uses CPU cores [8, 9]
[2023-03-02 11:41:52,109][09975] Using optimizer <class 'torch.optim.adam.Adam'>
[2023-03-02 11:41:52,109][09975] Loading state from checkpoint /home/gpu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000856_3506176.pth...
[2023-03-02 11:41:52,129][09975] Loading model from checkpoint
[2023-03-02 11:41:52,136][09975] Loaded experiment state at self.train_step=856, self.env_steps=3506176
[2023-03-02 11:41:52,137][09975] Initialized policy 0 weights for model version 856
[2023-03-02 11:41:52,140][09975] LearnerWorker_p0 finished initialization!
[2023-03-02 11:41:52,140][09975] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-02 11:41:52,242][10003] Unhandled exception CUDA error: invalid resource handle in evt loop inference_proc0-0_evt_loop
[2023-03-02 11:41:52,400][09917] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 3506176. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:41:57,400][09917] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 3506176. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:42:02,400][09917] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 3506176. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:42:07,400][09917] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 3506176. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:42:08,483][09917] Keyboard interrupt detected in the event loop EvtLoop [Runner_EvtLoop, process=main process 9917], exiting...
[2023-03-02 11:42:08,484][10009] Stopping RolloutWorker_w5...
[2023-03-02 11:42:08,484][10008] Stopping RolloutWorker_w4...
[2023-03-02 11:42:08,484][09917] Runner profile tree view:
main_loop: 19.5008
[2023-03-02 11:42:08,484][09917] Collected {0: 3506176}, FPS: 0.0
[2023-03-02 11:42:08,484][10005] Stopping RolloutWorker_w1...
[2023-03-02 11:42:08,484][10006] Stopping RolloutWorker_w2...
[2023-03-02 11:42:08,484][10010] Stopping RolloutWorker_w6...
[2023-03-02 11:42:08,484][10009] Loop rollout_proc5_evt_loop terminating...
[2023-03-02 11:42:08,484][10007] Stopping RolloutWorker_w3...
[2023-03-02 11:42:08,484][09975] Stopping Batcher_0...
[2023-03-02 11:42:08,484][10004] Stopping RolloutWorker_w0...
[2023-03-02 11:42:08,484][10008] Loop rollout_proc4_evt_loop terminating...
[2023-03-02 11:42:08,484][10011] Stopping RolloutWorker_w7...
[2023-03-02 11:42:08,484][10006] Loop rollout_proc2_evt_loop terminating...
[2023-03-02 11:42:08,484][10005] Loop rollout_proc1_evt_loop terminating...
[2023-03-02 11:42:08,484][10010] Loop rollout_proc6_evt_loop terminating...
[2023-03-02 11:42:08,484][10007] Loop rollout_proc3_evt_loop terminating...
[2023-03-02 11:42:08,484][10004] Loop rollout_proc0_evt_loop terminating...
[2023-03-02 11:42:08,484][10011] Loop rollout_proc7_evt_loop terminating...
[2023-03-02 11:42:08,484][09975] Loop batcher_evt_loop terminating...
[2023-03-02 11:42:08,485][09975] Saving /home/gpu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000856_3506176.pth...
[2023-03-02 11:42:08,546][09975] Stopping LearnerWorker_p0...
[2023-03-02 11:42:08,547][09975] Loop learner_proc0_evt_loop terminating...
[2023-03-02 11:42:08,580][09917] Loading existing experiment configuration from /home/gpu/train_dir/default_experiment/config.json
[2023-03-02 11:42:08,580][09917] Overriding arg 'num_workers' with value 1 passed from command line
[2023-03-02 11:42:08,580][09917] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-03-02 11:42:08,581][09917] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-03-02 11:42:08,581][09917] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-03-02 11:42:08,581][09917] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-03-02 11:42:08,581][09917] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
[2023-03-02 11:42:08,581][09917] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-03-02 11:42:08,581][09917] Adding new argument 'push_to_hub'=False that is not in the saved config file!
[2023-03-02 11:42:08,581][09917] Adding new argument 'hf_repository'=None that is not in the saved config file!
[2023-03-02 11:42:08,582][09917] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-03-02 11:42:08,582][09917] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-03-02 11:42:08,582][09917] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-03-02 11:42:08,582][09917] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-03-02 11:42:08,582][09917] Using frameskip 1 and render_action_repeat=4 for evaluation
[2023-03-02 11:42:08,592][09917] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-03-02 11:42:08,593][09917] RunningMeanStd input shape: (3, 72, 128)
[2023-03-02 11:42:08,593][09917] RunningMeanStd input shape: (1,)
[2023-03-02 11:42:08,606][09917] ConvEncoder: input_channels=3
[2023-03-02 11:42:08,837][09917] Conv encoder output size: 512
[2023-03-02 11:42:08,838][09917] Policy head output size: 512
[2023-03-02 11:42:10,551][09917] Loading state from checkpoint /home/gpu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000856_3506176.pth...
[2023-03-02 11:42:11,320][09917] Num frames 100...
[2023-03-02 11:42:11,449][09917] Num frames 200...
[2023-03-02 11:42:11,577][09917] Num frames 300...
[2023-03-02 11:42:11,704][09917] Num frames 400...
[2023-03-02 11:42:11,818][09917] Avg episode rewards: #0: 5.480, true rewards: #0: 4.480
[2023-03-02 11:42:11,818][09917] Avg episode reward: 5.480, avg true_objective: 4.480
[2023-03-02 11:42:11,886][09917] Num frames 500...
[2023-03-02 11:42:12,014][09917] Num frames 600...
[2023-03-02 11:42:12,141][09917] Num frames 700...
[2023-03-02 11:42:12,267][09917] Num frames 800...
[2023-03-02 11:42:12,404][09917] Avg episode rewards: #0: 5.320, true rewards: #0: 4.320
[2023-03-02 11:42:12,404][09917] Avg episode reward: 5.320, avg true_objective: 4.320
[2023-03-02 11:42:12,450][09917] Num frames 900...
[2023-03-02 11:42:12,576][09917] Num frames 1000...
[2023-03-02 11:42:12,701][09917] Num frames 1100...
[2023-03-02 11:42:12,827][09917] Num frames 1200...
[2023-03-02 11:42:12,940][09917] Avg episode rewards: #0: 4.827, true rewards: #0: 4.160
[2023-03-02 11:42:12,940][09917] Avg episode reward: 4.827, avg true_objective: 4.160
[2023-03-02 11:42:13,007][09917] Num frames 1300...
[2023-03-02 11:42:13,136][09917] Num frames 1400...
[2023-03-02 11:42:13,262][09917] Num frames 1500...
[2023-03-02 11:42:13,393][09917] Num frames 1600...
[2023-03-02 11:42:13,487][09917] Avg episode rewards: #0: 4.580, true rewards: #0: 4.080
[2023-03-02 11:42:13,487][09917] Avg episode reward: 4.580, avg true_objective: 4.080
[2023-03-02 11:42:13,578][09917] Num frames 1700...
[2023-03-02 11:42:13,705][09917] Num frames 1800...
[2023-03-02 11:42:13,834][09917] Num frames 1900...
[2023-03-02 11:42:13,961][09917] Num frames 2000...
[2023-03-02 11:42:14,036][09917] Avg episode rewards: #0: 4.432, true rewards: #0: 4.032
[2023-03-02 11:42:14,036][09917] Avg episode reward: 4.432, avg true_objective: 4.032
[2023-03-02 11:42:14,156][09917] Num frames 2100...
[2023-03-02 11:42:14,296][09917] Num frames 2200...
[2023-03-02 11:42:14,484][09917] Num frames 2300...
[2023-03-02 11:42:14,624][09917] Num frames 2400...
[2023-03-02 11:42:14,675][09917] Avg episode rewards: #0: 4.333, true rewards: #0: 4.000
[2023-03-02 11:42:14,675][09917] Avg episode reward: 4.333, avg true_objective: 4.000
[2023-03-02 11:42:14,803][09917] Num frames 2500...
[2023-03-02 11:42:14,929][09917] Num frames 2600...
[2023-03-02 11:42:15,056][09917] Num frames 2700...
[2023-03-02 11:42:15,184][09917] Num frames 2800...
[2023-03-02 11:42:15,299][09917] Avg episode rewards: #0: 4.497, true rewards: #0: 4.069
[2023-03-02 11:42:15,300][09917] Avg episode reward: 4.497, avg true_objective: 4.069
[2023-03-02 11:42:15,376][09917] Num frames 2900...
[2023-03-02 11:42:15,507][09917] Num frames 3000...
[2023-03-02 11:42:15,632][09917] Num frames 3100...
[2023-03-02 11:42:15,772][09917] Num frames 3200...
[2023-03-02 11:42:15,974][09917] Avg episode rewards: #0: 4.620, true rewards: #0: 4.120
[2023-03-02 11:42:15,974][09917] Avg episode reward: 4.620, avg true_objective: 4.120
[2023-03-02 11:42:15,982][09917] Num frames 3300...
[2023-03-02 11:42:16,136][09917] Num frames 3400...
[2023-03-02 11:42:16,291][09917] Num frames 3500...
[2023-03-02 11:42:16,451][09917] Num frames 3600...
[2023-03-02 11:42:16,637][09917] Avg episode rewards: #0: 4.533, true rewards: #0: 4.089
[2023-03-02 11:42:16,638][09917] Avg episode reward: 4.533, avg true_objective: 4.089
[2023-03-02 11:42:16,676][09917] Num frames 3700...
[2023-03-02 11:42:16,842][09917] Num frames 3800...
[2023-03-02 11:42:17,012][09917] Num frames 3900...
[2023-03-02 11:42:17,185][09917] Num frames 4000...
[2023-03-02 11:42:17,362][09917] Avg episode rewards: #0: 4.464, true rewards: #0: 4.064
[2023-03-02 11:42:17,362][09917] Avg episode reward: 4.464, avg true_objective: 4.064
[2023-03-02 11:42:21,426][09917] Replay video saved to /home/gpu/train_dir/default_experiment/replay.mp4!
[2023-03-02 11:42:32,345][09917] Loading existing experiment configuration from /home/gpu/train_dir/default_experiment/config.json
[2023-03-02 11:42:32,345][09917] Overriding arg 'num_workers' with value 1 passed from command line
[2023-03-02 11:42:32,345][09917] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-03-02 11:42:32,345][09917] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-03-02 11:42:32,345][09917] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-03-02 11:42:32,345][09917] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-03-02 11:42:32,345][09917] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2023-03-02 11:42:32,345][09917] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-03-02 11:42:32,346][09917] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2023-03-02 11:42:32,346][09917] Adding new argument 'hf_repository'='nhiro3303/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2023-03-02 11:42:32,346][09917] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-03-02 11:42:32,346][09917] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-03-02 11:42:32,346][09917] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-03-02 11:42:32,346][09917] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-03-02 11:42:32,346][09917] Using frameskip 1 and render_action_repeat=4 for evaluation
[2023-03-02 11:42:32,349][09917] RunningMeanStd input shape: (3, 72, 128)
[2023-03-02 11:42:32,349][09917] RunningMeanStd input shape: (1,)
[2023-03-02 11:42:32,356][09917] ConvEncoder: input_channels=3
[2023-03-02 11:42:32,379][09917] Conv encoder output size: 512
[2023-03-02 11:42:32,379][09917] Policy head output size: 512
[2023-03-02 11:42:32,399][09917] Loading state from checkpoint /home/gpu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000856_3506176.pth...
[2023-03-02 11:42:32,789][09917] Num frames 100...
[2023-03-02 11:42:33,008][09917] Num frames 200...
[2023-03-02 11:42:33,216][09917] Num frames 300...
[2023-03-02 11:42:33,431][09917] Num frames 400...
[2023-03-02 11:42:33,520][09917] Avg episode rewards: #0: 4.160, true rewards: #0: 4.160
[2023-03-02 11:42:33,521][09917] Avg episode reward: 4.160, avg true_objective: 4.160
[2023-03-02 11:42:33,708][09917] Num frames 500...
[2023-03-02 11:42:33,921][09917] Num frames 600...
[2023-03-02 11:42:34,148][09917] Num frames 700...
[2023-03-02 11:42:34,362][09917] Num frames 800...
[2023-03-02 11:42:34,484][09917] Avg episode rewards: #0: 4.660, true rewards: #0: 4.160
[2023-03-02 11:42:34,484][09917] Avg episode reward: 4.660, avg true_objective: 4.160
[2023-03-02 11:42:34,633][09917] Num frames 900...
[2023-03-02 11:42:34,835][09917] Num frames 1000...
[2023-03-02 11:42:35,067][09917] Avg episode rewards: #0: 3.960, true rewards: #0: 3.627
[2023-03-02 11:42:35,067][09917] Avg episode reward: 3.960, avg true_objective: 3.627
[2023-03-02 11:42:35,089][09917] Num frames 1100...
[2023-03-02 11:42:35,305][09917] Num frames 1200...
[2023-03-02 11:42:35,521][09917] Num frames 1300...
[2023-03-02 11:42:35,726][09917] Num frames 1400...
[2023-03-02 11:42:35,864][09917] Avg episode rewards: #0: 4.100, true rewards: #0: 3.600
[2023-03-02 11:42:35,864][09917] Avg episode reward: 4.100, avg true_objective: 3.600
[2023-03-02 11:42:35,991][09917] Num frames 1500...
[2023-03-02 11:42:36,195][09917] Num frames 1600...
[2023-03-02 11:42:36,414][09917] Num frames 1700...
[2023-03-02 11:42:36,616][09917] Num frames 1800...
[2023-03-02 11:42:36,718][09917] Avg episode rewards: #0: 4.048, true rewards: #0: 3.648
[2023-03-02 11:42:36,718][09917] Avg episode reward: 4.048, avg true_objective: 3.648
[2023-03-02 11:42:36,882][09917] Num frames 1900...
[2023-03-02 11:42:37,083][09917] Num frames 2000...
[2023-03-02 11:42:37,277][09917] Num frames 2100...
[2023-03-02 11:42:37,490][09917] Num frames 2200...
[2023-03-02 11:42:37,561][09917] Avg episode rewards: #0: 4.013, true rewards: #0: 3.680
[2023-03-02 11:42:37,561][09917] Avg episode reward: 4.013, avg true_objective: 3.680
[2023-03-02 11:42:37,746][09917] Num frames 2300...
[2023-03-02 11:42:38,005][09917] Num frames 2400...
[2023-03-02 11:42:38,214][09917] Num frames 2500...
[2023-03-02 11:42:38,449][09917] Avg episode rewards: #0: 3.989, true rewards: #0: 3.703
[2023-03-02 11:42:38,449][09917] Avg episode reward: 3.989, avg true_objective: 3.703
[2023-03-02 11:42:38,466][09917] Num frames 2600...
[2023-03-02 11:42:38,670][09917] Num frames 2700...
[2023-03-02 11:42:38,865][09917] Num frames 2800...
[2023-03-02 11:42:39,070][09917] Num frames 2900...
[2023-03-02 11:42:39,290][09917] Avg episode rewards: #0: 3.970, true rewards: #0: 3.720
[2023-03-02 11:42:39,290][09917] Avg episode reward: 3.970, avg true_objective: 3.720
[2023-03-02 11:42:39,353][09917] Num frames 3000...
[2023-03-02 11:42:39,552][09917] Num frames 3100...
[2023-03-02 11:42:39,763][09917] Num frames 3200...
[2023-03-02 11:42:39,965][09917] Num frames 3300...
[2023-03-02 11:42:40,144][09917] Avg episode rewards: #0: 3.956, true rewards: #0: 3.733
[2023-03-02 11:42:40,145][09917] Avg episode reward: 3.956, avg true_objective: 3.733
[2023-03-02 11:42:40,232][09917] Num frames 3400...
[2023-03-02 11:42:40,452][09917] Num frames 3500...
[2023-03-02 11:42:40,660][09917] Num frames 3600...
[2023-03-02 11:42:40,745][09917] Avg episode rewards: #0: 3.816, true rewards: #0: 3.616
[2023-03-02 11:42:40,745][09917] Avg episode reward: 3.816, avg true_objective: 3.616
[2023-03-02 11:42:44,193][09917] Replay video saved to /home/gpu/train_dir/default_experiment/replay.mp4!
[2023-03-02 11:45:25,806][09917] The model has been pushed to https://huggingface.co/nhiro3303/rl_course_vizdoom_health_gathering_supreme
[2023-03-02 11:51:30,436][10553] Saving configuration to /home/gpu/train_dir/default_experiment/config.json...
[2023-03-02 11:51:30,436][10553] Rollout worker 0 uses device cpu
[2023-03-02 11:51:30,436][10553] Rollout worker 1 uses device cpu
[2023-03-02 11:51:30,437][10553] Rollout worker 2 uses device cpu
[2023-03-02 11:51:30,437][10553] Rollout worker 3 uses device cpu
[2023-03-02 11:51:30,437][10553] Rollout worker 4 uses device cpu
[2023-03-02 11:51:30,437][10553] Rollout worker 5 uses device cpu
[2023-03-02 11:51:30,437][10553] Rollout worker 6 uses device cpu
[2023-03-02 11:51:30,437][10553] Rollout worker 7 uses device cpu
[2023-03-02 11:51:30,464][10553] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-02 11:51:30,464][10553] InferenceWorker_p0-w0: min num requests: 2
[2023-03-02 11:51:30,481][10553] Starting all processes...
[2023-03-02 11:51:30,481][10553] Starting process learner_proc0
[2023-03-02 11:51:31,134][10553] Starting all processes...
[2023-03-02 11:51:31,137][10553] Starting process inference_proc0-0
[2023-03-02 11:51:31,137][10611] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-02 11:51:31,137][10611] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2023-03-02 11:51:31,137][10553] Starting process rollout_proc0
[2023-03-02 11:51:31,137][10553] Starting process rollout_proc1
[2023-03-02 11:51:31,138][10553] Starting process rollout_proc2
[2023-03-02 11:51:31,138][10553] Starting process rollout_proc3
[2023-03-02 11:51:31,138][10553] Starting process rollout_proc4
[2023-03-02 11:51:31,140][10553] Starting process rollout_proc5
[2023-03-02 11:51:31,142][10553] Starting process rollout_proc6
[2023-03-02 11:51:31,142][10553] Starting process rollout_proc7
[2023-03-02 11:51:31,173][10611] Num visible devices: 1
[2023-03-02 11:51:31,197][10611] Starting seed is not provided
[2023-03-02 11:51:31,197][10611] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-02 11:51:31,197][10611] Initializing actor-critic model on device cuda:0
[2023-03-02 11:51:31,198][10611] RunningMeanStd input shape: (3, 72, 128)
[2023-03-02 11:51:31,199][10611] RunningMeanStd input shape: (1,)
[2023-03-02 11:51:31,212][10611] ConvEncoder: input_channels=3
[2023-03-02 11:51:31,450][10611] Conv encoder output size: 512
[2023-03-02 11:51:31,451][10611] Policy head output size: 512
[2023-03-02 11:51:31,463][10611] Created Actor Critic model with architecture:
[2023-03-02 11:51:31,463][10611] ActorCriticSharedWeights(
(obs_normalizer): ObservationNormalizer(
(running_mean_std): RunningMeanStdDictInPlace(
(running_mean_std): ModuleDict(
(obs): RunningMeanStdInPlace()
)
)
)
(returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
(encoder): VizdoomEncoder(
(basic_encoder): ConvEncoder(
(enc): RecursiveScriptModule(
original_name=ConvEncoderImpl
(conv_head): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Conv2d)
(1): RecursiveScriptModule(original_name=ELU)
(2): RecursiveScriptModule(original_name=Conv2d)
(3): RecursiveScriptModule(original_name=ELU)
(4): RecursiveScriptModule(original_name=Conv2d)
(5): RecursiveScriptModule(original_name=ELU)
)
(mlp_layers): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Linear)
(1): RecursiveScriptModule(original_name=ELU)
)
)
)
)
(core): ModelCoreRNN(
(core): GRU(512, 512)
)
(decoder): MlpDecoder(
(mlp): Identity()
)
(critic_linear): Linear(in_features=512, out_features=1, bias=True)
(action_parameterization): ActionParameterizationDefault(
(distribution_linear): Linear(in_features=512, out_features=5, bias=True)
)
)
[2023-03-02 11:51:32,207][10641] Worker 1 uses CPU cores [2, 3]
[2023-03-02 11:51:32,226][10640] Worker 0 uses CPU cores [0, 1]
[2023-03-02 11:51:32,280][10639] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-02 11:51:32,280][10639] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2023-03-02 11:51:32,287][10644] Worker 4 uses CPU cores [8, 9]
[2023-03-02 11:51:32,287][10647] Worker 7 uses CPU cores [14, 15]
[2023-03-02 11:51:32,289][10643] Worker 3 uses CPU cores [6, 7]
[2023-03-02 11:51:32,289][10645] Worker 5 uses CPU cores [10, 11]
[2023-03-02 11:51:32,295][10639] Num visible devices: 1
[2023-03-02 11:51:32,300][10646] Worker 6 uses CPU cores [12, 13]
[2023-03-02 11:51:32,316][10642] Worker 2 uses CPU cores [4, 5]
[2023-03-02 11:51:33,451][10611] Using optimizer <class 'torch.optim.adam.Adam'>
[2023-03-02 11:51:33,451][10611] Loading state from checkpoint /home/gpu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000856_3506176.pth...
[2023-03-02 11:51:33,471][10611] Loading model from checkpoint
[2023-03-02 11:51:33,478][10611] Loaded experiment state at self.train_step=948, self.env_steps=3883008
[2023-03-02 11:51:33,478][10611] Initialized policy 0 weights for model version 948
[2023-03-02 11:51:33,481][10611] LearnerWorker_p0 finished initialization!
[2023-03-02 11:51:33,482][10611] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-03-02 11:51:33,588][10639] Unhandled exception CUDA error: invalid resource handle in evt loop inference_proc0-0_evt_loop
[2023-03-02 11:51:33,897][10553] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 3883008. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-03-02 11:51:37,481][10553] Keyboard interrupt detected in the event loop EvtLoop [Runner_EvtLoop, process=main process 10553], exiting...
[2023-03-02 11:51:37,481][10641] Stopping RolloutWorker_w1...
[2023-03-02 11:51:37,481][10645] Stopping RolloutWorker_w5...
[2023-03-02 11:51:37,481][10553] Runner profile tree view:
main_loop: 7.0007
[2023-03-02 11:51:37,481][10646] Stopping RolloutWorker_w6...
[2023-03-02 11:51:37,482][10553] Collected {0: 3883008}, FPS: 0.0
[2023-03-02 11:51:37,482][10642] Stopping RolloutWorker_w2...
[2023-03-02 11:51:37,482][10643] Stopping RolloutWorker_w3...
[2023-03-02 11:51:37,482][10645] Loop rollout_proc5_evt_loop terminating...
[2023-03-02 11:51:37,482][10644] Stopping RolloutWorker_w4...
[2023-03-02 11:51:37,482][10641] Loop rollout_proc1_evt_loop terminating...
[2023-03-02 11:51:37,482][10647] Stopping RolloutWorker_w7...
[2023-03-02 11:51:37,481][10611] Stopping Batcher_0...
[2023-03-02 11:51:37,482][10643] Loop rollout_proc3_evt_loop terminating...
[2023-03-02 11:51:37,482][10646] Loop rollout_proc6_evt_loop terminating...
[2023-03-02 11:51:37,482][10642] Loop rollout_proc2_evt_loop terminating...
[2023-03-02 11:51:37,482][10644] Loop rollout_proc4_evt_loop terminating...
[2023-03-02 11:51:37,482][10647] Loop rollout_proc7_evt_loop terminating...
[2023-03-02 11:51:37,482][10640] Stopping RolloutWorker_w0...
[2023-03-02 11:51:37,482][10611] Loop batcher_evt_loop terminating...
[2023-03-02 11:51:37,482][10640] Loop rollout_proc0_evt_loop terminating...
[2023-03-02 11:51:37,483][10611] Saving /home/gpu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000948_3883008.pth...
[2023-03-02 11:51:37,527][10611] Stopping LearnerWorker_p0...
[2023-03-02 11:51:37,527][10611] Loop learner_proc0_evt_loop terminating...
[2023-03-02 11:51:37,566][10553] Loading existing experiment configuration from /home/gpu/train_dir/default_experiment/config.json
[2023-03-02 11:51:37,566][10553] Overriding arg 'num_workers' with value 1 passed from command line
[2023-03-02 11:51:37,566][10553] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-03-02 11:51:37,566][10553] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-03-02 11:51:37,566][10553] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-03-02 11:51:37,566][10553] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-03-02 11:51:37,567][10553] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
[2023-03-02 11:51:37,567][10553] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-03-02 11:51:37,567][10553] Adding new argument 'push_to_hub'=False that is not in the saved config file!
[2023-03-02 11:51:37,567][10553] Adding new argument 'hf_repository'=None that is not in the saved config file!
[2023-03-02 11:51:37,567][10553] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-03-02 11:51:37,567][10553] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-03-02 11:51:37,567][10553] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-03-02 11:51:37,567][10553] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-03-02 11:51:37,567][10553] Using frameskip 1 and render_action_repeat=4 for evaluation
[2023-03-02 11:51:37,577][10553] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-03-02 11:51:37,577][10553] RunningMeanStd input shape: (3, 72, 128)
[2023-03-02 11:51:37,578][10553] RunningMeanStd input shape: (1,)
[2023-03-02 11:51:37,589][10553] ConvEncoder: input_channels=3
[2023-03-02 11:51:37,800][10553] Conv encoder output size: 512
[2023-03-02 11:51:37,800][10553] Policy head output size: 512
[2023-03-02 11:51:39,438][10553] Loading state from checkpoint /home/gpu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000948_3883008.pth...
[2023-03-02 11:51:40,207][10553] Num frames 100...
[2023-03-02 11:51:40,334][10553] Num frames 200...
[2023-03-02 11:51:40,460][10553] Num frames 300...
[2023-03-02 11:51:40,615][10553] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
[2023-03-02 11:51:40,615][10553] Avg episode reward: 3.840, avg true_objective: 3.840
[2023-03-02 11:51:40,634][10553] Num frames 400...
[2023-03-02 11:51:40,757][10553] Num frames 500...
[2023-03-02 11:51:40,880][10553] Num frames 600...
[2023-03-02 11:51:41,003][10553] Num frames 700...
[2023-03-02 11:51:41,127][10553] Num frames 800...
[2023-03-02 11:51:41,219][10553] Avg episode rewards: #0: 4.660, true rewards: #0: 4.160
[2023-03-02 11:51:41,219][10553] Avg episode reward: 4.660, avg true_objective: 4.160
[2023-03-02 11:51:41,304][10553] Num frames 900...
[2023-03-02 11:51:41,429][10553] Num frames 1000...
[2023-03-02 11:51:41,552][10553] Num frames 1100...
[2023-03-02 11:51:41,677][10553] Num frames 1200...
[2023-03-02 11:51:41,749][10553] Avg episode rewards: #0: 4.387, true rewards: #0: 4.053
[2023-03-02 11:51:41,749][10553] Avg episode reward: 4.387, avg true_objective: 4.053
[2023-03-02 11:51:41,854][10553] Num frames 1300...
[2023-03-02 11:51:41,976][10553] Num frames 1400...
[2023-03-02 11:51:42,100][10553] Num frames 1500...
[2023-03-02 11:51:42,222][10553] Num frames 1600...
[2023-03-02 11:51:42,355][10553] Avg episode rewards: #0: 4.660, true rewards: #0: 4.160
[2023-03-02 11:51:42,356][10553] Avg episode reward: 4.660, avg true_objective: 4.160
[2023-03-02 11:51:42,402][10553] Num frames 1700...
[2023-03-02 11:51:42,527][10553] Num frames 1800...
[2023-03-02 11:51:42,654][10553] Num frames 1900...
[2023-03-02 11:51:42,778][10553] Num frames 2000...
[2023-03-02 11:51:42,869][10553] Avg episode rewards: #0: 4.458, true rewards: #0: 4.058
[2023-03-02 11:51:42,869][10553] Avg episode reward: 4.458, avg true_objective: 4.058
[2023-03-02 11:51:42,960][10553] Num frames 2100...
[2023-03-02 11:51:43,085][10553] Num frames 2200...
[2023-03-02 11:51:43,213][10553] Num frames 2300...
[2023-03-02 11:51:43,341][10553] Num frames 2400...
[2023-03-02 11:51:43,489][10553] Avg episode rewards: #0: 4.628, true rewards: #0: 4.128
[2023-03-02 11:51:43,490][10553] Avg episode reward: 4.628, avg true_objective: 4.128
[2023-03-02 11:51:43,517][10553] Num frames 2500...
[2023-03-02 11:51:43,642][10553] Num frames 2600...
[2023-03-02 11:51:43,766][10553] Num frames 2700...
[2023-03-02 11:51:43,890][10553] Num frames 2800...
[2023-03-02 11:51:44,014][10553] Num frames 2900...
[2023-03-02 11:51:44,193][10553] Avg episode rewards: #0: 5.127, true rewards: #0: 4.270
[2023-03-02 11:51:44,193][10553] Avg episode reward: 5.127, avg true_objective: 4.270
[2023-03-02 11:51:44,207][10553] Num frames 3000...
[2023-03-02 11:51:44,336][10553] Num frames 3100...
[2023-03-02 11:51:44,460][10553] Num frames 3200...
[2023-03-02 11:51:44,582][10553] Num frames 3300...
[2023-03-02 11:51:44,726][10553] Avg episode rewards: #0: 4.966, true rewards: #0: 4.216
[2023-03-02 11:51:44,726][10553] Avg episode reward: 4.966, avg true_objective: 4.216
[2023-03-02 11:51:44,764][10553] Num frames 3400...
[2023-03-02 11:51:44,910][10553] Num frames 3500...
[2023-03-02 11:51:45,053][10553] Num frames 3600...
[2023-03-02 11:51:45,198][10553] Num frames 3700...
[2023-03-02 11:51:45,350][10553] Num frames 3800...
[2023-03-02 11:51:45,438][10553] Avg episode rewards: #0: 5.023, true rewards: #0: 4.246
[2023-03-02 11:51:45,438][10553] Avg episode reward: 5.023, avg true_objective: 4.246
[2023-03-02 11:51:45,553][10553] Num frames 3900...
[2023-03-02 11:51:45,697][10553] Num frames 4000...
[2023-03-02 11:51:45,846][10553] Num frames 4100...
[2023-03-02 11:51:46,005][10553] Num frames 4200...
[2023-03-02 11:51:46,068][10553] Avg episode rewards: #0: 4.905, true rewards: #0: 4.205
[2023-03-02 11:51:46,068][10553] Avg episode reward: 4.905, avg true_objective: 4.205
[2023-03-02 11:51:50,116][10553] Replay video saved to /home/gpu/train_dir/default_experiment/replay.mp4!
[2023-03-02 11:52:10,809][10553] Loading existing experiment configuration from /home/gpu/train_dir/default_experiment/config.json
[2023-03-02 11:52:10,809][10553] Overriding arg 'num_workers' with value 1 passed from command line
[2023-03-02 11:52:10,809][10553] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-03-02 11:52:10,809][10553] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-03-02 11:52:10,809][10553] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-03-02 11:52:10,809][10553] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-03-02 11:52:10,809][10553] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2023-03-02 11:52:10,810][10553] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-03-02 11:52:10,810][10553] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2023-03-02 11:52:10,810][10553] Adding new argument 'hf_repository'='nhiro3303/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2023-03-02 11:52:10,810][10553] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-03-02 11:52:10,810][10553] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-03-02 11:52:10,810][10553] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-03-02 11:52:10,810][10553] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-03-02 11:52:10,810][10553] Using frameskip 1 and render_action_repeat=4 for evaluation
[2023-03-02 11:52:10,813][10553] RunningMeanStd input shape: (3, 72, 128)
[2023-03-02 11:52:10,814][10553] RunningMeanStd input shape: (1,)
[2023-03-02 11:52:10,821][10553] ConvEncoder: input_channels=3
[2023-03-02 11:52:10,844][10553] Conv encoder output size: 512
[2023-03-02 11:52:10,845][10553] Policy head output size: 512
[2023-03-02 11:52:10,864][10553] Loading state from checkpoint /home/gpu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000948_3883008.pth...
[2023-03-02 11:52:11,229][10553] Num frames 100...
[2023-03-02 11:52:11,422][10553] Num frames 200...
[2023-03-02 11:52:11,622][10553] Num frames 300...
[2023-03-02 11:52:11,837][10553] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
[2023-03-02 11:52:11,837][10553] Avg episode reward: 3.840, avg true_objective: 3.840
[2023-03-02 11:52:11,873][10553] Num frames 400...
[2023-03-02 11:52:12,065][10553] Num frames 500...
[2023-03-02 11:52:12,256][10553] Num frames 600...
[2023-03-02 11:52:12,452][10553] Num frames 700...
[2023-03-02 11:52:12,641][10553] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
[2023-03-02 11:52:12,641][10553] Avg episode reward: 3.840, avg true_objective: 3.840
[2023-03-02 11:52:12,710][10553] Num frames 800...
[2023-03-02 11:52:12,902][10553] Num frames 900...
[2023-03-02 11:52:13,095][10553] Num frames 1000...
[2023-03-02 11:52:13,289][10553] Num frames 1100...
[2023-03-02 11:52:13,456][10553] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
[2023-03-02 11:52:13,456][10553] Avg episode reward: 3.840, avg true_objective: 3.840
[2023-03-02 11:52:13,557][10553] Num frames 1200...
[2023-03-02 11:52:13,745][10553] Num frames 1300...
[2023-03-02 11:52:13,935][10553] Num frames 1400...
[2023-03-02 11:52:14,136][10553] Num frames 1500...
[2023-03-02 11:52:14,270][10553] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
[2023-03-02 11:52:14,271][10553] Avg episode reward: 3.840, avg true_objective: 3.840
[2023-03-02 11:52:14,406][10553] Num frames 1600...
[2023-03-02 11:52:14,609][10553] Num frames 1700...
[2023-03-02 11:52:14,802][10553] Num frames 1800...
[2023-03-02 11:52:14,994][10553] Num frames 1900...
[2023-03-02 11:52:15,091][10553] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
[2023-03-02 11:52:15,091][10553] Avg episode reward: 3.840, avg true_objective: 3.840
[2023-03-02 11:52:15,246][10553] Num frames 2000...
[2023-03-02 11:52:15,446][10553] Num frames 2100...
[2023-03-02 11:52:15,640][10553] Num frames 2200...
[2023-03-02 11:52:15,835][10553] Num frames 2300...
[2023-03-02 11:52:15,895][10553] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
[2023-03-02 11:52:15,896][10553] Avg episode reward: 3.840, avg true_objective: 3.840
[2023-03-02 11:52:16,085][10553] Num frames 2400...
[2023-03-02 11:52:16,278][10553] Num frames 2500...
[2023-03-02 11:52:16,466][10553] Num frames 2600...
[2023-03-02 11:52:16,654][10553] Num frames 2700...
[2023-03-02 11:52:16,812][10553] Avg episode rewards: #0: 4.074, true rewards: #0: 3.931
[2023-03-02 11:52:16,813][10553] Avg episode reward: 4.074, avg true_objective: 3.931
[2023-03-02 11:52:16,907][10553] Num frames 2800...
[2023-03-02 11:52:17,096][10553] Num frames 2900...
[2023-03-02 11:52:17,285][10553] Num frames 3000...
[2023-03-02 11:52:17,478][10553] Num frames 3100...
[2023-03-02 11:52:17,609][10553] Avg episode rewards: #0: 4.045, true rewards: #0: 3.920
[2023-03-02 11:52:17,609][10553] Avg episode reward: 4.045, avg true_objective: 3.920
[2023-03-02 11:52:17,737][10553] Num frames 3200...
[2023-03-02 11:52:17,927][10553] Num frames 3300...
[2023-03-02 11:52:18,122][10553] Num frames 3400...
[2023-03-02 11:52:18,317][10553] Num frames 3500...
[2023-03-02 11:52:18,538][10553] Avg episode rewards: #0: 4.204, true rewards: #0: 3.982
[2023-03-02 11:52:18,538][10553] Avg episode reward: 4.204, avg true_objective: 3.982
[2023-03-02 11:52:18,577][10553] Num frames 3600...
[2023-03-02 11:52:18,769][10553] Num frames 3700...
[2023-03-02 11:52:18,965][10553] Num frames 3800...
[2023-03-02 11:52:19,169][10553] Num frames 3900...
[2023-03-02 11:52:19,370][10553] Num frames 4000...
[2023-03-02 11:52:19,493][10553] Avg episode rewards: #0: 4.332, true rewards: #0: 4.032
[2023-03-02 11:52:19,494][10553] Avg episode reward: 4.332, avg true_objective: 4.032
[2023-03-02 11:52:23,407][10553] Replay video saved to /home/gpu/train_dir/default_experiment/replay.mp4!