[2023-07-08 20:44:45,202][17004] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-08 20:44:45,221][17004] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2023-07-08 20:44:45,272][17004] Num visible devices: 1
[2023-07-08 20:44:45,422][17004] Setting fixed seed 42
[2023-07-08 20:44:45,422][17004] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-08 20:44:45,422][17004] Initializing actor-critic model on device cuda:0
[2023-07-08 20:44:45,423][17004] RunningMeanStd input shape: (3, 72, 128)
[2023-07-08 20:44:45,423][17004] RunningMeanStd input shape: (1,)
[2023-07-08 20:44:45,429][17004] ConvEncoder: input_channels=3
[2023-07-08 20:44:45,842][17004] Conv encoder output size: 512
[2023-07-08 20:44:45,842][17004] Policy head output size: 512
[2023-07-08 20:44:45,888][17004] Created Actor Critic model with architecture:
[2023-07-08 20:44:45,888][17004] ActorCriticSharedWeights(
(obs_normalizer): ObservationNormalizer(
(running_mean_std): RunningMeanStdDictInPlace(
(running_mean_std): ModuleDict(
(obs): RunningMeanStdInPlace()
)
)
)
(returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
(encoder): VizdoomEncoder(
(basic_encoder): ConvEncoder(
(enc): RecursiveScriptModule(
original_name=ConvEncoderImpl
(conv_head): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Conv2d)
(1): RecursiveScriptModule(original_name=ReLU)
(2): RecursiveScriptModule(original_name=Conv2d)
(3): RecursiveScriptModule(original_name=ReLU)
(4): RecursiveScriptModule(original_name=Conv2d)
(5): RecursiveScriptModule(original_name=ReLU)
)
(mlp_layers): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Linear)
(1): RecursiveScriptModule(original_name=ReLU)
)
)
)
)
(core): ModelCoreRNN(
(core): LSTM(512, 512)
)
(decoder): MlpDecoder(
(mlp): Identity()
)
(critic_linear): Linear(in_features=512, out_features=1, bias=True)
(action_parameterization): ActionParameterizationDefault(
(distribution_linear): Linear(in_features=512, out_features=5, bias=True)
)
)
[2023-07-08 20:44:46,131][17028] Worker 2 uses CPU cores [2]
[2023-07-08 20:44:46,218][17025] Worker 0 uses CPU cores [0]
[2023-07-08 20:44:46,362][17024] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-08 20:44:46,362][17024] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2023-07-08 20:44:46,378][17040] Worker 15 uses CPU cores [3]
[2023-07-08 20:44:46,419][17026] Worker 4 uses CPU cores [0]
[2023-07-08 20:44:46,426][17024] Num visible devices: 1
[2023-07-08 20:44:46,461][17027] Worker 3 uses CPU cores [3]
[2023-07-08 20:44:46,470][17029] Worker 6 uses CPU cores [2]
[2023-07-08 20:44:46,484][17042] Worker 17 uses CPU cores [1]
[2023-07-08 20:44:46,539][17038] Worker 13 uses CPU cores [1]
[2023-07-08 20:44:46,540][17031] Worker 1 uses CPU cores [1]
[2023-07-08 20:44:46,598][17043] Worker 18 uses CPU cores [2]
[2023-07-08 20:44:46,600][17035] Worker 9 uses CPU cores [1]
[2023-07-08 20:44:46,625][17030] Worker 7 uses CPU cores [3]
[2023-07-08 20:44:46,633][17034] Worker 16 uses CPU cores [0]
[2023-07-08 20:44:46,679][17041] Worker 10 uses CPU cores [2]
[2023-07-08 20:44:46,686][17039] Worker 14 uses CPU cores [2]
[2023-07-08 20:44:46,691][17037] Worker 12 uses CPU cores [0]
[2023-07-08 20:44:46,701][17044] Worker 19 uses CPU cores [3]
[2023-07-08 20:44:46,722][17036] Worker 11 uses CPU cores [3]
[2023-07-08 20:44:46,782][17032] Worker 5 uses CPU cores [1]
[2023-07-08 20:44:46,870][17033] Worker 8 uses CPU cores [0]
[2023-07-08 20:44:48,363][17004] Using optimizer <class 'torch.optim.adam.Adam'>
[2023-07-08 20:44:48,364][17004] No checkpoints found
[2023-07-08 20:44:48,364][17004] Did not load from checkpoint, starting from scratch!
[2023-07-08 20:44:48,364][17004] Initialized policy 0 weights for model version 0
[2023-07-08 20:44:48,367][17004] LearnerWorker_p0 finished initialization!
[2023-07-08 20:44:48,367][17004] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-08 20:44:48,523][17024] Unhandled exception CUDA error: OS call failed or operation not supported on this OS
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
in evt loop inference_proc0-0_evt_loop
[2023-07-08 20:46:38,984][17004] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 20:48:38,984][17004] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 20:50:38,984][17004] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 20:52:38,984][17004] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 20:54:38,984][17004] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 20:56:38,984][17004] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 20:58:38,984][17004] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 20:59:38,986][17004] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 20:59:38,987][17044] Stopping RolloutWorker_w19...
[2023-07-08 20:59:38,987][17044] Loop rollout_proc19_evt_loop terminating...
[2023-07-08 20:59:38,987][17025] Stopping RolloutWorker_w0...
[2023-07-08 20:59:38,987][17025] Loop rollout_proc0_evt_loop terminating...
[2023-07-08 20:59:38,988][17026] Stopping RolloutWorker_w4...
[2023-07-08 20:59:38,986][17027] Stopping RolloutWorker_w3...
[2023-07-08 20:59:38,988][17026] Loop rollout_proc4_evt_loop terminating...
[2023-07-08 20:59:38,988][17036] Stopping RolloutWorker_w11...
[2023-07-08 20:59:38,988][17027] Loop rollout_proc3_evt_loop terminating...
[2023-07-08 20:59:38,988][17030] Stopping RolloutWorker_w7...
[2023-07-08 20:59:38,989][17030] Loop rollout_proc7_evt_loop terminating...
[2023-07-08 20:59:38,988][17043] Stopping RolloutWorker_w18...
[2023-07-08 20:59:38,986][17040] Stopping RolloutWorker_w15...
[2023-07-08 20:59:38,988][17039] Stopping RolloutWorker_w14...
[2023-07-08 20:59:38,989][17036] Loop rollout_proc11_evt_loop terminating...
[2023-07-08 20:59:38,988][17041] Stopping RolloutWorker_w10...
[2023-07-08 20:59:38,990][17040] Loop rollout_proc15_evt_loop terminating...
[2023-07-08 20:59:38,989][17029] Stopping RolloutWorker_w6...
[2023-07-08 20:59:38,989][17028] Stopping RolloutWorker_w2...
[2023-07-08 20:59:38,989][17043] Loop rollout_proc18_evt_loop terminating...
[2023-07-08 20:59:38,990][17041] Loop rollout_proc10_evt_loop terminating...
[2023-07-08 20:59:38,992][17037] Stopping RolloutWorker_w12...
[2023-07-08 20:59:38,992][17035] Stopping RolloutWorker_w9...
[2023-07-08 20:59:38,990][17039] Loop rollout_proc14_evt_loop terminating...
[2023-07-08 20:59:38,992][17037] Loop rollout_proc12_evt_loop terminating...
[2023-07-08 20:59:38,992][17035] Loop rollout_proc9_evt_loop terminating...
[2023-07-08 20:59:38,990][17029] Loop rollout_proc6_evt_loop terminating...
[2023-07-08 20:59:38,990][17028] Loop rollout_proc2_evt_loop terminating...
[2023-07-08 20:59:39,001][17034] Stopping RolloutWorker_w16...
[2023-07-08 20:59:39,001][17034] Loop rollout_proc16_evt_loop terminating...
[2023-07-08 20:59:39,002][17031] Stopping RolloutWorker_w1...
[2023-07-08 20:59:39,002][17031] Loop rollout_proc1_evt_loop terminating...
[2023-07-08 20:59:38,986][17033] Stopping RolloutWorker_w8...
[2023-07-08 20:59:39,012][17042] Stopping RolloutWorker_w17...
[2023-07-08 20:59:39,012][17033] Loop rollout_proc8_evt_loop terminating...
[2023-07-08 20:59:39,012][17004] Stopping Batcher_0...
[2023-07-08 20:59:39,012][17042] Loop rollout_proc17_evt_loop terminating...
[2023-07-08 20:59:39,012][17004] Loop batcher_evt_loop terminating...
[2023-07-08 20:59:39,022][17038] Stopping RolloutWorker_w13...
[2023-07-08 20:59:39,022][17038] Loop rollout_proc13_evt_loop terminating...
[2023-07-08 20:59:39,032][17032] Stopping RolloutWorker_w5...
[2023-07-08 20:59:39,032][17032] Loop rollout_proc5_evt_loop terminating...
[2023-07-08 20:59:39,043][17004] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 20:59:39,075][17004] Stopping LearnerWorker_p0...
[2023-07-08 20:59:39,075][17004] Loop learner_proc0_evt_loop terminating...
[2023-07-08 21:13:45,642][17306] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-08 21:13:45,642][17306] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2023-07-08 21:13:45,688][17306] Num visible devices: 1
[2023-07-08 21:13:45,807][17306] Setting fixed seed 42
[2023-07-08 21:13:45,808][17306] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-08 21:13:45,808][17306] Initializing actor-critic model on device cuda:0
[2023-07-08 21:13:45,808][17306] RunningMeanStd input shape: (3, 72, 128)
[2023-07-08 21:13:45,809][17306] RunningMeanStd input shape: (1,)
[2023-07-08 21:13:45,816][17306] ConvEncoder: input_channels=3
[2023-07-08 21:13:45,919][17326] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-08 21:13:45,920][17326] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2023-07-08 21:13:45,976][17326] Num visible devices: 1
[2023-07-08 21:13:46,140][17327] Worker 0 uses CPU cores [0]
[2023-07-08 21:13:46,130][17306] Conv encoder output size: 512
[2023-07-08 21:13:46,172][17306] Policy head output size: 512
[2023-07-08 21:13:46,223][17306] Created Actor Critic model with architecture:
[2023-07-08 21:13:46,242][17306] ActorCriticSharedWeights(
(obs_normalizer): ObservationNormalizer(
(running_mean_std): RunningMeanStdDictInPlace(
(running_mean_std): ModuleDict(
(obs): RunningMeanStdInPlace()
)
)
)
(returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
(encoder): VizdoomEncoder(
(basic_encoder): ConvEncoder(
(enc): RecursiveScriptModule(
original_name=ConvEncoderImpl
(conv_head): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Conv2d)
(1): RecursiveScriptModule(original_name=ReLU)
(2): RecursiveScriptModule(original_name=Conv2d)
(3): RecursiveScriptModule(original_name=ReLU)
(4): RecursiveScriptModule(original_name=Conv2d)
(5): RecursiveScriptModule(original_name=ReLU)
)
(mlp_layers): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Linear)
(1): RecursiveScriptModule(original_name=ReLU)
)
)
)
)
(core): ModelCoreRNN(
(core): LSTM(512, 512)
)
(decoder): MlpDecoder(
(mlp): Identity()
)
(critic_linear): Linear(in_features=512, out_features=1, bias=True)
(action_parameterization): ActionParameterizationDefault(
(distribution_linear): Linear(in_features=512, out_features=5, bias=True)
)
)
[2023-07-08 21:13:46,351][17329] Worker 2 uses CPU cores [2]
[2023-07-08 21:13:46,421][17330] Worker 3 uses CPU cores [3]
[2023-07-08 21:13:46,561][17331] Worker 4 uses CPU cores [0]
[2023-07-08 21:13:46,608][17334] Worker 5 uses CPU cores [1]
[2023-07-08 21:13:46,689][17338] Worker 11 uses CPU cores [3]
[2023-07-08 21:13:46,697][17336] Worker 9 uses CPU cores [1]
[2023-07-08 21:13:46,698][17339] Worker 12 uses CPU cores [0]
[2023-07-08 21:13:46,704][17328] Worker 1 uses CPU cores [1]
[2023-07-08 21:13:46,711][17345] Worker 18 uses CPU cores [2]
[2023-07-08 21:13:46,732][17346] Worker 19 uses CPU cores [3]
[2023-07-08 21:13:46,743][17341] Worker 15 uses CPU cores [3]
[2023-07-08 21:13:46,746][17340] Worker 13 uses CPU cores [1]
[2023-07-08 21:13:46,746][17344] Worker 17 uses CPU cores [1]
[2023-07-08 21:13:46,771][17337] Worker 10 uses CPU cores [2]
[2023-07-08 21:13:46,789][17333] Worker 7 uses CPU cores [3]
[2023-07-08 21:13:46,791][17332] Worker 6 uses CPU cores [2]
[2023-07-08 21:13:46,793][17343] Worker 16 uses CPU cores [0]
[2023-07-08 21:13:46,801][17335] Worker 8 uses CPU cores [0]
[2023-07-08 21:13:46,841][17342] Worker 14 uses CPU cores [2]
[2023-07-08 21:13:47,146][17306] Using optimizer <class 'torch.optim.adam.Adam'>
[2023-07-08 21:13:47,147][17306] Loading state from checkpoint /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 21:13:47,155][17306] Loading model from checkpoint
[2023-07-08 21:13:47,156][17306] Loaded experiment state at self.train_step=0, self.env_steps=0
[2023-07-08 21:13:47,156][17306] Initialized policy 0 weights for model version 0
[2023-07-08 21:13:47,159][17306] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-08 21:13:47,161][17306] LearnerWorker_p0 finished initialization!
[2023-07-08 21:13:47,303][17326] Unhandled exception CUDA error: OS call failed or operation not supported on this OS
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
in evt loop inference_proc0-0_evt_loop
[2023-07-08 21:15:39,049][17306] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 21:17:39,050][17306] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 21:19:39,050][17306] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 21:21:39,050][17306] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 21:23:39,050][17306] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 21:25:39,050][17306] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 21:27:39,050][17306] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 21:28:39,052][17337] Stopping RolloutWorker_w10...
[2023-07-08 21:28:39,052][17306] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 21:28:39,052][17337] Loop rollout_proc10_evt_loop terminating...
[2023-07-08 21:28:39,052][17335] Stopping RolloutWorker_w8...
[2023-07-08 21:28:39,053][17339] Stopping RolloutWorker_w12...
[2023-07-08 21:28:39,054][17333] Stopping RolloutWorker_w7...
[2023-07-08 21:28:39,053][17343] Stopping RolloutWorker_w16...
[2023-07-08 21:28:39,055][17341] Stopping RolloutWorker_w15...
[2023-07-08 21:28:39,055][17343] Loop rollout_proc16_evt_loop terminating...
[2023-07-08 21:28:39,054][17346] Stopping RolloutWorker_w19...
[2023-07-08 21:28:39,055][17338] Stopping RolloutWorker_w11...
[2023-07-08 21:28:39,055][17330] Stopping RolloutWorker_w3...
[2023-07-08 21:28:39,056][17341] Loop rollout_proc15_evt_loop terminating...
[2023-07-08 21:28:39,053][17331] Stopping RolloutWorker_w4...
[2023-07-08 21:28:39,056][17333] Loop rollout_proc7_evt_loop terminating...
[2023-07-08 21:28:39,057][17331] Loop rollout_proc4_evt_loop terminating...
[2023-07-08 21:28:39,056][17338] Loop rollout_proc11_evt_loop terminating...
[2023-07-08 21:28:39,061][17345] Stopping RolloutWorker_w18...
[2023-07-08 21:28:39,061][17306] Stopping Batcher_0...
[2023-07-08 21:28:39,061][17345] Loop rollout_proc18_evt_loop terminating...
[2023-07-08 21:28:39,061][17306] Loop batcher_evt_loop terminating...
[2023-07-08 21:28:39,061][17335] Loop rollout_proc8_evt_loop terminating...
[2023-07-08 21:28:39,057][17346] Loop rollout_proc19_evt_loop terminating...
[2023-07-08 21:28:39,062][17340] Stopping RolloutWorker_w13...
[2023-07-08 21:28:39,057][17330] Loop rollout_proc3_evt_loop terminating...
[2023-07-08 21:28:39,062][17340] Loop rollout_proc13_evt_loop terminating...
[2023-07-08 21:28:39,071][17329] Stopping RolloutWorker_w2...
[2023-07-08 21:28:39,071][17329] Loop rollout_proc2_evt_loop terminating...
[2023-07-08 21:28:39,052][17327] Stopping RolloutWorker_w0...
[2023-07-08 21:28:39,072][17344] Stopping RolloutWorker_w17...
[2023-07-08 21:28:39,072][17327] Loop rollout_proc0_evt_loop terminating...
[2023-07-08 21:28:39,072][17344] Loop rollout_proc17_evt_loop terminating...
[2023-07-08 21:28:39,081][17332] Stopping RolloutWorker_w6...
[2023-07-08 21:28:39,081][17332] Loop rollout_proc6_evt_loop terminating...
[2023-07-08 21:28:39,071][17339] Loop rollout_proc12_evt_loop terminating...
[2023-07-08 21:28:39,082][17328] Stopping RolloutWorker_w1...
[2023-07-08 21:28:39,082][17328] Loop rollout_proc1_evt_loop terminating...
[2023-07-08 21:28:39,091][17342] Stopping RolloutWorker_w14...
[2023-07-08 21:28:39,091][17342] Loop rollout_proc14_evt_loop terminating...
[2023-07-08 21:28:39,092][17334] Stopping RolloutWorker_w5...
[2023-07-08 21:28:39,092][17334] Loop rollout_proc5_evt_loop terminating...
[2023-07-08 21:28:39,094][17306] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 21:28:39,102][17336] Stopping RolloutWorker_w9...
[2023-07-08 21:28:39,102][17336] Loop rollout_proc9_evt_loop terminating...
[2023-07-08 21:28:39,125][17306] Stopping LearnerWorker_p0...
[2023-07-08 21:28:39,125][17306] Loop learner_proc0_evt_loop terminating...
[2023-07-08 21:39:33,872][17857] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-08 21:39:33,882][17857] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2023-07-08 21:39:33,928][17857] Num visible devices: 1
[2023-07-08 21:39:34,048][17857] Setting fixed seed 42
[2023-07-08 21:39:34,049][17857] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-08 21:39:34,049][17857] Initializing actor-critic model on device cuda:0
[2023-07-08 21:39:34,049][17857] RunningMeanStd input shape: (3, 72, 128)
[2023-07-08 21:39:34,049][17857] RunningMeanStd input shape: (1,)
[2023-07-08 21:39:34,056][17857] ConvEncoder: input_channels=3
[2023-07-08 21:39:34,442][17857] Conv encoder output size: 512
[2023-07-08 21:39:34,443][17857] Policy head output size: 512
[2023-07-08 21:39:34,452][17857] Created Actor Critic model with architecture:
[2023-07-08 21:39:34,492][17857] ActorCriticSharedWeights(
(obs_normalizer): ObservationNormalizer(
(running_mean_std): RunningMeanStdDictInPlace(
(running_mean_std): ModuleDict(
(obs): RunningMeanStdInPlace()
)
)
)
(returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
(encoder): VizdoomEncoder(
(basic_encoder): ConvEncoder(
(enc): RecursiveScriptModule(
original_name=ConvEncoderImpl
(conv_head): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Conv2d)
(1): RecursiveScriptModule(original_name=ReLU)
(2): RecursiveScriptModule(original_name=Conv2d)
(3): RecursiveScriptModule(original_name=ReLU)
(4): RecursiveScriptModule(original_name=Conv2d)
(5): RecursiveScriptModule(original_name=ReLU)
)
(mlp_layers): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Linear)
(1): RecursiveScriptModule(original_name=ReLU)
)
)
)
)
(core): ModelCoreRNN(
(core): LSTM(512, 512)
)
(decoder): MlpDecoder(
(mlp): Identity()
)
(critic_linear): Linear(in_features=512, out_features=1, bias=True)
(action_parameterization): ActionParameterizationDefault(
(distribution_linear): Linear(in_features=512, out_features=5, bias=True)
)
)
[2023-07-08 21:39:34,999][17857] Using optimizer <class 'torch.optim.adam.Adam'>
[2023-07-08 21:39:35,000][17857] Loading state from checkpoint /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 21:39:35,072][17857] Loading model from checkpoint
[2023-07-08 21:39:35,073][17857] Loaded experiment state at self.train_step=0, self.env_steps=0
[2023-07-08 21:39:35,074][17857] Initialized policy 0 weights for model version 0
[2023-07-08 21:39:35,084][17857] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-08 21:39:35,092][17857] LearnerWorker_p0 finished initialization!
[2023-07-08 21:39:35,100][17884] Worker 6 uses CPU cores [2]
[2023-07-08 21:39:35,221][17877] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-08 21:39:35,222][17877] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2023-07-08 21:39:35,223][17878] Worker 0 uses CPU cores [0]
[2023-07-08 21:39:35,231][17879] Worker 1 uses CPU cores [1]
[2023-07-08 21:39:35,286][17877] Num visible devices: 1
[2023-07-08 21:39:35,371][17880] Worker 2 uses CPU cores [2]
[2023-07-08 21:39:35,398][17886] Worker 8 uses CPU cores [0]
[2023-07-08 21:39:35,408][17888] Worker 10 uses CPU cores [2]
[2023-07-08 21:39:35,492][17885] Worker 7 uses CPU cores [3]
[2023-07-08 21:39:35,492][17891] Worker 12 uses CPU cores [0]
[2023-07-08 21:39:35,511][17887] Worker 9 uses CPU cores [1]
[2023-07-08 21:39:35,532][17881] Worker 4 uses CPU cores [0]
[2023-07-08 21:39:35,541][17882] Worker 3 uses CPU cores [3]
[2023-07-08 21:39:35,576][17893] Worker 15 uses CPU cores [3]
[2023-07-08 21:39:35,601][17890] Worker 13 uses CPU cores [1]
[2023-07-08 21:39:35,613][17892] Worker 14 uses CPU cores [2]
[2023-07-08 21:39:35,621][17894] Worker 16 uses CPU cores [0]
[2023-07-08 21:39:35,624][17895] Worker 17 uses CPU cores [1]
[2023-07-08 21:39:35,672][17896] Worker 19 uses CPU cores [3]
[2023-07-08 21:39:35,679][17883] Worker 5 uses CPU cores [1]
[2023-07-08 21:39:35,686][17897] Worker 18 uses CPU cores [2]
[2023-07-08 21:39:35,725][17889] Worker 11 uses CPU cores [3]
[2023-07-08 21:39:35,840][17877] Unhandled exception CUDA error: OS call failed or operation not supported on this OS
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
in evt loop inference_proc0-0_evt_loop
[2023-07-08 21:41:27,794][17857] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 21:43:27,795][17857] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 21:45:27,793][17857] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 21:47:27,793][17857] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 21:49:27,793][17857] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 21:51:27,793][17857] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 21:53:27,795][17857] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 21:54:27,795][17894] Stopping RolloutWorker_w16...
[2023-07-08 21:54:27,795][17890] Stopping RolloutWorker_w13...
[2023-07-08 21:54:27,795][17894] Loop rollout_proc16_evt_loop terminating...
[2023-07-08 21:54:27,795][17890] Loop rollout_proc13_evt_loop terminating...
[2023-07-08 21:54:27,796][17888] Stopping RolloutWorker_w10...
[2023-07-08 21:54:27,796][17888] Loop rollout_proc10_evt_loop terminating...
[2023-07-08 21:54:27,802][17879] Stopping RolloutWorker_w1...
[2023-07-08 21:54:27,802][17892] Stopping RolloutWorker_w14...
[2023-07-08 21:54:27,802][17891] Stopping RolloutWorker_w12...
[2023-07-08 21:54:27,802][17896] Stopping RolloutWorker_w19...
[2023-07-08 21:54:27,802][17879] Loop rollout_proc1_evt_loop terminating...
[2023-07-08 21:54:27,802][17892] Loop rollout_proc14_evt_loop terminating...
[2023-07-08 21:54:27,802][17891] Loop rollout_proc12_evt_loop terminating...
[2023-07-08 21:54:27,802][17896] Loop rollout_proc19_evt_loop terminating...
[2023-07-08 21:54:27,808][17897] Stopping RolloutWorker_w18...
[2023-07-08 21:54:27,808][17897] Loop rollout_proc18_evt_loop terminating...
[2023-07-08 21:54:27,812][17885] Stopping RolloutWorker_w7...
[2023-07-08 21:54:27,812][17881] Stopping RolloutWorker_w4...
[2023-07-08 21:54:27,812][17880] Stopping RolloutWorker_w2...
[2023-07-08 21:54:27,812][17885] Loop rollout_proc7_evt_loop terminating...
[2023-07-08 21:54:27,812][17895] Stopping RolloutWorker_w17...
[2023-07-08 21:54:27,812][17881] Loop rollout_proc4_evt_loop terminating...
[2023-07-08 21:54:27,812][17880] Loop rollout_proc2_evt_loop terminating...
[2023-07-08 21:54:27,812][17895] Loop rollout_proc17_evt_loop terminating...
[2023-07-08 21:54:27,818][17884] Stopping RolloutWorker_w6...
[2023-07-08 21:54:27,819][17884] Loop rollout_proc6_evt_loop terminating...
[2023-07-08 21:54:27,822][17893] Stopping RolloutWorker_w15...
[2023-07-08 21:54:27,822][17883] Stopping RolloutWorker_w5...
[2023-07-08 21:54:27,822][17878] Stopping RolloutWorker_w0...
[2023-07-08 21:54:27,822][17893] Loop rollout_proc15_evt_loop terminating...
[2023-07-08 21:54:27,822][17883] Loop rollout_proc5_evt_loop terminating...
[2023-07-08 21:54:27,822][17878] Loop rollout_proc0_evt_loop terminating...
[2023-07-08 21:54:27,832][17882] Stopping RolloutWorker_w3...
[2023-07-08 21:54:27,832][17882] Loop rollout_proc3_evt_loop terminating...
[2023-07-08 21:54:27,832][17857] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 21:54:27,833][17886] Stopping RolloutWorker_w8...
[2023-07-08 21:54:27,833][17886] Loop rollout_proc8_evt_loop terminating...
[2023-07-08 21:54:27,833][17887] Stopping RolloutWorker_w9...
[2023-07-08 21:54:27,833][17887] Loop rollout_proc9_evt_loop terminating...
[2023-07-08 21:54:27,839][17857] Stopping Batcher_0...
[2023-07-08 21:54:27,839][17857] Loop batcher_evt_loop terminating...
[2023-07-08 21:54:27,842][17889] Stopping RolloutWorker_w11...
[2023-07-08 21:54:27,842][17889] Loop rollout_proc11_evt_loop terminating...
[2023-07-08 21:54:27,855][17857] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 21:54:27,905][17857] Stopping LearnerWorker_p0...
[2023-07-08 21:54:27,905][17857] Loop learner_proc0_evt_loop terminating...
[2023-07-08 22:10:38,828][18235] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-08 22:10:38,832][18235] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2023-07-08 22:10:38,894][18235] Num visible devices: 1
[2023-07-08 22:10:39,062][18235] Setting fixed seed 42
[2023-07-08 22:10:39,063][18235] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-08 22:10:39,063][18235] Initializing actor-critic model on device cuda:0
[2023-07-08 22:10:39,063][18235] RunningMeanStd input shape: (3, 72, 128)
[2023-07-08 22:10:39,063][18235] RunningMeanStd input shape: (1,)
[2023-07-08 22:10:39,070][18235] ConvEncoder: input_channels=3
[2023-07-08 22:10:39,386][18235] Conv encoder output size: 512
[2023-07-08 22:10:39,428][18235] Policy head output size: 512
[2023-07-08 22:10:39,448][18235] Created Actor Critic model with architecture:
[2023-07-08 22:10:39,481][18235] ActorCriticSharedWeights(
(obs_normalizer): ObservationNormalizer(
(running_mean_std): RunningMeanStdDictInPlace(
(running_mean_std): ModuleDict(
(obs): RunningMeanStdInPlace()
)
)
)
(returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
(encoder): VizdoomEncoder(
(basic_encoder): ConvEncoder(
(enc): RecursiveScriptModule(
original_name=ConvEncoderImpl
(conv_head): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Conv2d)
(1): RecursiveScriptModule(original_name=ReLU)
(2): RecursiveScriptModule(original_name=Conv2d)
(3): RecursiveScriptModule(original_name=ReLU)
(4): RecursiveScriptModule(original_name=Conv2d)
(5): RecursiveScriptModule(original_name=ReLU)
)
(mlp_layers): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Linear)
(1): RecursiveScriptModule(original_name=ReLU)
)
)
)
)
(core): ModelCoreRNN(
(core): LSTM(512, 512)
)
(decoder): MlpDecoder(
(mlp): Identity()
)
(critic_linear): Linear(in_features=512, out_features=1, bias=True)
(action_parameterization): ActionParameterizationDefault(
(distribution_linear): Linear(in_features=512, out_features=5, bias=True)
)
)
[2023-07-08 22:10:40,063][18235] Using optimizer <class 'torch.optim.adam.Adam'>
[2023-07-08 22:10:40,063][18235] Loading state from checkpoint /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000539850_4422451200.pth...
[2023-07-08 22:10:40,149][18255] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-08 22:10:40,150][18255] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2023-07-08 22:10:40,226][18255] Num visible devices: 1
[2023-07-08 22:10:40,229][18235] Loading model from checkpoint
[2023-07-08 22:10:40,231][18235] Loaded experiment state at self.train_step=539850, self.env_steps=4422451200
[2023-07-08 22:10:40,232][18235] Initialized policy 0 weights for model version 539850
[2023-07-08 22:10:40,252][18235] LearnerWorker_p0 finished initialization!
[2023-07-08 22:10:40,253][18235] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-08 22:10:40,461][18260] Worker 3 uses CPU cores [3]
[2023-07-08 22:10:40,672][18256] Worker 0 uses CPU cores [0]
[2023-07-08 22:10:40,741][18261] Worker 5 uses CPU cores [1]
[2023-07-08 22:10:40,772][18257] Worker 1 uses CPU cores [1]
[2023-07-08 22:10:40,774][18258] Worker 2 uses CPU cores [2]
[2023-07-08 22:10:40,917][18263] Worker 7 uses CPU cores [3]
[2023-07-08 22:10:40,931][18264] Worker 8 uses CPU cores [0]
[2023-07-08 22:10:40,941][18269] Worker 13 uses CPU cores [1]
[2023-07-08 22:10:40,977][18259] Worker 4 uses CPU cores [0]
[2023-07-08 22:10:41,112][18274] Worker 18 uses CPU cores [2]
[2023-07-08 22:10:41,119][18272] Worker 16 uses CPU cores [0]
[2023-07-08 22:10:41,137][18262] Worker 6 uses CPU cores [2]
[2023-07-08 22:10:41,156][18270] Worker 14 uses CPU cores [2]
[2023-07-08 22:10:41,172][18266] Worker 11 uses CPU cores [3]
[2023-07-08 22:10:41,191][18275] Worker 19 uses CPU cores [3]
[2023-07-08 22:10:41,201][18268] Worker 9 uses CPU cores [1]
[2023-07-08 22:10:41,214][18273] Worker 17 uses CPU cores [1]
[2023-07-08 22:10:41,218][18267] Worker 12 uses CPU cores [0]
[2023-07-08 22:10:41,218][18271] Worker 15 uses CPU cores [3]
[2023-07-08 22:10:41,227][18265] Worker 10 uses CPU cores [2]
[2023-07-08 22:10:41,377][18255] Unhandled exception CUDA error: OS call failed or operation not supported on this OS
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
in evt loop inference_proc0-0_evt_loop
[2023-07-08 22:12:33,306][18235] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000539850_4422451200.pth...
[2023-07-08 22:14:33,308][18235] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000539850_4422451200.pth...
[2023-07-08 22:16:33,307][18235] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000539850_4422451200.pth...
[2023-07-08 22:18:33,307][18235] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000539850_4422451200.pth...
[2023-07-08 22:20:33,307][18235] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000539850_4422451200.pth...
[2023-07-08 22:22:33,307][18235] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000539850_4422451200.pth...
[2023-07-08 22:24:33,306][18235] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000539850_4422451200.pth...
[2023-07-08 22:25:33,309][18274] Stopping RolloutWorker_w18...
[2023-07-08 22:25:33,308][18264] Stopping RolloutWorker_w8...
[2023-07-08 22:25:33,309][18264] Loop rollout_proc8_evt_loop terminating...
[2023-07-08 22:25:33,309][18235] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000539850_4422451200.pth...
[2023-07-08 22:25:33,309][18267] Stopping RolloutWorker_w12...
[2023-07-08 22:25:33,309][18267] Loop rollout_proc12_evt_loop terminating...
[2023-07-08 22:25:33,308][18270] Stopping RolloutWorker_w14...
[2023-07-08 22:25:33,310][18258] Stopping RolloutWorker_w2...
[2023-07-08 22:25:33,309][18262] Stopping RolloutWorker_w6...
[2023-07-08 22:25:33,311][18266] Stopping RolloutWorker_w11...
[2023-07-08 22:25:33,309][18265] Stopping RolloutWorker_w10...
[2023-07-08 22:25:33,311][18266] Loop rollout_proc11_evt_loop terminating...
[2023-07-08 22:25:33,310][18270] Loop rollout_proc14_evt_loop terminating...
[2023-07-08 22:25:33,312][18260] Stopping RolloutWorker_w3...
[2023-07-08 22:25:33,312][18257] Stopping RolloutWorker_w1...
[2023-07-08 22:25:33,312][18260] Loop rollout_proc3_evt_loop terminating...
[2023-07-08 22:25:33,312][18257] Loop rollout_proc1_evt_loop terminating...
[2023-07-08 22:25:33,310][18258] Loop rollout_proc2_evt_loop terminating...
[2023-07-08 22:25:33,316][18275] Stopping RolloutWorker_w19...
[2023-07-08 22:25:33,316][18275] Loop rollout_proc19_evt_loop terminating...
[2023-07-08 22:25:33,311][18262] Loop rollout_proc6_evt_loop terminating...
[2023-07-08 22:25:33,311][18274] Loop rollout_proc18_evt_loop terminating...
[2023-07-08 22:25:33,321][18271] Stopping RolloutWorker_w15...
[2023-07-08 22:25:33,311][18265] Loop rollout_proc10_evt_loop terminating...
[2023-07-08 22:25:33,321][18271] Loop rollout_proc15_evt_loop terminating...
[2023-07-08 22:25:33,322][18268] Stopping RolloutWorker_w9...
[2023-07-08 22:25:33,322][18263] Stopping RolloutWorker_w7...
[2023-07-08 22:25:33,322][18256] Stopping RolloutWorker_w0...
[2023-07-08 22:25:33,322][18263] Loop rollout_proc7_evt_loop terminating...
[2023-07-08 22:25:33,322][18268] Loop rollout_proc9_evt_loop terminating...
[2023-07-08 22:25:33,322][18256] Loop rollout_proc0_evt_loop terminating...
[2023-07-08 22:25:33,326][18272] Stopping RolloutWorker_w16...
[2023-07-08 22:25:33,326][18272] Loop rollout_proc16_evt_loop terminating...
[2023-07-08 22:25:33,332][18273] Stopping RolloutWorker_w17...
[2023-07-08 22:25:33,332][18273] Loop rollout_proc17_evt_loop terminating...
[2023-07-08 22:25:33,332][18259] Stopping RolloutWorker_w4...
[2023-07-08 22:25:33,332][18259] Loop rollout_proc4_evt_loop terminating...
[2023-07-08 22:25:33,342][18261] Stopping RolloutWorker_w5...
[2023-07-08 22:25:33,342][18261] Loop rollout_proc5_evt_loop terminating...
[2023-07-08 22:25:33,347][18235] Stopping Batcher_0...
[2023-07-08 22:25:33,348][18235] Loop batcher_evt_loop terminating...
[2023-07-08 22:25:33,352][18269] Stopping RolloutWorker_w13...
[2023-07-08 22:25:33,352][18269] Loop rollout_proc13_evt_loop terminating...
[2023-07-08 22:25:33,428][18235] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000539850_4422451200.pth...
[2023-07-08 22:25:33,539][18235] Stopping LearnerWorker_p0...
[2023-07-08 22:25:33,539][18235] Loop learner_proc0_evt_loop terminating...
[2023-07-08 22:34:04,498][18621] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-08 22:34:04,498][18621] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2023-07-08 22:34:04,582][18621] Num visible devices: 1
[2023-07-08 22:34:04,692][18621] Setting fixed seed 42
[2023-07-08 22:34:04,692][18641] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-08 22:34:04,692][18641] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2023-07-08 22:34:04,692][18621] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-08 22:34:04,693][18621] Initializing actor-critic model on device cuda:0
[2023-07-08 22:34:04,693][18621] RunningMeanStd input shape: (3, 72, 128)
[2023-07-08 22:34:04,694][18621] RunningMeanStd input shape: (1,)
[2023-07-08 22:34:04,700][18621] ConvEncoder: input_channels=3
[2023-07-08 22:34:04,744][18641] Num visible devices: 1
[2023-07-08 22:34:04,771][18645] Worker 4 uses CPU cores [0]
[2023-07-08 22:34:04,781][18646] Worker 5 uses CPU cores [1]
[2023-07-08 22:34:04,822][18643] Worker 0 uses CPU cores [0]
[2023-07-08 22:34:04,872][18642] Worker 1 uses CPU cores [1]
[2023-07-08 22:34:05,084][18621] Conv encoder output size: 512
[2023-07-08 22:34:05,102][18621] Policy head output size: 512
[2023-07-08 22:34:05,142][18648] Worker 6 uses CPU cores [2]
[2023-07-08 22:34:05,152][18621] Created Actor Critic model with architecture:
[2023-07-08 22:34:05,152][18621] ActorCriticSharedWeights(
(obs_normalizer): ObservationNormalizer(
(running_mean_std): RunningMeanStdDictInPlace(
(running_mean_std): ModuleDict(
(obs): RunningMeanStdInPlace()
)
)
)
(returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
(encoder): VizdoomEncoder(
(basic_encoder): ConvEncoder(
(enc): RecursiveScriptModule(
original_name=ConvEncoderImpl
(conv_head): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Conv2d)
(1): RecursiveScriptModule(original_name=ReLU)
(2): RecursiveScriptModule(original_name=Conv2d)
(3): RecursiveScriptModule(original_name=ReLU)
(4): RecursiveScriptModule(original_name=Conv2d)
(5): RecursiveScriptModule(original_name=ReLU)
)
(mlp_layers): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Linear)
(1): RecursiveScriptModule(original_name=ReLU)
)
)
)
)
(core): ModelCoreRNN(
(core): LSTM(512, 512)
)
(decoder): MlpDecoder(
(mlp): Identity()
)
(critic_linear): Linear(in_features=512, out_features=1, bias=True)
(action_parameterization): ActionParameterizationDefault(
(distribution_linear): Linear(in_features=512, out_features=5, bias=True)
)
)
[2023-07-08 22:34:05,191][18658] Worker 16 uses CPU cores [0]
[2023-07-08 22:34:05,221][18644] Worker 2 uses CPU cores [2]
[2023-07-08 22:34:05,271][18651] Worker 9 uses CPU cores [1]
[2023-07-08 22:34:05,275][18647] Worker 3 uses CPU cores [3]
[2023-07-08 22:34:05,321][18652] Worker 10 uses CPU cores [2]
[2023-07-08 22:34:05,371][18649] Worker 7 uses CPU cores [3]
[2023-07-08 22:34:05,401][18650] Worker 8 uses CPU cores [0]
[2023-07-08 22:34:05,406][18657] Worker 15 uses CPU cores [3]
[2023-07-08 22:34:05,408][18654] Worker 12 uses CPU cores [0]
[2023-07-08 22:34:05,411][18655] Worker 13 uses CPU cores [1]
[2023-07-08 22:34:05,421][18659] Worker 17 uses CPU cores [1]
[2023-07-08 22:34:05,431][18656] Worker 14 uses CPU cores [2]
[2023-07-08 22:34:05,431][18653] Worker 11 uses CPU cores [3]
[2023-07-08 22:34:05,531][18661] Worker 19 uses CPU cores [3]
[2023-07-08 22:34:05,555][18660] Worker 18 uses CPU cores [2]
[2023-07-08 22:34:05,639][18621] Using optimizer <class 'torch.optim.adam.Adam'>
[2023-07-08 22:34:05,639][18621] No checkpoints found
[2023-07-08 22:34:05,640][18621] Did not load from checkpoint, starting from scratch!
[2023-07-08 22:34:05,640][18621] Initialized policy 0 weights for model version 0
[2023-07-08 22:34:05,642][18621] LearnerWorker_p0 finished initialization!
[2023-07-08 22:34:05,642][18621] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-08 22:34:05,765][18641] Unhandled exception CUDA error: OS call failed or operation not supported on this OS
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
in evt loop inference_proc0-0_evt_loop
[2023-07-08 22:35:57,062][18621] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 22:37:57,063][18621] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 22:39:57,063][18621] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 22:41:57,063][18621] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 22:43:57,063][18621] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 22:45:57,063][18621] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 22:47:57,063][18621] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 22:48:57,065][18652] Stopping RolloutWorker_w10...
[2023-07-08 22:48:57,065][18621] Stopping Batcher_0...
[2023-07-08 22:48:57,065][18621] Loop batcher_evt_loop terminating...
[2023-07-08 22:48:57,065][18652] Loop rollout_proc10_evt_loop terminating...
[2023-07-08 22:48:57,065][18661] Stopping RolloutWorker_w19...
[2023-07-08 22:48:57,065][18661] Loop rollout_proc19_evt_loop terminating...
[2023-07-08 22:48:57,066][18621] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 22:48:57,067][18654] Stopping RolloutWorker_w12...
[2023-07-08 22:48:57,067][18654] Loop rollout_proc12_evt_loop terminating...
[2023-07-08 22:48:57,071][18646] Stopping RolloutWorker_w5...
[2023-07-08 22:48:57,071][18648] Stopping RolloutWorker_w6...
[2023-07-08 22:48:57,072][18648] Loop rollout_proc6_evt_loop terminating...
[2023-07-08 22:48:57,072][18646] Loop rollout_proc5_evt_loop terminating...
[2023-07-08 22:48:57,072][18643] Stopping RolloutWorker_w0...
[2023-07-08 22:48:57,072][18650] Stopping RolloutWorker_w8...
[2023-07-08 22:48:57,065][18647] Stopping RolloutWorker_w3...
[2023-07-08 22:48:57,073][18650] Loop rollout_proc8_evt_loop terminating...
[2023-07-08 22:48:57,072][18649] Stopping RolloutWorker_w7...
[2023-07-08 22:48:57,072][18653] Stopping RolloutWorker_w11...
[2023-07-08 22:48:57,072][18657] Stopping RolloutWorker_w15...
[2023-07-08 22:48:57,073][18647] Loop rollout_proc3_evt_loop terminating...
[2023-07-08 22:48:57,073][18649] Loop rollout_proc7_evt_loop terminating...
[2023-07-08 22:48:57,074][18653] Loop rollout_proc11_evt_loop terminating...
[2023-07-08 22:48:57,074][18657] Loop rollout_proc15_evt_loop terminating...
[2023-07-08 22:48:57,082][18660] Stopping RolloutWorker_w18...
[2023-07-08 22:48:57,082][18642] Stopping RolloutWorker_w1...
[2023-07-08 22:48:57,082][18660] Loop rollout_proc18_evt_loop terminating...
[2023-07-08 22:48:57,082][18642] Loop rollout_proc1_evt_loop terminating...
[2023-07-08 22:48:57,083][18645] Stopping RolloutWorker_w4...
[2023-07-08 22:48:57,083][18645] Loop rollout_proc4_evt_loop terminating...
[2023-07-08 22:48:57,092][18656] Stopping RolloutWorker_w14...
[2023-07-08 22:48:57,092][18651] Stopping RolloutWorker_w9...
[2023-07-08 22:48:57,092][18651] Loop rollout_proc9_evt_loop terminating...
[2023-07-08 22:48:57,092][18656] Loop rollout_proc14_evt_loop terminating...
[2023-07-08 22:48:57,082][18658] Stopping RolloutWorker_w16...
[2023-07-08 22:48:57,095][18658] Loop rollout_proc16_evt_loop terminating...
[2023-07-08 22:48:57,098][18643] Loop rollout_proc0_evt_loop terminating...
[2023-07-08 22:48:57,102][18644] Stopping RolloutWorker_w2...
[2023-07-08 22:48:57,102][18655] Stopping RolloutWorker_w13...
[2023-07-08 22:48:57,102][18655] Loop rollout_proc13_evt_loop terminating...
[2023-07-08 22:48:57,102][18644] Loop rollout_proc2_evt_loop terminating...
[2023-07-08 22:48:57,112][18659] Stopping RolloutWorker_w17...
[2023-07-08 22:48:57,112][18659] Loop rollout_proc17_evt_loop terminating...
[2023-07-08 22:48:57,115][18621] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 22:48:57,152][18621] Stopping LearnerWorker_p0...
[2023-07-08 22:48:57,152][18621] Loop learner_proc0_evt_loop terminating...
[2023-07-08 23:01:01,233][19220] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-08 23:01:01,234][19220] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2023-07-08 23:01:01,277][19220] Num visible devices: 1
[2023-07-08 23:01:01,412][19220] Setting fixed seed 42
[2023-07-08 23:01:01,412][19220] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-08 23:01:01,412][19220] Initializing actor-critic model on device cuda:0
[2023-07-08 23:01:01,413][19220] RunningMeanStd input shape: (3, 72, 128)
[2023-07-08 23:01:01,413][19220] RunningMeanStd input shape: (1,)
[2023-07-08 23:01:01,420][19220] ConvEncoder: input_channels=3
[2023-07-08 23:01:01,502][19241] Worker 0 uses CPU cores [0]
[2023-07-08 23:01:01,550][19242] Worker 2 uses CPU cores [2]
[2023-07-08 23:01:01,609][19243] Worker 1 uses CPU cores [1]
[2023-07-08 23:01:01,648][19244] Worker 3 uses CPU cores [3]
[2023-07-08 23:01:01,649][19240] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-08 23:01:01,649][19240] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2023-07-08 23:01:01,737][19240] Num visible devices: 1
[2023-07-08 23:01:01,782][19246] Worker 4 uses CPU cores [0]
[2023-07-08 23:01:01,741][19220] Conv encoder output size: 512
[2023-07-08 23:01:01,787][19220] Policy head output size: 512
[2023-07-08 23:01:01,794][19252] Worker 11 uses CPU cores [3]
[2023-07-08 23:01:01,800][19250] Worker 9 uses CPU cores [1]
[2023-07-08 23:01:01,807][19220] Created Actor Critic model with architecture:
[2023-07-08 23:01:01,831][19248] Worker 7 uses CPU cores [3]
[2023-07-08 23:01:01,849][19249] Worker 8 uses CPU cores [0]
[2023-07-08 23:01:01,862][19251] Worker 10 uses CPU cores [2]
[2023-07-08 23:01:01,891][19220] ActorCriticSharedWeights(
(obs_normalizer): ObservationNormalizer(
(running_mean_std): RunningMeanStdDictInPlace(
(running_mean_std): ModuleDict(
(obs): RunningMeanStdInPlace()
)
)
)
(returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
(encoder): VizdoomEncoder(
(basic_encoder): ConvEncoder(
(enc): RecursiveScriptModule(
original_name=ConvEncoderImpl
(conv_head): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Conv2d)
(1): RecursiveScriptModule(original_name=ReLU)
(2): RecursiveScriptModule(original_name=Conv2d)
(3): RecursiveScriptModule(original_name=ReLU)
(4): RecursiveScriptModule(original_name=Conv2d)
(5): RecursiveScriptModule(original_name=ReLU)
)
(mlp_layers): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Linear)
(1): RecursiveScriptModule(original_name=ReLU)
)
)
)
)
(core): ModelCoreRNN(
(core): LSTM(512, 512)
)
(decoder): MlpDecoder(
(mlp): Identity()
)
(critic_linear): Linear(in_features=512, out_features=1, bias=True)
(action_parameterization): ActionParameterizationDefault(
(distribution_linear): Linear(in_features=512, out_features=5, bias=True)
)
)
[2023-07-08 23:01:01,903][19253] Worker 12 uses CPU cores [0]
[2023-07-08 23:01:01,925][19247] Worker 6 uses CPU cores [2]
[2023-07-08 23:01:01,943][19254] Worker 13 uses CPU cores [1]
[2023-07-08 23:01:01,967][19256] Worker 15 uses CPU cores [3]
[2023-07-08 23:01:01,975][19259] Worker 18 uses CPU cores [2]
[2023-07-08 23:01:01,991][19255] Worker 14 uses CPU cores [2]
[2023-07-08 23:01:02,001][19257] Worker 16 uses CPU cores [0]
[2023-07-08 23:01:02,021][19260] Worker 19 uses CPU cores [3]
[2023-07-08 23:01:02,041][19245] Worker 5 uses CPU cores [1]
[2023-07-08 23:01:02,052][19258] Worker 17 uses CPU cores [1]
[2023-07-08 23:01:02,219][19220] Using optimizer <class 'torch.optim.adam.Adam'>
[2023-07-08 23:01:02,220][19220] No checkpoints found
[2023-07-08 23:01:02,220][19220] Did not load from checkpoint, starting from scratch!
[2023-07-08 23:01:02,220][19220] Initialized policy 0 weights for model version 0
[2023-07-08 23:01:02,223][19220] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-08 23:01:02,236][19220] LearnerWorker_p0 finished initialization!
[2023-07-08 23:01:02,356][19240] Unhandled exception CUDA error: OS call failed or operation not supported on this OS
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
in evt loop inference_proc0-0_evt_loop
[2023-07-08 23:02:53,735][19220] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 23:04:53,736][19220] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 23:05:29,863][19259] Stopping RolloutWorker_w18...
[2023-07-08 23:05:29,864][19259] Loop rollout_proc18_evt_loop terminating...
[2023-07-08 23:05:29,864][19260] Stopping RolloutWorker_w19...
[2023-07-08 23:05:29,864][19260] Loop rollout_proc19_evt_loop terminating...
[2023-07-08 23:05:29,872][19252] Stopping RolloutWorker_w11...
[2023-07-08 23:05:29,872][19249] Stopping RolloutWorker_w8...
[2023-07-08 23:05:29,872][19251] Stopping RolloutWorker_w10...
[2023-07-08 23:05:29,872][19252] Loop rollout_proc11_evt_loop terminating...
[2023-07-08 23:05:29,872][19249] Loop rollout_proc8_evt_loop terminating...
[2023-07-08 23:05:29,872][19251] Loop rollout_proc10_evt_loop terminating...
[2023-07-08 23:05:29,882][19255] Stopping RolloutWorker_w14...
[2023-07-08 23:05:29,882][19248] Stopping RolloutWorker_w7...
[2023-07-08 23:05:29,882][19257] Stopping RolloutWorker_w16...
[2023-07-08 23:05:29,882][19255] Loop rollout_proc14_evt_loop terminating...
[2023-07-08 23:05:29,882][19248] Loop rollout_proc7_evt_loop terminating...
[2023-07-08 23:05:29,882][19257] Loop rollout_proc16_evt_loop terminating...
[2023-07-08 23:05:29,882][19220] Stopping Batcher_0...
[2023-07-08 23:05:29,883][19220] Loop batcher_evt_loop terminating...
[2023-07-08 23:05:29,892][19244] Stopping RolloutWorker_w3...
[2023-07-08 23:05:29,892][19241] Stopping RolloutWorker_w0...
[2023-07-08 23:05:29,892][19244] Loop rollout_proc3_evt_loop terminating...
[2023-07-08 23:05:29,892][19241] Loop rollout_proc0_evt_loop terminating...
[2023-07-08 23:05:29,892][19220] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 23:05:29,895][19256] Stopping RolloutWorker_w15...
[2023-07-08 23:05:29,895][19247] Stopping RolloutWorker_w6...
[2023-07-08 23:05:29,895][19256] Loop rollout_proc15_evt_loop terminating...
[2023-07-08 23:05:29,896][19253] Stopping RolloutWorker_w12...
[2023-07-08 23:05:29,896][19247] Loop rollout_proc6_evt_loop terminating...
[2023-07-08 23:05:29,896][19253] Loop rollout_proc12_evt_loop terminating...
[2023-07-08 23:05:29,901][19246] Stopping RolloutWorker_w4...
[2023-07-08 23:05:29,902][19246] Loop rollout_proc4_evt_loop terminating...
[2023-07-08 23:05:29,903][19242] Stopping RolloutWorker_w2...
[2023-07-08 23:05:29,905][19242] Loop rollout_proc2_evt_loop terminating...
[2023-07-08 23:05:29,956][19220] Stopping LearnerWorker_p0...
[2023-07-08 23:05:29,956][19220] Loop learner_proc0_evt_loop terminating...
[2023-07-08 23:05:30,040][19250] Stopping RolloutWorker_w9...
[2023-07-08 23:05:30,041][19250] Loop rollout_proc9_evt_loop terminating...
[2023-07-08 23:05:30,042][19254] Stopping RolloutWorker_w13...
[2023-07-08 23:05:30,042][19254] Loop rollout_proc13_evt_loop terminating...
[2023-07-08 23:05:30,052][19245] Stopping RolloutWorker_w5...
[2023-07-08 23:05:30,052][19245] Loop rollout_proc5_evt_loop terminating...
[2023-07-08 23:05:30,062][19258] Stopping RolloutWorker_w17...
[2023-07-08 23:05:30,062][19258] Loop rollout_proc17_evt_loop terminating...
[2023-07-08 23:05:30,072][19243] Stopping RolloutWorker_w1...
[2023-07-08 23:05:30,072][19243] Loop rollout_proc1_evt_loop terminating...
[2023-07-08 23:06:39,079][19475] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-08 23:06:39,079][19475] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2023-07-08 23:06:39,116][19475] Num visible devices: 1
[2023-07-08 23:06:39,240][19475] Setting fixed seed 42
[2023-07-08 23:06:39,241][19475] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-08 23:06:39,241][19475] Initializing actor-critic model on device cuda:0
[2023-07-08 23:06:39,241][19475] RunningMeanStd input shape: (3, 72, 128)
[2023-07-08 23:06:39,242][19475] RunningMeanStd input shape: (1,)
[2023-07-08 23:06:39,248][19475] ConvEncoder: input_channels=3
[2023-07-08 23:06:39,280][19499] Worker 3 uses CPU cores [3]
[2023-07-08 23:06:39,412][19496] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-08 23:06:39,412][19496] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2023-07-08 23:06:39,470][19496] Num visible devices: 1
[2023-07-08 23:06:39,462][19500] Worker 4 uses CPU cores [0]
[2023-07-08 23:06:39,527][19495] Worker 0 uses CPU cores [0]
[2023-07-08 23:06:39,541][19497] Worker 1 uses CPU cores [1]
[2023-07-08 23:06:39,541][19502] Worker 6 uses CPU cores [2]
[2023-07-08 23:06:39,604][19475] Conv encoder output size: 512
[2023-07-08 23:06:39,605][19475] Policy head output size: 512
[2023-07-08 23:06:39,608][19507] Worker 12 uses CPU cores [0]
[2023-07-08 23:06:39,646][19475] Created Actor Critic model with architecture:
[2023-07-08 23:06:39,646][19475] ActorCriticSharedWeights(
(obs_normalizer): ObservationNormalizer(
(running_mean_std): RunningMeanStdDictInPlace(
(running_mean_std): ModuleDict(
(obs): RunningMeanStdInPlace()
)
)
)
(returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
(encoder): VizdoomEncoder(
(basic_encoder): ConvEncoder(
(enc): RecursiveScriptModule(
original_name=ConvEncoderImpl
(conv_head): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Conv2d)
(1): RecursiveScriptModule(original_name=ReLU)
(2): RecursiveScriptModule(original_name=Conv2d)
(3): RecursiveScriptModule(original_name=ReLU)
(4): RecursiveScriptModule(original_name=Conv2d)
(5): RecursiveScriptModule(original_name=ReLU)
)
(mlp_layers): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Linear)
(1): RecursiveScriptModule(original_name=ReLU)
)
)
)
)
(core): ModelCoreRNN(
(core): LSTM(512, 512)
)
(decoder): MlpDecoder(
(mlp): Identity()
)
(critic_linear): Linear(in_features=512, out_features=1, bias=True)
(action_parameterization): ActionParameterizationDefault(
(distribution_linear): Linear(in_features=512, out_features=5, bias=True)
)
)
[2023-07-08 23:06:39,648][19498] Worker 2 uses CPU cores [2]
[2023-07-08 23:06:39,753][19513] Worker 15 uses CPU cores [3]
[2023-07-08 23:06:39,774][19503] Worker 7 uses CPU cores [3]
[2023-07-08 23:06:39,781][19510] Worker 14 uses CPU cores [2]
[2023-07-08 23:06:39,800][19511] Worker 16 uses CPU cores [0]
[2023-07-08 23:06:39,801][19506] Worker 10 uses CPU cores [2]
[2023-07-08 23:06:39,804][19508] Worker 13 uses CPU cores [1]
[2023-07-08 23:06:39,805][19505] Worker 9 uses CPU cores [1]
[2023-07-08 23:06:39,811][19501] Worker 5 uses CPU cores [1]
[2023-07-08 23:06:39,821][19509] Worker 11 uses CPU cores [3]
[2023-07-08 23:06:39,888][19514] Worker 17 uses CPU cores [1]
[2023-07-08 23:06:39,899][19515] Worker 19 uses CPU cores [3]
[2023-07-08 23:06:39,901][19512] Worker 18 uses CPU cores [2]
[2023-07-08 23:06:39,968][19504] Worker 8 uses CPU cores [0]
[2023-07-08 23:06:40,230][19475] Using optimizer <class 'torch.optim.adam.Adam'>
[2023-07-08 23:06:40,230][19475] Loading state from checkpoint /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
[2023-07-08 23:06:40,254][19475] Loading model from checkpoint
[2023-07-08 23:06:40,257][19475] Loaded experiment state at self.train_step=466273, self.env_steps=3819708416
[2023-07-08 23:06:40,257][19475] Initialized policy 0 weights for model version 466273
[2023-07-08 23:06:40,260][19475] LearnerWorker_p0 finished initialization!
[2023-07-08 23:06:40,260][19475] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-08 23:06:40,402][19496] Unhandled exception CUDA error: OS call failed or operation not supported on this OS
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
in evt loop inference_proc0-0_evt_loop
[2023-07-08 23:08:32,108][19475] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000466273_3819708416.pth...
[2023-07-08 23:10:32,106][19475] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000466273_3819708416.pth...
[2023-07-08 23:12:32,106][19475] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000466273_3819708416.pth...
[2023-07-08 23:14:32,106][19475] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000466273_3819708416.pth...
[2023-07-08 23:16:32,106][19475] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000466273_3819708416.pth...
[2023-07-08 23:18:32,106][19475] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000466273_3819708416.pth...
[2023-07-08 23:20:32,106][19475] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000466273_3819708416.pth...
[2023-07-08 23:21:32,107][19507] Stopping RolloutWorker_w12...
[2023-07-08 23:21:32,107][19475] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000466273_3819708416.pth...
[2023-07-08 23:21:32,107][19513] Stopping RolloutWorker_w15...
[2023-07-08 23:21:32,107][19507] Loop rollout_proc12_evt_loop terminating...
[2023-07-08 23:21:32,108][19513] Loop rollout_proc15_evt_loop terminating...
[2023-07-08 23:21:32,108][19505] Stopping RolloutWorker_w9...
[2023-07-08 23:21:32,108][19508] Stopping RolloutWorker_w13...
[2023-07-08 23:21:32,109][19514] Stopping RolloutWorker_w17...
[2023-07-08 23:21:32,109][19501] Stopping RolloutWorker_w5...
[2023-07-08 23:21:32,109][19501] Loop rollout_proc5_evt_loop terminating...
[2023-07-08 23:21:32,109][19502] Stopping RolloutWorker_w6...
[2023-07-08 23:21:32,109][19502] Loop rollout_proc6_evt_loop terminating...
[2023-07-08 23:21:32,107][19497] Stopping RolloutWorker_w1...
[2023-07-08 23:21:32,112][19497] Loop rollout_proc1_evt_loop terminating...
[2023-07-08 23:21:32,112][19509] Stopping RolloutWorker_w11...
[2023-07-08 23:21:32,112][19495] Stopping RolloutWorker_w0...
[2023-07-08 23:21:32,112][19512] Stopping RolloutWorker_w18...
[2023-07-08 23:21:32,112][19495] Loop rollout_proc0_evt_loop terminating...
[2023-07-08 23:21:32,112][19509] Loop rollout_proc11_evt_loop terminating...
[2023-07-08 23:21:32,112][19512] Loop rollout_proc18_evt_loop terminating...
[2023-07-08 23:21:32,117][19499] Stopping RolloutWorker_w3...
[2023-07-08 23:21:32,117][19506] Stopping RolloutWorker_w10...
[2023-07-08 23:21:32,107][19515] Stopping RolloutWorker_w19...
[2023-07-08 23:21:32,116][19503] Stopping RolloutWorker_w7...
[2023-07-08 23:21:32,117][19506] Loop rollout_proc10_evt_loop terminating...
[2023-07-08 23:21:32,117][19499] Loop rollout_proc3_evt_loop terminating...
[2023-07-08 23:21:32,117][19503] Loop rollout_proc7_evt_loop terminating...
[2023-07-08 23:21:32,122][19505] Loop rollout_proc9_evt_loop terminating...
[2023-07-08 23:21:32,122][19498] Stopping RolloutWorker_w2...
[2023-07-08 23:21:32,122][19511] Stopping RolloutWorker_w16...
[2023-07-08 23:21:32,118][19515] Loop rollout_proc19_evt_loop terminating...
[2023-07-08 23:21:32,122][19511] Loop rollout_proc16_evt_loop terminating...
[2023-07-08 23:21:32,122][19498] Loop rollout_proc2_evt_loop terminating...
[2023-07-08 23:21:32,129][19510] Stopping RolloutWorker_w14...
[2023-07-08 23:21:32,129][19510] Loop rollout_proc14_evt_loop terminating...
[2023-07-08 23:21:32,132][19514] Loop rollout_proc17_evt_loop terminating...
[2023-07-08 23:21:32,132][19504] Stopping RolloutWorker_w8...
[2023-07-08 23:21:32,132][19504] Loop rollout_proc8_evt_loop terminating...
[2023-07-08 23:21:32,132][19475] Stopping Batcher_0...
[2023-07-08 23:21:32,132][19475] Loop batcher_evt_loop terminating...
[2023-07-08 23:21:32,136][19508] Loop rollout_proc13_evt_loop terminating...
[2023-07-08 23:21:32,142][19500] Stopping RolloutWorker_w4...
[2023-07-08 23:21:32,142][19500] Loop rollout_proc4_evt_loop terminating...
[2023-07-08 23:21:32,239][19475] Saving /home/raj/repos/HF-DeepRL/8-Proximal-Policy-Optimization/train_dir/default_experiment/checkpoint_p0/checkpoint_000466273_3819708416.pth...
[2023-07-08 23:21:32,373][19475] Stopping LearnerWorker_p0...
[2023-07-08 23:21:32,373][19475] Loop learner_proc0_evt_loop terminating...