---
tags:
- ChopperCommand-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: ChopperCommand-v5
      type: ChopperCommand-v5
    metrics:
    - type: mean_reward
      value: 24610.00 +/- 14101.24
      name: mean_reward
      verified: false
---

# (CleanRL) **PPO** Agent Playing **ChopperCommand-v5**

This is a trained model of a PPO agent playing ChopperCommand-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).

## Get Started

To use this model, please install the `cleanrl` package with the following command:

```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id ChopperCommand-v5
```

Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.

## Command to reproduce the training

```bash
curl -OL https://huggingface.co/cleanrl/ChopperCommand-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/ChopperCommand-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/ChopperCommand-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id ChopperCommand-v5 --seed 1
```

# Hyperparameters

```python
{'actor_device_ids': [0],
 'anneal_lr': True,
 'async_batch_size': 16,
 'async_update': 4,
 'batch_size': 8192,
 'capture_video': False,
 'clip_coef': 0.1,
 'cuda': True,
 'ent_coef': 0.01,
 'env_id': 'ChopperCommand-v5',
 'exp_name': 'sebulba_ppo_envpool',
 'gae_lambda': 0.95,
 'gamma': 0.99,
 'hf_entity': 'cleanrl',
 'learner_device_ids': [1, 2, 3, 4],
 'learning_rate': 0.00025,
 'max_grad_norm': 0.5,
 'minibatch_size': 2048,
 'norm_adv': True,
 'num_actor_threads': 1,
 'num_envs': 64,
 'num_minibatches': 4,
 'num_steps': 128,
 'num_updates': 6103,
 'params_queue_timeout': 0.02,
 'profile': False,
 'save_model': True,
 'seed': 1,
 'target_kl': None,
 'test_actor_learner_throughput': False,
 'torch_deterministic': True,
 'total_timesteps': 50000000,
 'track': True,
 'update_epochs': 4,
 'upload_model': True,
 'vf_coef': 0.5,
 'wandb_entity': None,
 'wandb_project_name': 'cleanRL'}
```
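
The derived values in the dict above (`batch_size`, `minibatch_size`, `async_update`, `num_updates`) are consistent with the base settings. The sketch below is an illustrative sanity check, assuming the usual CleanRL PPO conventions for how these quantities are computed; it is not taken from `sebulba_ppo_envpool.py` itself.

```python
# Illustrative check of the derived hyperparameters, assuming standard
# CleanRL-style PPO bookkeeping (an assumption, not the training script's code).
num_envs = 64
num_steps = 128
num_minibatches = 4
async_batch_size = 16
total_timesteps = 50_000_000

batch_size = num_envs * num_steps               # 64 * 128 = 8192
minibatch_size = batch_size // num_minibatches  # 8192 // 4 = 2048
async_update = num_envs // async_batch_size     # 64 // 16 = 4
num_updates = total_timesteps // batch_size     # 50_000_000 // 8192 = 6103

print(batch_size, minibatch_size, async_update, num_updates)
# 8192 2048 4 6103 -- matches the values reported above
```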