---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: PandaPickAndPlace-v3
      type: PandaPickAndPlace-v3
    metrics:
    - type: mean_reward
      value: -50.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **A2C** Agent playing **PandaPickAndPlace-v3**

This is a trained model of an **A2C** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

The cell below installs the dependencies, trains the agent, and evaluates it.
It is written for a Colab-style notebook (note the `!` shell commands and the
virtual display, which is only needed for headless rendering).

```python
%%capture
# Set up a virtual display so the environment can render headlessly (Colab)
!apt install python-opengl
!apt install ffmpeg
!apt install xvfb
!pip3 install pyvirtualdisplay

from pyvirtualdisplay import Display

virtual_display = Display(visible=0, size=(1400, 900))
virtual_display.start()

# Install the RL dependencies
!pip install stable-baselines3[extra]
!pip install gymnasium
!pip install huggingface_sb3
!pip install huggingface_hub
!pip install panda_gym

import gymnasium as gym
import panda_gym  # registers the Panda environments

from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize
from stable_baselines3.common.env_util import make_vec_env

env_id = "PandaPickAndPlace-v3"

# Create 4 parallel environments and normalize observations
env = make_vec_env(env_id, n_envs=4)
env = VecNormalize(env, clip_obs=10.0)

model = A2C("MultiInputPolicy", env, verbose=1)
model.learn(1_000_000)

# Save the policy and the normalization statistics
model.save("a2c-PandaPickAndPlace-v3")
env.save("vec_normalize.pkl")

# Load the saved statistics into a fresh evaluation environment
eval_env = DummyVecEnv([lambda: gym.make(env_id)])
eval_env = VecNormalize.load("vec_normalize.pkl", eval_env)

# We need to override the render_mode
eval_env.render_mode = "rgb_array"

# Do not update the normalization statistics at test time
eval_env.training = False
# Reward normalization is not needed at test time
eval_env.norm_reward = False

# Load the agent
model = A2C.load("a2c-PandaPickAndPlace-v3")

mean_reward, std_reward = evaluate_policy(model, eval_env)
print(f"Mean reward = {mean_reward:.2f} +/- {std_reward:.2f}")
```
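
If you just want to run the published checkpoint rather than retrain it, the
sketch below downloads the weights and the `vec_normalize.pkl` statistics with
`huggingface_sb3.load_from_hub`. The `repo_id` is a placeholder, not this
model's actual repository id; replace it with the repo you are loading from.

```python
# Minimal inference sketch, assuming the checkpoint and VecNormalize stats
# were uploaded with the filenames used in the training cell above.
import gymnasium as gym
import panda_gym  # registers the Panda environments

from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize

repo_id = "your-username/a2c-PandaPickAndPlace-v3"  # placeholder repo id

# Download the policy weights and the normalization statistics
checkpoint = load_from_hub(repo_id=repo_id, filename="a2c-PandaPickAndPlace-v3.zip")
stats_path = load_from_hub(repo_id=repo_id, filename="vec_normalize.pkl")

env = DummyVecEnv([lambda: gym.make("PandaPickAndPlace-v3")])
env = VecNormalize.load(stats_path, env)
env.training = False     # do not update statistics at inference time
env.norm_reward = False  # keep raw rewards

model = A2C.load(checkpoint, env=env)

# Roll out the deterministic policy for a few steps
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```

Loading the VecNormalize statistics is essential here: the policy was trained
on normalized observations, so running it on raw observations would degrade
its behavior.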