---
license: apache-2.0
datasets:
- lerobot/pusht
tags:
- diffusion-policy
- model_hub_mixin
- pytorch_model_hub_mixin
- robotics
pipeline_tag: robotics
---
# Model Card for Diffusion Policy / PushT
Diffusion Policy (as per [Diffusion Policy: Visuomotor Policy Learning via Action Diffusion](https://arxiv.org/abs/2303.04137)) trained for the `PushT` environment from [gym-pusht](https://github.com/huggingface/gym-pusht).
## How to Get Started with the Model
See the [LeRobot library](https://github.com/huggingface/lerobot) (particularly the [evaluation script](https://github.com/huggingface/lerobot/blob/main/lerobot/scripts/eval.py)) for instructions on how to load and evaluate this model.
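As a minimal, hedged sketch (the import path follows LeRobot's repository layout at the commit pinned below, and `from_pretrained` is assumed to be available via `PyTorchModelHubMixin`, per the model card tags), loading the policy could look like:

```python
import torch

# Assumed import path, per LeRobot's layout at the pinned commit below.
from lerobot.common.policies.diffusion.modeling_diffusion import DiffusionPolicy

# The policy mixes in PyTorchModelHubMixin, so the weights can be pulled
# straight from the Hub by repo id.
policy = DiffusionPolicy.from_pretrained("lerobot/diffusion_pusht")
policy.eval()
policy.to("cuda" if torch.cuda.is_available() else "cpu")
```

For the full rollout loop (environment creation, observation preprocessing, action selection), the evaluation script linked above is the canonical reference.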
## Training Details
Trained with [LeRobot@3c0a209](https://github.com/huggingface/lerobot/tree/3c0a209f9fac4d2a57617e686a7f2a2309144ba2).
The model was trained using [LeRobot's training script](https://github.com/huggingface/lerobot/blob/main/lerobot/scripts/train.py) on the [pusht](https://huggingface.co/datasets/lerobot/pusht) dataset, using this command:
```bash
python lerobot/scripts/train.py \
--output_dir=outputs/train/diffusion_pusht \
--policy.type=diffusion \
--dataset.repo_id=lerobot/pusht \
--seed=100000 \
--env.type=pusht \
--batch_size=64 \
--offline.steps=200000 \
--eval_freq=25000 \
--save_freq=25000 \
--wandb.enable=true
```
The training curves may be found at https://wandb.ai/aliberts/lerobot/runs/s7elvf4r.
The current model corresponds to the checkpoint at 175k steps.
## Evaluation
The model was evaluated on the `PushT` environment from [gym-pusht](https://github.com/huggingface/gym-pusht) and compared to a similar model trained with the original [Diffusion Policy code](https://github.com/real-stanford/diffusion_policy). There are two evaluation metrics on a per-episode basis:
- Maximum overlap with target (reported as `eval/avg_max_reward` in the charts above). This ranges over [0, 1].
- Success: whether the maximum overlap reaches at least 95% (see the sketch after this list).
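As a rough, hedged sketch (not LeRobot's actual evaluation code; the per-step reward is assumed to be gym-pusht's overlap with the target, in [0, 1]), the per-episode metrics could be computed like so:

```python
import numpy as np

def episode_metrics(step_rewards: np.ndarray) -> tuple[float, bool]:
    """step_rewards: per-step target-overlap values in [0, 1] for one episode."""
    max_overlap = float(step_rewards.max())  # averaged over episodes -> eval/avg_max_reward
    success = max_overlap >= 0.95            # success threshold described above
    return max_overlap, success

# Example: aggregate over a batch of episodes (dummy reward traces here).
episodes = [np.random.rand(50) for _ in range(3)]
results = [episode_metrics(ep) for ep in episodes]
avg_max_reward = float(np.mean([m for m, _ in results]))
success_rate = 100.0 * float(np.mean([s for _, s in results]))
```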
Here are the metrics for 500 episodes' worth of evaluation. The "Theirs" column is for an equivalent model trained with the original [Diffusion Policy code](https://github.com/real-stanford/diffusion_policy) and evaluated on LeRobot (the model weights may be found in the [`original_dp_repo`](https://huggingface.co/lerobot/diffusion_pusht/tree/original_dp_repo) branch of this repository).
| | Ours | Theirs |
|---|---|---|
| Average max. overlap ratio | 0.955 | 0.957 |
| Success rate for 500 episodes (%) | 65.4 | 64.2 |
The results of each individual rollout may be found in [eval_info.json](eval_info.json); a hedged sketch of aggregating them follows.
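For instance, the summary numbers above could be recomputed from that file along these lines (the JSON schema here, including the `per_episode`, `success`, and `max_reward` keys, is an assumption; inspect the file for the actual layout):

```python
import json

with open("eval_info.json") as f:
    info = json.load(f)

# Assumed schema: a list of per-episode records with "success" and
# "max_reward" fields.
episodes = info["per_episode"]
success_rate = 100.0 * sum(ep["success"] for ep in episodes) / len(episodes)
avg_max_reward = sum(ep["max_reward"] for ep in episodes) / len(episodes)
print(f"success rate: {success_rate:.1f}%  avg max reward: {avg_max_reward:.3f}")
```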
The evaluation itself was run after training, with this command:
```bash
python lerobot/scripts/eval.py \
--policy.path=outputs/train/diffusion_pusht/checkpoints/175000/pretrained_model \
--output_dir=outputs/eval/diffusion_pusht/175000 \
--env.type=pusht \
--eval.n_episodes=500 \
--eval.batch_size=50 \
--device=cuda \
--use_amp=false
```