A Large Recurrent Action Model: xLSTM enables Fast Inference for Robotics Tasks
Abstract
In recent years, there has been a trend in the field of Reinforcement Learning (RL) towards large action models trained offline on large-scale datasets via sequence modeling. Existing models are primarily based on the Transformer architecture, which results in powerful agents. However, due to slow inference times, Transformer-based approaches are impractical for real-time applications such as robotics. Recently, modern recurrent architectures such as xLSTM and Mamba have been proposed that offer parallelization benefits during training similar to the Transformer architecture while providing fast inference. In this work, we study the aptitude of these modern recurrent architectures for large action models. Building on this study, we propose a Large Recurrent Action Model (LRAM) with an xLSTM at its core that comes with linear-time inference complexity and natural sequence-length extrapolation abilities. Experiments on 432 tasks from 6 domains show that LRAM compares favorably to Transformers in terms of performance and speed.
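The speed argument hinges on per-step inference cost: a recurrent model such as xLSTM updates a fixed-size state at each environment step, whereas a Transformer must attend over an ever-growing context. The sketch below illustrates this contrast; it is not the paper's LRAM implementation, and it uses PyTorch's `nn.LSTMCell` only as a stand-in for an xLSTM block, with dimensions and function names chosen purely for illustration.

```python
# Minimal sketch, not the paper's LRAM code: it contrasts the per-step inference
# cost of a recurrent policy (fixed-size state) with a Transformer-style policy
# that attends over a growing key/value cache. nn.LSTMCell is only a stand-in
# for an xLSTM block; dimensions and names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

d_model = 256

rnn_cell = nn.LSTMCell(d_model, d_model)
q_proj, k_proj, v_proj = (nn.Linear(d_model, d_model) for _ in range(3))


def recurrent_step(x, state):
    """O(1) work per step: only the fixed-size hidden state (h, c) is updated."""
    h, c = rnn_cell(x, state)
    return h, (h, c)


def attention_step(x, kv_cache):
    """O(t) work per step: the new query attends over all t cached keys/values."""
    kv_cache.append((k_proj(x), v_proj(x)))
    keys = torch.stack([k for k, _ in kv_cache], dim=1)    # (1, t, d_model)
    values = torch.stack([v for _, v in kv_cache], dim=1)  # (1, t, d_model)
    q = q_proj(x).unsqueeze(1)                             # (1, 1, d_model)
    out = F.scaled_dot_product_attention(q, keys, values)
    return out.squeeze(1), kv_cache


# Roll out a few dummy environment steps to show both interfaces.
state = (torch.zeros(1, d_model), torch.zeros(1, d_model))
kv_cache = []
for t in range(5):
    obs = torch.randn(1, d_model)                       # stand-in for an encoded observation
    act_rec, state = recurrent_step(obs, state)         # cost independent of t
    act_att, kv_cache = attention_step(obs, kv_cache)   # cost grows with t
```

Because the recurrent step touches only a fixed-size state, its cost stays constant over an episode, while the attention step's cost grows with the number of past steps; this is the intuition behind the linear-time inference claim in the abstract.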
Community
ArXiv: https://arxiv.org/abs/2410.22391
Datasets: https://huggingface.co/ml-jku
GitHub: https://github.com/ml-jku/LRAM
This is an automated message from the Librarian Bot. The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- Retrieval-Augmented Decision Transformer: External Memory for In-context RL (2024)
- Scaling Offline Model-Based RL via Jointly-Optimized World-Action Model Pretraining (2024)
- Diffusion Transformer Policy (2024)
- Scaling Proprioceptive-Visual Learning with Heterogeneous Pre-trained Transformers (2024)
- ReLIC: A Recipe for 64k Steps of In-Context Reinforcement Learning for Embodied AI (2024)