Tags: English · vlm · egocentric · embodied ai

AlanaVLM

AI personal assistants deployed via robots or wearables require embodied understanding to collaborate with humans effectively. However, current Vision-Language Models (VLMs) primarily focus on third-person-view videos, neglecting the richness of egocentric perceptual experience. To address this gap, we propose three key contributions. First, we introduce the Egocentric Video Understanding Dataset (EVUD) for training VLMs on video captioning and question-answering tasks specific to egocentric videos. Second, we present AlanaVLM, a 7B-parameter VLM trained on EVUD using parameter-efficient methods. Finally, we evaluate AlanaVLM's capabilities on OpenEQA, a challenging benchmark for embodied video question answering. Our model achieves state-of-the-art performance, outperforming open-source models, including strong Socratic models that use GPT-4 as a planner, by 3.6%. Additionally, we outperform Claude 3 and Gemini Pro Vision 1.0 and achieve competitive results compared to Gemini Pro 1.5 and GPT-4V, even surpassing the latter in spatial reasoning. This research paves the way for building efficient VLMs that can be deployed in robots or wearables, leveraging embodied video understanding to collaborate seamlessly with humans in everyday tasks and contributing to the next generation of Embodied AI.

This repository contains all the checkpoints developed for the AlanaVLM project. Please see the paper for details.

For details about risks, limitations, and intended uses, please see our EVUD dataset card.
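
The checkpoints can be fetched directly from the Hugging Face Hub. Below is a minimal sketch using `huggingface_hub`; the repository id `AlanaAI/AlanaVLM` comes from this card, while the local directory name is an arbitrary example.

```python
# Minimal sketch: download the AlanaVLM checkpoints from the Hugging Face Hub.
# Assumes `huggingface_hub` is installed (pip install huggingface_hub);
# the local directory name below is a hypothetical example.
from huggingface_hub import snapshot_download

checkpoint_dir = snapshot_download(
    repo_id="AlanaAI/AlanaVLM",        # this model repository
    local_dir="alanavlm_checkpoints",  # hypothetical local target directory
)
print(f"Checkpoints downloaded to {checkpoint_dir}")
```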

Model training

We use the Chat-UniVi codebase to train our model. Please refer to their instructions: https://github.com/PKU-YuanGroup/Chat-UniVi/
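
To reproduce training, the Chat-UniVi codebase first needs to be checked out locally. The sketch below only clones the repository (the destination folder name is an assumption); environment setup, data preparation, and the actual fine-tuning commands are covered by the Chat-UniVi instructions linked above.

```python
# Minimal sketch: fetch the Chat-UniVi codebase used to train AlanaVLM.
# The clone destination is an arbitrary example; follow the Chat-UniVi README
# for environment setup and the training scripts themselves.
import subprocess

subprocess.run(
    ["git", "clone", "https://github.com/PKU-YuanGroup/Chat-UniVi.git", "Chat-UniVi"],
    check=True,
)
```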

Citation

If you use our models, please cite our paper using the citation below:

BibTeX:

@article{suglia2024alanavlm,
  title={AlanaVLM: A Multimodal Embodied AI Foundation Model for Egocentric Video Understanding},
  author={Suglia, Alessandro and Greco, Claudio and Baker, Katie and Part, Jose L and Papaioannou, Ioannis and Eshghi, Arash and Konstas, Ioannis and Lemon, Oliver},
  journal={arXiv preprint arXiv:2406.13807},
  year={2024}
}

APA:

Suglia, A., Greco, C., Baker, K., Part, J. L., Papaioannou, I., Eshghi, A., ... & Lemon, O. (2024). AlanaVLM: A Multimodal Embodied AI Foundation Model for Egocentric Video Understanding. arXiv preprint arXiv:2406.13807.


Dataset used to train AlanaAI/AlanaVLM: EVUD (Egocentric Video Understanding Dataset).