---
license: apache-2.0
---

# UI-TARS: Pioneering Automated GUI Interaction with Native Agents

## Overview

UI-TARS is a next-generation native GUI agent model designed to interact seamlessly with graphical user interfaces (GUIs) using human-like perception, reasoning, and action capabilities. Unlike traditional modular frameworks, UI-TARS integrates all key components—perception, reasoning, grounding, and memory—within a single vision-language model (VLM), enabling end-to-end task automation without predefined workflows or manual rules.

## Core Features

### Perception
- **Comprehensive GUI Understanding**: Processes multimodal inputs (text, images, interactions) to build a coherent understanding of interfaces.
- **Real-Time Interaction**: Continuously monitors dynamic GUIs and responds accurately to changes in real time.

### Action
- **Unified Action Space**: Standardized action definitions across platforms (desktop, mobile, and web); a simplified sketch appears in the appendix at the end of this card.
- **Platform-Specific Actions**: Supports additional actions such as hotkeys, long press, and platform-specific gestures.

### Reasoning
- **System 1 & System 2 Reasoning**: Combines fast, intuitive responses with deliberate, high-level planning for complex tasks.
- **Task Decomposition & Reflection**: Supports multi-step planning, reflection, and error correction for robust task execution.

### Memory
- **Short-Term Memory**: Captures task-specific context for situational awareness.
- **Long-Term Memory**: Retains historical interactions and knowledge for improved decision-making.

## Capabilities

- **Cross-Platform Interaction**: Supports desktop, mobile, and web environments with a unified action framework.
- **Multi-Step Task Execution**: Trained to handle complex tasks through multi-step trajectories and reasoning.
- **Learning from Synthetic and Real Data**: Combines large-scale annotated and synthetic datasets for improved generalization and robustness.

## Training Pipeline

1. **Pre-Training**: Leveraging large-scale GUI-specific datasets for foundational learning.
2. **Supervised Fine-Tuning**: Fine-tuning on human-annotated and synthetic multi-step task data.
3. **Continual Learning**: Employing online trace bootstrapping and reinforcement learning for continual improvement.

## Evaluation Metrics

- **Step-Level Metrics**: Element accuracy, operation F1 score, and step success rate (see the appendix for a worked sketch).
- **Task-Level Metrics**: Complete match and partial match scores for overall task success.
- **Other Metrics**: Measures of execution efficiency, safety, robustness, and adaptability.

## License

UI-TARS is licensed under the Apache License 2.0.

## Acknowledgements

This project builds upon and extends the capabilities of Qwen-2-VL, a powerful vision-language model that serves as the foundational architecture for UI-TARS. We acknowledge the developers and researchers behind Qwen-2-VL for their groundbreaking work in multimodal AI and for providing a robust base for further advancements.

Additionally, we thank the broader open-source community for the datasets, tools, and insights that have facilitated the development of UI-TARS. These collaborative efforts continue to push the boundaries of what GUI automation and AI-driven agents can achieve.
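
## Appendix: Illustrative Sketches

The following sketch gives a concrete feel for what a unified, cross-platform action space can look like. The action names, fields, and serialization below are illustrative assumptions made for this card, not the exact schema emitted by UI-TARS; consult the model's prompt and output format for the actual action vocabulary.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical, simplified action types; the real UI-TARS action
# vocabulary and argument names may differ.
@dataclass
class GUIAction:
    """One step in a cross-platform GUI trajectory."""
    kind: str                                     # e.g. "click", "type", "scroll", "hotkey", "long_press"
    target: Optional[Tuple[float, float]] = None  # normalized (x, y) screen coordinate
    text: Optional[str] = None                    # text payload for "type" actions
    keys: Tuple[str, ...] = ()                    # key combination for "hotkey" actions

    def to_prompt(self) -> str:
        """Serialize the action into a flat string an agent could emit."""
        parts = [self.kind]
        if self.target is not None:
            parts.append(f"({self.target[0]:.3f}, {self.target[1]:.3f})")
        if self.text is not None:
            parts.append(repr(self.text))
        if self.keys:
            parts.append("+".join(self.keys))
        return " ".join(parts)

# A short multi-step trajectory: focus a search box, type a query, submit.
trajectory = [
    GUIAction(kind="click", target=(0.42, 0.11)),
    GUIAction(kind="type", text="weather tomorrow"),
    GUIAction(kind="hotkey", keys=("enter",)),
]
for step in trajectory:
    print(step.to_prompt())
```

Keeping the same action record on desktop, mobile, and web is what lets a single policy be trained and evaluated across platforms, with platform-specific actions (hotkeys, long press) added as extra `kind` values rather than separate interfaces.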
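For the step-level metrics, the sketch below shows one common way such scores are computed, assuming each step is represented as a (target element, operation string) pair: element accuracy compares target elements, operation F1 is a token-level F1 over operation strings, and a step counts as successful only when both match. These definitions are assumptions for illustration; the exact metric definitions used to evaluate UI-TARS may differ.

```python
from collections import Counter

def operation_f1(pred_op: str, gold_op: str) -> float:
    """Token-level F1 between a predicted and a gold operation string."""
    pred_tokens, gold_tokens = pred_op.split(), gold_op.split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def step_level_metrics(predictions, references):
    """predictions/references: lists of (element_id, operation_string), one pair per step."""
    n = len(references)
    element_acc = sum(p[0] == r[0] for p, r in zip(predictions, references)) / n
    op_f1 = sum(operation_f1(p[1], r[1]) for p, r in zip(predictions, references)) / n
    # A step is successful only if both the element and the operation are correct.
    step_sr = sum(p[0] == r[0] and p[1] == r[1] for p, r in zip(predictions, references)) / n
    return {"element_accuracy": element_acc, "operation_f1": op_f1, "step_success_rate": step_sr}

# Tiny worked example with two steps (element IDs and operations are hypothetical).
print(step_level_metrics(
    predictions=[("btn_search", "click"), ("input_query", "type weather")],
    references=[("btn_search", "click"), ("input_query", "type weather tomorrow")],
))
# -> element_accuracy 1.0, operation_f1 0.9, step_success_rate 0.5
```

Task-level complete and partial match scores can then be built on top of these per-step judgments, e.g. a task is a complete match only if every step succeeds.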