Add link to paper (#2)
Co-authored-by: Niels Rogge <[email protected]>
README.md CHANGED
@@ -9,7 +9,6 @@ tags:
 library_name: transformers
 ---

-
 # UI-TARS-72B-DPO
 [UI-TARS-2B-SFT](https://huggingface.co/bytedance-research/UI-TARS-2B-SFT) |
 [UI-TARS-2B-gguf](https://huggingface.co/bytedance-research/UI-TARS-2B-gguf) |
@@ -18,6 +17,9 @@ library_name: transformers
 [UI-TARS-7B-gguf](https://huggingface.co/bytedance-research/UI-TARS-7B-gguf) |
 [UI-TARS-72B-SFT](https://huggingface.co/bytedance-research/UI-TARS-72B-SFT) |
 [UI-TARS-72B-DPO](https://huggingface.co/bytedance-research/UI-TARS-72B-DPO)
+
+This repository contains the model described in the paper [UI-TARS: Pioneering Automated GUI Interaction with Native Agents](https://huggingface.co/papers/2501.12326).
+
 ## Introduction

 UI-TARS is a next-generation native GUI agent model designed to interact seamlessly with graphical user interfaces (GUIs) using human-like perception, reasoning, and action capabilities. Unlike traditional modular frameworks, UI-TARS integrates all key components—perception, reasoning, grounding, and memory—within a single vision-language model (VLM), enabling end-to-end task automation without predefined workflows or manual rules.
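Since the card declares `library_name: transformers`, the model should be loadable through the standard transformers API. The snippet below is a minimal, illustrative sketch, not the official inference code: the exact model class, chat template, and action-output format are defined by the UI-TARS paper and repository, and the screenshot path and instruction text are placeholders. If the generic Auto class does not resolve for this checkpoint, the concrete model class named in the checkpoint's config would be needed instead.

```python
# Minimal sketch (assumptions noted above): load UI-TARS-72B-DPO and ask it
# for the next GUI action given a screenshot and an instruction.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "bytedance-research/UI-TARS-72B-DPO"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 72B parameters; device_map spreads it across GPUs
    device_map="auto",
)

# One GUI screenshot plus a natural-language instruction (both placeholders).
image = Image.open("screenshot.png")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Open the search box and type 'weather'."},
        ],
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)

# The model emits its next action (e.g. a click or key press) as text; the
# precise action grammar is specified in the UI-TARS paper and repo.
output = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output, skip_special_tokens=True)[0])
```

Because perception, grounding, and action prediction live in the single VLM, one `generate` call per step is the whole agent loop: feed the current screenshot in, execute the returned action, and repeat with the next screenshot.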