MingComplex committed "update readme"
README.md CHANGED
@@ -17,7 +17,7 @@ library_name: transformers
 [**UI-TARS-7B-DPO**](https://huggingface.co/bytedance-research/UI-TARS-7B-DPO) (Recommended) |
 [UI-TARS-7B-gguf](https://huggingface.co/bytedance-research/UI-TARS-7B-gguf) |
 [UI-TARS-72B-SFT](https://huggingface.co/bytedance-research/UI-TARS-72B-SFT) |
-[
+[UI-TARS-72B-DPO](https://huggingface.co/bytedance-research/UI-TARS-72B-DPO) |
 
 ## Introduction
 
 UI-TARS is a next-generation native GUI agent model designed to interact seamlessly with graphical user interfaces (GUIs) using human-like perception, reasoning, and action capabilities. Unlike traditional modular frameworks, UI-TARS integrates all key components—perception, reasoning, grounding, and memory—within a single vision-language model (VLM), enabling end-to-end task automation without predefined workflows or manual rules.
@@ -32,9 +32,6 @@ UI-TARS is a next-generation native GUI agent model designed to interact seamles
 <!-- ![Local Image](figures/UI-TARS-vs-Previous-SOTA.png) -->
 
 This repository contains the model for the paper [UI-TARS: Pioneering Automated GUI Interaction with Native Agents](https://huggingface.co/papers/2501.12326).
-
-Code: https://github.com/bytedance/UI-TARS
-
 ## Performance
 **Perception Capability Evaluation**
 | Model | VisualWebBench | WebSRC | SQAshort |
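
For readers who want to try one of the checkpoints listed in the diff above, the sketch below shows one way to load the recommended UI-TARS-7B-DPO weights with the `transformers` library declared in the card's metadata. It is a minimal sketch, not the model card's documented usage: it assumes the checkpoint resolves through the generic `AutoProcessor` / `AutoModelForImageTextToText` classes of a recent `transformers` release, and the screenshot path and task prompt are hypothetical placeholders rather than UI-TARS's actual action-space prompt.

```python
# Minimal, illustrative loading sketch (assumption: the checkpoint resolves
# through transformers' generic image-text-to-text Auto classes; requires a
# recent transformers release plus accelerate for device_map="auto").
from PIL import Image
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "bytedance-research/UI-TARS-7B-DPO"  # recommended checkpoint from the list above

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# A GUI agent consumes a screenshot plus a natural-language task.
screenshot = Image.open("screenshot.png")           # hypothetical local file
task = "Click the search box and type 'weather'."   # illustrative instruction, not the official prompt format

messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": task},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(images=[screenshot], text=prompt, return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
# Strip the prompt tokens and decode only the newly generated action text.
generated = output_ids[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```

For the documented prompt templates and action space, refer to the linked paper and the UI-TARS GitHub repository mentioned in the diff.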