MingComplex committed
Commit babf880 · 1 Parent(s): 27c601a

update readme

Files changed (1)
  1. README.md +4 -6
README.md CHANGED
@@ -11,18 +11,13 @@ library_name: transformers
 
 
 # UI-TARS-72B-SFT
-
 [UI-TARS-2B-SFT](https://huggingface.co/bytedance-research/UI-TARS-2B-SFT) &nbsp;|&nbsp;
 [UI-TARS-2B-gguf](https://huggingface.co/bytedance-research/UI-TARS-2B-gguf) &nbsp;|&nbsp;
 [UI-TARS-7B-SFT](https://huggingface.co/bytedance-research/UI-TARS-7B-SFT) &nbsp;|&nbsp;
-[UI-TARS-7B-DPO](https://huggingface.co/bytedance-research/UI-TARS-7B-DPO) &nbsp;|&nbsp;
+[**UI-TARS-7B-DPO**](https://huggingface.co/bytedance-research/UI-TARS-7B-DPO)(Recommended) &nbsp;|&nbsp;
 [UI-TARS-7B-gguf](https://huggingface.co/bytedance-research/UI-TARS-7B-gguf) &nbsp;|&nbsp;
 [UI-TARS-72B-SFT](https://huggingface.co/bytedance-research/UI-TARS-72B-SFT) &nbsp;|&nbsp;
 [UI-TARS-72B-DPO](https://huggingface.co/bytedance-research/UI-TARS-72B-DPO)
-
-
-
-
 ## Introduction
 
 UI-TARS is a next-generation native GUI agent model designed to interact seamlessly with graphical user interfaces (GUIs) using human-like perception, reasoning, and action capabilities. Unlike traditional modular frameworks, UI-TARS integrates all key components—perception, reasoning, grounding, and memory—within a single vision-language model (VLM), enabling end-to-end task automation without predefined workflows or manual rules.
@@ -36,6 +31,8 @@ UI-TARS is a next-generation native GUI agent model designed to interact seamles
 
 <!-- ![Local Image](figures/UI-TARS-vs-Previous-SOTA.png) -->
 
+This repository contains the model for the paper [UI-TARS: Pioneering Automated GUI Interaction with Native Agents](https://huggingface.co/papers/2501.12326).
+Code: https://github.com/bytedance/UI-TARS
 
 ## Performance
 **Perception Capabilty Evaluation**
@@ -186,6 +183,7 @@ UI-TARS is a next-generation native GUI agent model designed to interact seamles
 | **UI-TARS-72B-DPO** | **22.7** (15 steps) | - |
 | **UI-TARS-72B-DPO** | **24.6** (50 steps) | - |
 
+
 ## Citation
 If you find our paper and model useful in your research, feel free to give us a cite.
 
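For reference (not part of this commit): the model card declares `library_name: transformers`, so a minimal inference sketch might look like the following. It assumes the checkpoint loads as a Qwen2-VL-style vision-language model; the model class, prompt text, and local `screenshot.png` path are assumptions rather than official usage, which lives in the linked repository (https://github.com/bytedance/UI-TARS).

```python
# Minimal sketch, not from the commit: load UI-TARS-72B-SFT with Hugging Face
# transformers. Assumes a Qwen2-VL-style checkpoint and enough GPU memory for
# 72B weights; the instruction and screenshot path are illustrative placeholders.
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "bytedance-research/UI-TARS-72B-SFT"
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# One screenshot plus a natural-language instruction; the chat template inserts
# the image placeholder token that the processor expects.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Open the Settings app."},
        ],
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
screenshot = Image.open("screenshot.png")  # hypothetical local screenshot
inputs = processor(text=[prompt], images=[screenshot], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
response = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(response)  # model's predicted reasoning/action for the GUI task
```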