nielsr (HF staff) committed
Commit d3f8548 · verified · 1 Parent(s): fb12a27

Add link to paper


This PR ensures that the model is linked to the paper it was introduced in: https://huggingface.co/papers/2501.12326.

Files changed (1)
  1. README.md +3 -1
README.md CHANGED
@@ -9,7 +9,6 @@ tags:
 library_name: transformers
 ---
 
-
 # UI-TARS-72B-DPO
 [UI-TARS-2B-SFT](https://huggingface.co/bytedance-research/UI-TARS-2B-SFT)  |
 [UI-TARS-2B-gguf](https://huggingface.co/bytedance-research/UI-TARS-2B-gguf)  |
@@ -18,6 +17,9 @@ library_name: transformers
 [UI-TARS-7B-gguf](https://huggingface.co/bytedance-research/UI-TARS-7B-gguf)  |
 [UI-TARS-72B-SFT](https://huggingface.co/bytedance-research/UI-TARS-72B-SFT)  |
 [UI-TARS-72B-DPO](https://huggingface.co/bytedance-research/UI-TARS-72B-DPO)
+
+This repository contains the model of the paper [UI-TARS: Pioneering Automated GUI Interaction with Native Agents](https://huggingface.co/papers/2501.12326).
+
 ## Introduction
 
 UI-TARS is a next-generation native GUI agent model designed to interact seamlessly with graphical user interfaces (GUIs) using human-like perception, reasoning, and action capabilities. Unlike traditional modular frameworks, UI-TARS integrates all key components—perception, reasoning, grounding, and memory—within a single vision-language model (VLM), enabling end-to-end task automation without predefined workflows or manual rules.
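For context, a minimal usage sketch follows. It is not part of this commit: it assumes the checkpoint loads through the standard vision-language interface implied by the README's `library_name: transformers` metadata, and the `AutoProcessor`/`AutoModelForVision2Seq` classes, the chat-template call, and the `screenshot.png` input are assumptions, not documented API for this repository.

```python
# Hypothetical sketch: load UI-TARS-72B-DPO with the transformers library and
# ask it to act on a single GUI screenshot. Class choices and the input file
# are assumptions; consult the model card for the supported interface.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "bytedance-research/UI-TARS-72B-DPO"

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# One screenshot plus a natural-language instruction, phrased as a chat turn.
screenshot = Image.open("screenshot.png")  # placeholder input
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Click the Settings icon."},
        ],
    }
]

prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[screenshot], return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```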