Add pipeline tag, link to paper page

#1, opened by nielsr (HF staff)
Files changed (1)
  1. README.md +4 -2
README.md CHANGED
@@ -5,16 +5,18 @@ datasets:
 base_model:
 - lmms-lab/LongVA-7B
 library_name: transformers
+pipeline_tag: video-text-to-text
 ---
 
-
 <a href='https://arxiv.org/abs/2501.13919v1'><img src='https://img.shields.io/badge/arXiv-paper-red'></a><a href='https://ruili33.github.io/tpo_website/'><img src='https://img.shields.io/badge/project-TPO-blue'></a><a href='https://huggingface.co/collections/ruili0/temporal-preference-optimization-67874b451f65db189fa35e10'><img src='https://img.shields.io/badge/huggingface-datasets-green'></a>
 <a href='https://huggingface.co/collections/ruili0/temporal-preference-optimization-67874b451f65db189fa35e10'><img src='https://img.shields.io/badge/model-checkpoints-yellow'></a>
 <a href='https://github.com/ruili33/TPO'><img src='https://img.shields.io/badge/github-repository-purple'></a>
 <img src="cvpr_figure_TPO.png"></img>
 # LongVA-7B-TPO
 
-LongVA-7B-TPO, introduced by paper [Temporal Preference Optimization for Long-form Video Understanding](https://arxiv.org/abs/2501.13919v1), optimized
+This repository contains the model described in the paper [Temporal Preference Optimization for Long-form Video Understanding](https://huggingface.co/papers/2501.13919).
+
+LongVA-7B-TPO, introduced by paper [Temporal Preference Optimization for Long-form Video Understanding](https://huggingface.co/papers/2501.13919v1), optimized
 by temporal preference based on LongVA-7B. The LongVA-7B-TPO model establishes state-of-the-art performance across a range of
 benchmarks, demonstrating an average performance improvement of 2% compared to LongVA-7B.
 
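The functional change in this PR is the `pipeline_tag: video-text-to-text` line in the model card's YAML frontmatter, which is what lets the Hub surface the model under that pipeline filter. As a minimal sketch (stdlib only, using a reconstruction of just the fields visible in the hunk above, not the full README), the frontmatter can be parsed and the new field checked like this:

```python
# Reconstructed README excerpt after this PR; only fields visible in the
# diff hunk are included (the full file may contain more).
readme = """---
base_model:
- lmms-lab/LongVA-7B
library_name: transformers
pipeline_tag: video-text-to-text
---

# LongVA-7B-TPO
"""

# Frontmatter sits between the first two '---' delimiters.
_, frontmatter, _ = readme.split("---", 2)

# Naive key:value extraction; a real model card should be parsed with a
# YAML library, this is just enough for flat scalar fields.
tags = dict(
    line.split(":", 1)
    for line in frontmatter.strip().splitlines()
    if ":" in line
)

print(tags["pipeline_tag"].strip())  # video-text-to-text
```

Without this field, the Hub has to guess the pipeline from `library_name` and the architecture, so tagging it explicitly makes the model discoverable under the video-text-to-text task page.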