RaushanTurganbay (HF staff) committed
Commit cae9be6
Parent: 5db80b1

Update README.md

Files changed (1):
  1. README.md (+2 -2)
README.md CHANGED
@@ -17,7 +17,7 @@ arxiv: 2408.03326
 
 Check out also the Google Colab demo to run Llava on a free-tier Google Colab instance: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1-4AtYjR8UMtCALV0AswU1kiNkWCLTALT?usp=sharing)
 
-Below is the model card of 72B LLaVA-Onevision Chat model which is copied from the original LLaVA-Onevision model card that you can find [here](https://huggingface.co/lmms-lab/llava-onevision-qwen2-72b-ov-chat-hf).
+Below is the model card of 72B LLaVA-Onevision Chat model which is copied from the original LLaVA-Onevision model card that you can find [here](https://huggingface.co/lmms-lab/llava-onevision-qwen2-72b-ov-chat).
 
 
 
@@ -51,7 +51,7 @@ The model supports multi-image and multi-prompt generation. Meaning that you can
 
 ### Using `pipeline`:
 
-Below we used [`"llava-hf/llava-onevision-qwen2-0.5b-si-hf"`](https://huggingface.co/llava-hf/llava-onevision-qwen2-0.5b-si-hf) checkpoint.
+Below we used [`"llava-hf/llava-onevision-qwen2-72b-ov-chat-hf"`](https://huggingface.co/llava-hf/llava-onevision-qwen2-72b-ov-chat-hf) checkpoint.
 
 ```python
 from transformers import pipeline, AutoProcessor
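
The second hunk stops just inside the README's `pipeline` snippet, right after the import line. For context, here is a minimal sketch of how such a snippet typically continues with the updated 72B chat checkpoint, using the standard `transformers` image-to-text pipeline; the example image URL and question are illustrative placeholders, not part of this commit.

```python
from transformers import pipeline, AutoProcessor
from PIL import Image
import requests

# Checkpoint referenced by the updated "Below we used ..." line.
model_id = "llava-hf/llava-onevision-qwen2-72b-ov-chat-hf"

# The 72B model is large; device_map="auto" shards it across available GPUs.
pipe = pipeline("image-to-text", model=model_id, device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

# Placeholder example image and question -- not taken from the diff.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is shown in this image?"},
        ],
    },
]
# Format the chat turn with the model's chat template.
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200})
print(outputs[0]["generated_text"])
```

The processor's chat template places the image token correctly inside the prompt, which is why the README imports `AutoProcessor` alongside `pipeline`.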