zzxslp committed on
Commit
7eca548
1 Parent(s): 7cc166a

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -8,7 +8,7 @@ LLaVA-v1.5 mixed trained with SoM style data (QA+listing).
 
 The model can understand tag-style visual prompts on the image (e.g., what is the object tagged with id 9?) and also gains improved performance on MLLM benchmarks (POPE, MME, SEED, MM-Vet, LLaVA-wild), even when the input test images have no tags.
 
-**Checkout more details on our [github page](https://github.com/zzxslp/SoM-LLaVA) and [paper](https://arxiv.org/abs/2404.16375)!!!**
+**For more information about SoM-LLaVA, check our [github page](https://github.com/zzxslp/SoM-LLaVA) and [paper](https://arxiv.org/abs/2404.16375)!**
 
 ## Getting Started
 If you would like to load our model in Hugging Face, here is an example script:
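The tag-style prompting described in the diff above ("what is the object tagged with id 9?") can be sketched as plain prompt formatting. This is a minimal illustration, not the repo's example script: it assumes the standard LLaVA-v1.5 conversation template (`USER: <image>\n... ASSISTANT:`), and `build_som_prompt` is a hypothetical helper name.

```python
# Hypothetical sketch: composing a tag-style visual prompt for SoM-LLaVA.
# Assumes the standard LLaVA-v1.5 chat template; not taken from this commit.
def build_som_prompt(question: str) -> str:
    """Wrap a question about a tagged image in the LLaVA-v1.5 chat format."""
    return f"USER: <image>\n{question} ASSISTANT:"

# Ask about the object marked with numeric tag 9 in a SoM-annotated image.
prompt = build_som_prompt("What is the object tagged with id 9?")
```

The `<image>` placeholder is where the processor injects image tokens; the question text simply refers to the numeric tags drawn on the image by Set-of-Mark annotation.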