ponytail committed 4d156e4 • 1 parent: cff6971

Update README.md

Files changed (1): README.md (+2 -1)
README.md CHANGED
@@ -29,7 +29,8 @@ Specifically, (1) we first construct **a large-scale and high-quality human-rela
 
 ## Result
 human-llava has a good performance in both general and special fields
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/64259db7d3e6fdf87e4792d0/AjNql8GzmoIC7x_W8h0hY.png)
+
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/64259db7d3e6fdf87e4792d0/zFuyEPb6ZOt-HHadE2K9-.png)
 
 ## News and Update 🔥🔥🔥
 * Sep.8, 2024. **🤗[Human-LLaVA-8B](https://huggingface.co/OpenFace-CQUPT/Human_LLaVA), is released!👍👍👍**