Update README.md
README.md
CHANGED
configs:
  data_files:
  - split: train
    path: data/train-*
license: other
task_categories:
- visual-question-answering
- question-answering
language:
- hi
- en
tags:
- VLM
pretty_name: hindi-vqa
size_categories:
- 100K<n<1M
---

# LLaVA Visual Instruct CC3M 595K Pretrain Dataset Card

## Dataset details

**Dataset type:**
LLaVA Visual Instruct CC3M Pretrain 595K is a subset of the CC-3M dataset, filtered for a more balanced concept coverage distribution.
Captions are also associated with [BLIP synthetic captions](https://github.com/salesforce/BLIP#pre-training-datasets-download) for reference.
It is constructed for the pretraining stage of feature alignment in visual instruction tuning.
We aim to build large multimodal models towards GPT-4 vision/language capability.

**Dataset date:**
LLaVA Visual Instruct CC3M Pretrain 595K was created in April 2023.

**Dataset structure:**
- `chat.json` contains the multimodal conversations synthesized from the image-caption pairs by adding randomly selected instructions such as "Describe this image". It is used for pretraining in LLaVA; the raw CC-3M caption is used as the default answer.
- `metadata.json` contains the metadata for each sample: the image index in CC-3M, the image file name, the image URL, the original CC-3M caption, and the synthetic BLIP caption. Note that ~10% of the samples are not yet associated with a BLIP caption in this release.
- `images.zip` contains the raw images and can be downloaded from [images](https://huggingface.co/datasets/liuhaotian/LLaVA-CC3M-Pretrain-595K/blob/main/images.zip).
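
If you work with the raw files above, the following is a minimal sketch of how they could be read together. The field names (`id`, `image`, `conversations`, `blip_caption`) and the local directory name are assumptions based on the description above, not guarantees of this card; verify them against the released files.

```python
import json
from pathlib import Path

# Hypothetical local download location for the raw files described above.
data_dir = Path("LLaVA-CC3M-Pretrain-595K")

# Assumed structure: chat.json is a list of records with "id", "image", and
# "conversations"; metadata.json is a list of records sharing the same "id"
# and carrying a "blip_caption" field. Verify before relying on this.
chat = json.loads((data_dir / "chat.json").read_text())
metadata = json.loads((data_dir / "metadata.json").read_text())
meta_by_id = {m["id"]: m for m in metadata}

sample = chat[0]
print(sample["image"])          # image file name inside images.zip
print(sample["conversations"])  # instruction plus CC-3M caption answer
print(meta_by_id.get(sample["id"], {}).get("blip_caption"))  # missing for ~10% of samples
```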
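
Alternatively, the YAML config at the top of this card declares a single `train` split backed by `data/train-*`, so it should be loadable with the standard `datasets` library. A minimal sketch; the repository id below is a placeholder, not the actual repo name:

```python
from datasets import load_dataset

# "username/hindi-vqa" is a placeholder repository id; substitute the real one.
ds = load_dataset("username/hindi-vqa", split="train")
print(ds)     # features and number of rows
print(ds[0])  # first example
```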
**Paper or resources for more information:**
https://llava-vl.github.io/

**License:**
Must comply with the licenses of [CC-3M](https://github.com/google-research-datasets/conceptual-captions/blob/master/LICENSE) and [BLIP](https://github.com/salesforce/BLIP/blob/main/LICENSE.txt) (if you use their synthetic captions).

CC-3M:
The dataset may be freely used for any purpose, although acknowledgement of Google LLC ("Google") as the data source would be appreciated. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.

**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues

## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.