---
dataset_info:
  features:
  - name: conversations
    list:
    - name: from
      dtype: string
    - name: value
      dtype: string
  - name: image
    dtype: string
  splits:
  - name: train
    num_bytes: 139133435
    num_examples: 595375
  download_size: 39144914
  dataset_size: 139133435
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: other
task_categories:
- visual-question-answering
- question-answering
language:
- hi
- en
tags:
- VLM
pretty_name: hindi-vqa
size_categories:
- 100K<n<1M
---

# LLaVA Visual Instruct CC3M 595K Pretrain (Hindi-English Bilingual) Dataset Card

## Dataset details

**Dataset type:**
LLaVA Visual Instruct CC3M Pretrain 595K is a subset of the CC-3M dataset, filtered for a more balanced concept coverage distribution.
Each caption is also paired with a [BLIP synthetic caption](https://github.com/salesforce/BLIP#pre-training-datasets-download) for reference.
It is constructed for the pretraining stage for feature alignment in visual instruction tuning.
We aim to build a large multimodal model with GPT-4-level vision/language capabilities.

**Dataset date:**
LLaVA Visual Instruct CC3M Pretrain 595K was created in April 2023.

**Dataset structure:**
- `chat.json` contains the multimodal synthesized conversations built from the image-caption pairs by adding randomly selected instructions such as "Describe this image". It is used for pretraining in LLaVA. We use the raw CC-3M caption as the default answer.
- `metadata.json` contains the metadata for each sample: the image index in CC-3M, the image file name, the image URL, the original CC-3M caption, and the synthetic BLIP caption. Note that ~10% of the samples are not yet associated with a BLIP caption in this release.
- `images.zip` contains the raw images and can be downloaded from [images.zip](https://huggingface.co/datasets/liuhaotian/LLaVA-CC3M-Pretrain-595K/blob/main/images.zip).
- **Bilingual:** this dataset contains both Hindi and English captions. A minimal loading sketch is shown below.
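
The `train` split declared in the YAML header above can be loaded directly with the Hugging Face `datasets` library. The sketch below is illustrative only: the Hub repo id is a placeholder (this card does not state one), and the `human`/`gpt` role names follow the usual LLaVA conversation format, where each turn carries `from` and `value` fields.

```python
# Minimal loading sketch, assuming the `datasets` library is installed.
# "<user>/hindi-vqa" is a placeholder: substitute the actual Hub repo id.
from datasets import load_dataset

ds = load_dataset("<user>/hindi-vqa", split="train")

sample = ds[0]
print(sample["image"])  # image file name (string), resolvable inside images.zip
for turn in sample["conversations"]:
    # In LLaVA-format data each turn is typically
    # {"from": "human" | "gpt", "value": "<utterance>"}.
    print(f'{turn["from"]}: {turn["value"]}')
```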
  

**License:**
Usage must comply with the license of [CC-3M](https://github.com/google-research-datasets/conceptual-captions/blob/master/LICENSE) and, if you use the synthetic captions, with the license of [BLIP](https://github.com/salesforce/BLIP/blob/main/LICENSE.txt).

From the CC-3M license:

> The dataset may be freely used for any purpose, although acknowledgement of Google LLC ("Google") as the data source would be appreciated. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.


## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.