---
dataset_info:
  features:
  - name: conversations
    list:
    - name: from
      dtype: string
    - name: value
      dtype: string
  - name: image
    dtype: string
  splits:
  - name: train
    num_bytes: 139133435
    num_examples: 595375
  download_size: 39144914
  dataset_size: 139133435
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: other
task_categories:
- visual-question-answering
- question-answering
language:
- hi
- en
tags:
- VLM
pretty_name: hindi-vqa
size_categories:
- 100K<n<1M
---

# LLaVA Visual Instruct CC3M 595K Pretrain Dataset Card

## Dataset details

**Dataset type:**
LLaVA Visual Instruct CC3M Pretrain 595K is a subset of the CC-3M dataset, filtered for a more balanced concept coverage distribution.
Captions are also associated with [BLIP synthetic captions](https://github.com/salesforce/BLIP#pre-training-datasets-download) for reference.
It is constructed for the feature-alignment pretraining stage of visual instruction tuning.
We aim to build a large multimodal model approaching GPT-4's vision/language capability.

**Dataset date:**
LLaVA Visual Instruct CC3M Pretrain 595K was created in April 2023.

**Dataset structure:**
- `chat.json` contains the multimodal conversations synthesized from the image-caption pairs by adding randomly selected instructions such as "Describe this image". It is used for pretraining in LLaVA. The raw CC-3M caption is used as the default answer (a minimal loading sketch follows this list).
- `metadata.json` contains the metadata for each sample: the image index in CC-3M, the image file name, the image URL, the original CC-3M caption, and the synthetic BLIP caption. Note that ~10% of the samples are not yet associated with a BLIP caption in this release.
- `images.zip` contains the images and can be found here: [images](https://huggingface.co/datasets/theblackcat102/llava-pretrain?row=0).
- Bilingual: this dataset contains both Hindi and English captions.
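
The snippet below is a minimal sketch of loading the Hub-hosted train split with the Hugging Face `datasets` library and reading a record according to the schema declared in the metadata above (a `conversations` list of `from`/`value` turns plus an `image` string). The repository ID is a placeholder, not the actual repo name.

```python
# A minimal loading sketch, assuming the default config described in the card
# metadata. The repository ID below is a placeholder; replace it with the
# actual Hub repo this card belongs to.
from datasets import load_dataset

ds = load_dataset("your-username/hindi-vqa", split="train")  # placeholder repo ID

# Each record follows the declared schema:
#   conversations: list of {"from": str, "value": str} turns
#   image:         str (image file name, resolved against images.zip)
sample = ds[0]
print(sample["image"])
for turn in sample["conversations"]:
    print(f'{turn["from"]}: {turn["value"]}')
```

If you work from `chat.json` directly instead of the Hub split, the records should expose the same `conversations` and `image` fields and can be read with a standard `json.load`.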

**Paper or resources for more information:**
https://llava-vl.github.io/

**License:**
Must comply with the licenses of [CC-3M](https://github.com/google-research-datasets/conceptual-captions/blob/master/LICENSE) and [BLIP](https://github.com/salesforce/BLIP/blob/main/LICENSE.txt) (if you use their synthetic captions).

CC-3M:
The dataset may be freely used for any purpose, although acknowledgement of Google LLC ("Google") as the data source would be appreciated. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.

**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues

## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.