---
dataset_info:
features:
- name: image
dtype: image
- name: image_coco_url
dtype: string
- name: image_date_captured
dtype: string
- name: image_file_name
dtype: string
- name: image_height
dtype: int32
- name: image_width
dtype: int32
- name: image_id
dtype: int32
- name: image_license
dtype: int8
- name: image_open_images_id
dtype: string
- name: annotations_ids
sequence: int32
- name: annotations_captions
sequence: string
splits:
- name: validation
num_bytes: 1421862846.0
num_examples: 4500
- name: test
num_bytes: 3342844310.0
num_examples: 10600
download_size: 4761076789
dataset_size: 4764707156.0
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Large-scale Multi-modality Models Evaluation Suite
> Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval`

🏠 [Homepage](https://lmms-lab.github.io/) | 📚 [Documentation](docs/README.md) | 🤗 [Huggingface Datasets](https://huggingface.co/lmms-lab)
# This Dataset
This is a formatted version of [NoCaps](https://nocaps.org/). It is used in our `lmms-eval` pipeline to enable one-click evaluation of large multi-modality models (LMMs). The dataset provides a `validation` split (4,500 examples) and a `test` split (10,600 examples); each example pairs one image with parallel sequences of annotation IDs and reference captions.
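As a minimal sketch of how one example is laid out under the feature schema above, the two annotation sequences are parallel and can be zipped into (id, caption) pairs. All field values below are invented placeholders, not real dataset content:

```python
# Sketch of a single example following the feature schema in this card.
# Field names match the schema; the values are invented placeholders.
example = {
    "image_file_name": "placeholder.jpg",
    "image_id": 0,
    "image_height": 1024,
    "image_width": 768,
    "annotations_ids": [0, 1, 2],
    "annotations_captions": [
        "A caption describing the image.",
        "Another reference caption.",
        "A third reference caption.",
    ],
}

# annotations_ids and annotations_captions are parallel sequences:
# zip them to recover an id -> caption mapping.
pairs = dict(zip(example["annotations_ids"], example["annotations_captions"]))
print(pairs[1])  # "Another reference caption."
```

With the `datasets` library installed, `load_dataset` on this repository yields examples of this shape (plus the decoded `image` and remaining metadata fields).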
If you use this dataset, please cite the original NoCaps paper:

```bibtex
@inproceedings{Agrawal_2019,
  title     = {nocaps: novel object captioning at scale},
  author    = {Agrawal, Harsh and Desai, Karan and Wang, Yufei and Chen, Xinlei and Jain, Rishabh and Johnson, Mark and Batra, Dhruv and Parikh, Devi and Lee, Stefan and Anderson, Peter},
  booktitle = {2019 IEEE/CVF International Conference on Computer Vision (ICCV)},
  publisher = {IEEE},
  year      = {2019},
  month     = oct,
  doi       = {10.1109/ICCV.2019.00904},
  url       = {http://dx.doi.org/10.1109/ICCV.2019.00904}
}
```