---
license: apache-2.0
dataset_info:
  features:
  - name: image
    dtype: image
  - name: internal_id
    dtype: string
  - name: prompt
    dtype: string
  - name: url
    dtype: string
  - name: annotation
    struct:
    - name: symmetry
      dtype: int64
      range: [-1, 1]
    - name: richness
      dtype: int64
      range: [-2, 2]
    - name: color aesthetic
      dtype: int64
      range: [-1, 1]
    - name: detail realism
      dtype: int64
      range: [-3, 1]
    - name: safety
      dtype: int64
      range: [-3, 1]
    - name: body
      dtype: int64
      range: [-4, 1]
    - name: lighting aesthetic
      dtype: int64
      range: [-1, 2]
    - name: lighting distinction
      dtype: int64
      range: [-1, 2]
    - name: background
      dtype: int64
      range: [-1, 2]
    - name: emotion
      dtype: int64
      range: [-2, 2]
    - name: main object
      dtype: int64
      range: [-1, 1]
    - name: color brightness
      dtype: int64
      range: [-1, 1]
    - name: face
      dtype: int64
      range: [-3, 2]
    - name: hands
      dtype: int64
      range: [-4, 1]
    - name: clarity
      dtype: int64
      range: [-2, 2]
    - name: detail refinement
      dtype: int64
      range: [-4, 2]
    - name: unsafe type
      dtype: int64
      range: [0, 3]
    - name: object pairing
      dtype: int64
      range: [-1, 1]
  - name: meta_result
    sequence:
      dtype: int64
  - name: meta_mask
    sequence:
      dtype: int64
  config_name: default
  splits:
  - name: train
    num_examples: 40743
---
# VisionRewardDB-Image
## Introduction
VisionRewardDB-Image is a comprehensive dataset designed to train VisionReward-Image models, providing detailed aesthetic annotations across 18 aspects. The dataset aims to enhance the assessment and understanding of visual aesthetics and quality. 🌟✨
For more details, please refer to the [**GitHub Repository**](https://github.com/THUDM/VisionReward). 🔍📚
## Annotation Detail
Each image in the dataset is annotated with the following attributes:
| Dimension | Attributes |
|-----------|------------|
| Composition | Symmetry; Object pairing; Main object; Richness; Background |
| Quality | Clarity; Color Brightness; Color Aesthetic; Lighting Distinction; Lighting Aesthetic |
| Fidelity | Detail realism; Detail refinement; Body; Face; Hands |
| Safety & Emotion | Emotion; Safety |
### Example: Scene Richness (richness)
- **2:** Very rich
- **1:** Rich
- **0:** Normal
- **-1:** Monotonous
- **-2:** Empty
For more detailed annotation guidelines (such as the meanings of different scores and the annotation rules), please refer to:
- [annotation_detail](https://flame-spaghetti-eb9.notion.site/VisionReward-Image-Annotation-Detail-196a0162280e80ef8359c38e9e41247e)
- [annotation_detail_zh](https://flame-spaghetti-eb9.notion.site/VisionReward-Image-195a0162280e8044bcb4ec48d000409c)
## Additional Feature Detail
The dataset includes three special features: `annotation`, `meta_result`, and `meta_mask`.
### Annotation
The `annotation` feature contains scores across 18 different dimensions of image assessment, with each dimension having its own scoring criteria as detailed above.
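As a quick orientation, here is a minimal sketch of loading the dataset with 🤗 `datasets` and reading one record's annotation scores. Note that the Hub id `THUDM/VisionRewardDB-Image` is an assumption based on the organization name; adjust it if the actual id differs.

```python
# Minimal sketch: load the dataset and inspect one record's annotation.
# NOTE: the Hub id below is an assumption; replace it if it differs.
from datasets import load_dataset

ds = load_dataset("THUDM/VisionRewardDB-Image", split="train")

record = ds[0]
print(record["prompt"])
for dimension, score in record["annotation"].items():
    print(f"{dimension}: {score}")  # e.g. "richness: 1"
```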
### Meta Result
The `meta_result` feature transforms multi-choice questions into a series of binary judgments. For example, for the `richness` dimension:
| Score | Is the image very rich? | Is the image rich? | Is the image not monotonous? | Is the image not empty? |
|-------|------------------------|-------------------|---------------------------|----------------------|
| 2 | 1 | 1 | 1 | 1 |
| 1 | 0 | 1 | 1 | 1 |
| 0 | 0 | 0 | 1 | 1 |
| -1 | 0 | 0 | 0 | 1 |
| -2 | 0 | 0 | 0 | 0 |
Each element in the binary array represents a yes/no answer to a specific aspect of the assessment. For detailed questions corresponding to these binary judgments, please refer to the `meta_qa_en.txt` file.
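Because the judgments are cumulative, the transform in the table above can be sketched as a simple threshold check. This is an illustration of the mapping, not the repository's code; each dimension's threshold list follows its score range.

```python
# Illustrative sketch of the score-to-binary transform shown above.
# Each question is answered "yes" (1) iff the score reaches its threshold.
def score_to_binary(score: int, thresholds: list[int]) -> list[int]:
    return [1 if score >= t else 0 for t in thresholds]

# Richness (scores -2..2): thresholds for "very rich?", "rich?",
# "not monotonous?", "not empty?".
for s in (2, 1, 0, -1, -2):
    print(s, score_to_binary(s, [2, 1, 0, -1]))
# 2 -> [1, 1, 1, 1], 1 -> [0, 1, 1, 1], ..., -2 -> [0, 0, 0, 0]
```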
### Meta Mask
The `meta_mask` feature is used for balanced sampling during model training:
- Elements with value 1 indicate that the corresponding binary judgment was used in training
- Elements with value 0 indicate that the corresponding binary judgment was ignored during training
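Selecting only the judgments that `meta_mask` marks as used might look like the sketch below; the field names are those described above, and the pairing logic is an assumption about how a consumer would apply the mask.

```python
# Sketch: keep only the (question_index, answer) pairs selected by meta_mask.
def masked_judgments(record: dict) -> list[tuple[int, int]]:
    return [
        (i, answer)
        for i, (answer, keep) in enumerate(
            zip(record["meta_result"], record["meta_mask"])
        )
        if keep == 1
    ]

# Usage: masked_judgments(ds[0]) yields the balanced subset of binary
# QA targets actually seen during training.
```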
## Data Processing
We provide `extract.py` for processing the dataset into JSONL format. The script can optionally extract the balanced positive/negative QA pairs used in VisionReward training by processing the `meta_result` and `meta_mask` fields.
```bash
python extract.py [--save_imgs] [--process_qa]
```
## Citation Information
```
@misc{xu2024visionrewardfinegrainedmultidimensionalhuman,
title={VisionReward: Fine-Grained Multi-Dimensional Human Preference Learning for Image and Video Generation},
author={Jiazheng Xu and Yu Huang and Jiale Cheng and Yuanming Yang and Jiajun Xu and Yuan Wang and Wenbo Duan and Shen Yang and Qunlin Jin and Shurun Li and Jiayan Teng and Zhuoyi Yang and Wendi Zheng and Xiao Liu and Ming Ding and Xiaohan Zhang and Xiaotao Gu and Shiyu Huang and Minlie Huang and Jie Tang and Yuxiao Dong},
year={2024},
eprint={2412.21059},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2412.21059},
}
```