Update README.md

---
license: apache-2.0

dataset_info:
  features:
  - name: image
    dtype: image
  - name: internal_id
    dtype: string
  - name: prompt
    dtype: string
  - name: url
    dtype: string
  - name: annotation
    struct:
    - name: symmetry
      dtype: int64
      range: [-1,1]
    - name: richness
      dtype: int64
      range: [-2,2]
    - name: color aesthetic
      dtype: int64
      range: [-1,1]
    - name: detail realism
      dtype: int64
      range: [-3,1]
    - name: safety
      dtype: int64
      range: [-3,1]
    - name: body
      dtype: int64
      range: [-4,1]
    - name: lighting aesthetic
      dtype: int64
      range: [-1,2]
    - name: lighting distinction
      dtype: int64
      range: [-1,2]
    - name: background
      dtype: int64
      range: [-1,2]
    - name: emotion
      dtype: int64
      range: [-2,2]
    - name: main object
      dtype: int64
      range: [-1,1]
    - name: color brightness
      dtype: int64
      range: [-1,1]
    - name: face
      dtype: int64
      range: [-3,2]
    - name: hands
      dtype: int64
      range: [-4,1]
    - name: clarity
      dtype: int64
      range: [-2,2]
    - name: detail refinement
      dtype: int64
      range: [-4,2]
    - name: unsafe type
      dtype: int64
      range: [0,3]
    - name: object pairing
      dtype: int64
      range: [-1,1]
  - name: meta_result
    sequence:
      dtype: int64
  - name: meta_mask
    sequence:
      dtype: int64
  config_name: default
  splits:
  - name: train
    num_examples: 40743
---
# VisionRewardDB-Image

## Introduction

VisionRewardDB-Image is a comprehensive dataset designed to train VisionReward-Image models, providing detailed aesthetic annotations across 18 aspects. The dataset aims to enhance the assessment and understanding of visual aesthetics and quality.

For more detail, please refer to the [**Github Repository**](https://github.com/THUDM/VisionReward).

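To get a quick feel for the schema above, here is a minimal loading sketch using the Hugging Face `datasets` library. The repository ID `THUDM/VisionRewardDB-Image` is an assumption based on the project naming and may need to be adjusted to the actual hub path.

```python
# Minimal loading sketch; the repo ID is an assumption and may differ from the actual hub path.
from datasets import load_dataset

ds = load_dataset("THUDM/VisionRewardDB-Image", split="train")
print(ds)                     # features: image, internal_id, prompt, url, annotation, meta_result, meta_mask

example = ds[0]
print(example["prompt"])      # text prompt associated with the image
print(example["annotation"])  # dict of per-dimension integer scores
```
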
## Annotation Detail

Each image in the dataset is annotated with the following attributes:

<table border="1" style="border-collapse: collapse; width: 100%;">
  <tr>
    <th style="padding: 8px; width: 30%;">Dimension</th>
    <th style="padding: 8px; width: 70%;">Attributes</th>
  </tr>
  <tr>
    <td style="padding: 8px;">Composition</td>
    <td style="padding: 8px;">Symmetry; Object pairing; Main object; Richness; Background</td>
  </tr>
  <tr>
    <td style="padding: 8px;">Quality</td>
    <td style="padding: 8px;">Clarity; Color Brightness; Color Aesthetic; Lighting Distinction; Lighting Aesthetic</td>
  </tr>
  <tr>
    <td style="padding: 8px;">Fidelity</td>
    <td style="padding: 8px;">Detail realism; Detail refinement; Body; Face; Hands</td>
  </tr>
  <tr>
    <td style="padding: 8px;">Safety &amp; Emotion</td>
    <td style="padding: 8px;">Emotion; Safety</td>
  </tr>
</table>

### Example: Scene Richness (richness)
- **2:** Very rich
- **1:** Rich
- **0:** Normal
- **-1:** Monotonous
- **-2:** Empty

For more detailed annotation guidelines (such as the meanings of the different scores and the annotation rules), please refer to:
- [annotation_detail](https://flame-spaghetti-eb9.notion.site/VisionReward-Image-Annotation-Detail-196a0162280e80ef8359c38e9e41247e)
- [annotation_detail_ch](https://flame-spaghetti-eb9.notion.site/VisionReward-Image-195a0162280e8044bcb4ec48d000409c)

## Additional Feature Detail
The dataset includes three special features: `annotation`, `meta_result`, and `meta_mask`.

### Annotation
The `annotation` feature contains scores across 18 dimensions of image assessment; each dimension has its own scoring criteria, as detailed above.

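For illustration, the sketch below reads one dimension from the `annotation` struct and maps it to the richness labels listed above; `example` is the record loaded in the earlier sketch, and the label mapping is taken directly from that list.

```python
# Illustrative: decode one annotation dimension into its human-readable label.
RICHNESS_LABELS = {2: "Very rich", 1: "Rich", 0: "Normal", -1: "Monotonous", -2: "Empty"}

annotation = example["annotation"]        # `example` comes from the loading sketch above
richness_score = annotation["richness"]
print(richness_score, RICHNESS_LABELS[richness_score])
```
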
### Meta Result
The `meta_result` feature transforms each multi-choice question into a series of binary judgments. For example, for the `richness` dimension:

| Score | Is the image very rich? | Is the image rich? | Is the image not monotonous? | Is the image not empty? |
|-------|-------------------------|--------------------|------------------------------|--------------------------|
| 2     | 1                       | 1                  | 1                            | 1                        |
| 1     | 0                       | 1                  | 1                            | 1                        |
| 0     | 0                       | 0                  | 1                            | 1                        |
| -1    | 0                       | 0                  | 0                            | 1                        |
| -2    | 0                       | 0                  | 0                            | 0                        |

Each element in the binary array represents a yes/no answer to one specific aspect of the assessment. For the full list of questions corresponding to these binary judgments, please refer to the `meta_qa_en.txt` file.
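
The table above is simply a threshold encoding: each binary judgment asks whether the score reaches a given cut-off. Below is an illustrative sketch of that mapping for `richness` (thresholds are read off the table; this is not the exact code used to build the dataset).

```python
# Sketch: expand a richness score into the four binary judgments shown above.
# Each judgment is 1 when the score reaches that question's threshold.
RICHNESS_THRESHOLDS = [2, 1, 0, -1]  # very rich / rich / not monotonous / not empty

def richness_to_binary(score: int) -> list[int]:
    return [1 if score >= t else 0 for t in RICHNESS_THRESHOLDS]

assert richness_to_binary(2) == [1, 1, 1, 1]
assert richness_to_binary(0) == [0, 0, 1, 1]
assert richness_to_binary(-2) == [0, 0, 0, 0]
```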

### Meta Mask
The `meta_mask` feature is used for balanced sampling during model training:
- Elements with value 1 indicate that the corresponding binary judgment was used in training.
- Elements with value 0 indicate that the corresponding binary judgment was ignored during training.

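Putting `meta_result` and `meta_mask` together, a training pipeline keeps only the judgments whose mask value is 1. A minimal sketch, reusing `example` from the loading sketch and assuming that `meta_result[i]` answers the i-th question in `meta_qa_en.txt` (the index pairing is an assumption):

```python
# Sketch: keep only the binary judgments whose mask value is 1 for one example.
# Assumes meta_result[i] answers the i-th question in meta_qa_en.txt.
with open("meta_qa_en.txt", encoding="utf-8") as f:
    questions = [line.strip() for line in f if line.strip()]

results, mask = example["meta_result"], example["meta_mask"]
training_pairs = [(q, a) for q, a, used in zip(questions, results, mask) if used == 1]
print(f"{len(training_pairs)} of {len(results)} judgments used for this example")
```
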
## Data Processing

We provide `extract.py` for processing the dataset into JSONL format. The script can optionally extract the balanced positive/negative QA pairs used in VisionReward training by processing the `meta_result` and `meta_mask` fields.

```bash
python extract.py [--save_imgs] [--process_qa]
```
|
160 |
+
|
161 |
+
## Citation Information
|
162 |
+
```
|
163 |
+
@misc{xu2024visionrewardfinegrainedmultidimensionalhuman,
|
164 |
+
title={VisionReward: Fine-Grained Multi-Dimensional Human Preference Learning for Image and Video Generation},
|
165 |
+
author={Jiazheng Xu and Yu Huang and Jiale Cheng and Yuanming Yang and Jiajun Xu and Yuan Wang and Wenbo Duan and Shen Yang and Qunlin Jin and Shurun Li and Jiayan Teng and Zhuoyi Yang and Wendi Zheng and Xiao Liu and Ming Ding and Xiaohan Zhang and Xiaotao Gu and Shiyu Huang and Minlie Huang and Jie Tang and Yuxiao Dong},
|
166 |
+
year={2024},
|
167 |
+
eprint={2412.21059},
|
168 |
+
archivePrefix={arXiv},
|
169 |
+
primaryClass={cs.CV},
|
170 |
+
url={https://arxiv.org/abs/2412.21059},
|
171 |
+
}
|
172 |
```
|