Datasets

Modalities: Image, Text
Formats: parquet
Libraries: Datasets, Dask

CodeZzz committed · Commit ab3806f · 1 Parent(s): 64361fd
Files changed (5)
  1. .gitattributes +1 -0
  2. README.md +38 -122
  3. annotation.xlsx +0 -3
  4. annotation_ch.xlsx +0 -3
  5. extract.py +105 -0
.gitattributes CHANGED
@@ -60,3 +60,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.xlsx filter=lfs diff=lfs merge=lfs -text
 annotation.xlsx filter=lfs diff=lfs merge=lfs -text
 annotation_ch.xlsx filter=lfs diff=lfs merge=lfs -text
+annotation.jsonl filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -62,129 +62,38 @@ This dataset contains aesthetic annotations for images. The annotations cover 18
 
 
 ## Annotation Details
-For more detailed annotation guidelines, please refer to:
-- annotation_ch.xlsx(Chinese)
-- annotation.xlsx(English)
-<!-- - [English Documentation (Google Docs)](your_google_docs_link_here) -->
 
 Each image in the dataset is annotated with the following attributes:
 
-### 1. Overall Symmetry (adjective)
-- 1: Symmetric
-- 0: Normal
-- -1: Asymmetric
-
-### 2. Object Composition (collocation)
-- 1: Harmonious
-- 0: Normal
-- -1: Disharmonious
-
-### 3. Main Object Position (place)
-- 1: Prominent
-- 0: Normal
-- -1: Not prominent
-
-### 4. Scene Richness (richness)
-- 2: Very rich
-- 1: Rich
-- 0: Normal
-- -1: Monotonous
-- -2: Empty
-
-### 5. Background Quality (background)
-- 2: Beautiful
-- 1: Somewhat beautiful
-- 0: Normal
-- -1: No background
-
-### 6. Overall Clarity (sharpness)
-- 2: Very clear
-- 1: Clear
-- 0: Normal
-- -1: Blurry
-- -2: Completely blurry
-
-### 7. Brightness (color)
-- 1: Bright
-- 0: Normal
-- -1: Dark
-
-### 8. Color Aesthetics (color_aes)
-- 1: Beautiful colors
-- 0: Normal colors
-- -1: Ugly colors
-
-### 9. Environmental Light and Shadow Prominence (shadow_degree)
-- 2: Very prominent
-- 1: Prominent
-- 0: Normal
-- -1: No light and shadow
-
-### 10. Light and Shadow Aesthetics (shadow_aes)
-- 2: Very beautiful
-- 1: Beautiful
-- 0: Normal
-- -1: No light and shadow
-
-### 11. Emotional Response (emotion)
-- 2: Very positive
-- 1: Positive
-- 0: Normal
-- -1: Negative
-- -2: Very negative
-
-### 12. Detail Refinement (detail_fineness)
-- 2: Very refined
-- 1: Refined
-- 0: Normal
-- -1: Rough
-- -2: Very rough
-- -3: Hard to recognize
-- -4: Fragmented
-
-### 13. Detail Authenticity (detail_facticity)
-- 1: Authentic
-- 0: Neutral
-- -1: Inauthentic
-- -2: Very inauthentic
-- -3: Severely inauthentic
-
-### 14. Human Body Accuracy (body_correctness)
-- 1: No errors
-- 0: Neutral
-- -1: Has errors
-- -2: Has obvious errors
-- -3: Has severe errors
-- -4: No human body
-
-### 15. Face Quality (face)
-- 2: Very beautiful
-- 1: Beautiful
-- 0: Normal
-- -1: Has errors
-- -2: Has severe errors
-- -3: No face
-
-### 16. Hand Quality (hand)
-- 1: Perfect
-- 0: Basically correct
-- -1: Minor errors
-- -2: Obvious errors
-- -3: Severe errors
-- -4: No hands
-
-### 17. Safety Rating (safe)
-- 1: Safe
-- 0: Neutral
-- -1: Potentially harmful
-- -2: Harmful
-- -3: Very harmful
-
-### 18. Harm Type (harm)
-- 3: Adult content
-- 2: Horror
-- 1: Other
-- 0: Harmless
 
 
 ## Additional Feature Details
@@ -202,9 +111,16 @@ The `meta_result` feature transforms multi-choice questions into a series of bin
 - Score -1 (Monotonous) corresponds to [0,0,0,1]
 - Score -2 (Empty) corresponds to [0,0,0,0]
 
-Each element in the binary array represents a yes/no answer to a specific aspect of the assessment. For detailed questions corresponding to these binary judgments, please refer to the meta_qa_en.txt file.
 
 ### Meta Mask
 The `meta_mask` feature is used for balanced sampling during model training:
 - Elements with value 1 indicate that the corresponding binary judgment was used in training
-- Elements with value 0 indicate that the corresponding binary judgment was ignored during training
 
 
 ## Annotation Details
 
 Each image in the dataset is annotated with the following attributes:
 
+1. **Overall Symmetry (adjective)**
+2. **Object Composition (collocation)**
+3. **Main Object Position (place)**
+4. **Scene Richness (richness)**
+5. **Background Quality (background)**
+6. **Overall Clarity (sharpness)**
+7. **Brightness (color)**
+8. **Color Aesthetics (color_aes)**
+9. **Environmental Light and Shadow Prominence (shadow_degree)**
+10. **Light and Shadow Aesthetics (shadow_aes)**
+11. **Emotional Response (emotion)**
+12. **Detail Refinement (detail_fineness)**
+13. **Detail Authenticity (detail_facticity)**
+14. **Human Body Accuracy (body_correctness)**
+15. **Face Quality (face)**
+16. **Hand Quality (hand)**
+17. **Safety Rating (safe)**
+18. **Harm Type (harm)**
+
+### Example: Scene Richness (richness)
+- **2:** Very rich
+- **1:** Rich
+- **0:** Normal
+- **-1:** Monotonous
+- **-2:** Empty
+
+For more detailed annotation guidelines, please refer to:
+- [annotation_details](https://www.notion.so/VisionReward-Image-Annotation-Details-196a0162280e80ef8359c38e9e41247e?pvs=4)
+- [annotation_details_ch](https://www.notion.so/VisionReward-Image-195a0162280e8044bcb4ec48d000409c?pvs=4)
 
 
 ## Additional Feature Details
 - Score -1 (Monotonous) corresponds to [0,0,0,1]
 - Score -2 (Empty) corresponds to [0,0,0,0]
 
+Each element in the binary array represents a yes/no answer to a specific aspect of the assessment. For detailed questions corresponding to these binary judgments, please refer to the `meta_qa_en.txt` file.
 
 ### Meta Mask
 The `meta_mask` feature is used for balanced sampling during model training:
 - Elements with value 1 indicate that the corresponding binary judgment was used in training
+- Elements with value 0 indicate that the corresponding binary judgment was ignored during training
+
+## Data Processing
+
+We provide `extract.py` for converting the dataset to JSONL format. The script can optionally extract the balanced positive/negative QA pairs used in VisionReward training by processing the `meta_result` and `meta_mask` fields.
+
+```bash
+python extract.py [--save_imgs] [--process_qa]
+```
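The score-to-binary mapping documented for Scene Richness can be reproduced with a small helper. This is a sketch, not the official mapping: it assumes a thermometer-style encoding in which element i answers "is the score at least the i-th threshold?", which is consistent with the two examples given in the README ([0,0,0,1] for -1, [0,0,0,0] for -2).

```python
def score_to_meta_result(score, thresholds=(2, 1, 0, -1)):
    """Hypothetical thermometer-style encoding of a multi-choice score.

    Element i is 1 if the score reaches thresholds[i]. For Scene Richness
    this reproduces the two documented examples:
    -1 (Monotonous) -> [0, 0, 0, 1] and -2 (Empty) -> [0, 0, 0, 0].
    """
    return [1 if score >= t else 0 for t in thresholds]

# The full Scene Richness scale under this assumption:
for s in (2, 1, 0, -1, -2):
    print(s, score_to_meta_result(s))
```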
annotation.xlsx DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:070e5b9ebcafb1f69c1304fc113892b6a9013f283f899045f35f0c8790baed84
-size 28783535
annotation_ch.xlsx DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:2b33bd01fcfbb980e687a8204b360ba50dd5160b57cdfb5f80764ebaf3a03e9a
-size 28783047
extract.py ADDED
@@ -0,0 +1,105 @@
+import json
+import os
+import io
+import logging
+import argparse
+from PIL import Image
+from datasets import Dataset
+
+# Configure logging for detailed output
+logging.basicConfig(level=logging.INFO)
+logger = logging.getLogger(__name__)
+
+def load_questions_from_meta_qa(meta_qa_file):
+    with open(meta_qa_file, "r") as f:
+        questions = [line.strip() for line in f if line.strip()]
+    return questions
+
+def process_parquet_files(data_dir, output_jsonl, meta_qa_file=None, output_imgs=None, process_qa=False):
+    """
+    Process Parquet files to generate a JSONL file with optional image export and QA list creation.
+
+    Args:
+        data_dir (str): Directory containing Parquet files.
+        output_jsonl (str): Output JSONL file path.
+        meta_qa_file (str, optional): Path to the meta_qa_en.txt file for QA list creation.
+        output_imgs (str, optional): Directory path to save images. If None, images are not saved.
+        process_qa (bool): Whether to process and include QA pairs in the output.
+
+    Returns:
+        None
+    """
+    if output_imgs and not os.path.exists(output_imgs):
+        os.makedirs(output_imgs)
+
+    # Load questions only if QA processing is enabled
+    questions = None
+    if process_qa and meta_qa_file:
+        questions = load_questions_from_meta_qa(meta_qa_file)
+
+    jsonl_data = []
+
+    parquet_files = [os.path.join(data_dir, f) for f in os.listdir(data_dir) if f.endswith(".parquet")]
+
+    for parquet_file in parquet_files:
+        dataset = Dataset.from_parquet(parquet_file)
+
+        for row in dataset:
+            json_item = {
+                "internal_id": row["internal_id"],
+                "url": row["url"],
+                "annotation": row["annotation"],
+                "meta_result": row["meta_result"],
+                "meta_mask": row["meta_mask"],
+            }
+
+            # Optionally save images
+            if output_imgs:
+                img_data = row["image"]
+                img_path = os.path.join(output_imgs, f"{row['internal_id']}.jpg")
+                try:
+                    if isinstance(img_data, Image.Image):
+                        # Image already decoded by the datasets Image feature
+                        img_data.convert("RGB").save(img_path)
+                    else:
+                        # Raw bytes stored in the Parquet file
+                        with open(img_path, "wb") as img_file:
+                            img_file.write(img_data)
+                    json_item["image_path"] = img_path
+                except Exception as e:
+                    logger.error(f"Error saving image for internal_id {row['internal_id']}: {e}")
+
+            # Optionally process QA pairs
+            if process_qa and questions:
+                qa_list = []
+                meta_result = row["meta_result"]
+                meta_mask = row["meta_mask"]
+                for idx, mask in enumerate(meta_mask):
+                    if mask == 1:  # Keep only the judgments selected for balanced sampling
+                        question = questions[idx]
+                        answer = "yes" if meta_result[idx] == 1 else "no"
+                        qa_list.append({"question": question, "answer": answer})
+                json_item["qa_list"] = qa_list
+
+            jsonl_data.append(json_item)
+
+    with open(output_jsonl, "w") as outfile:
+        for json_item in jsonl_data:
+            outfile.write(json.dumps(json_item) + "\n")
+    logger.info(f"Finished writing JSONL file with {len(jsonl_data)} items.")
+
+if __name__ == "__main__":
+    parser = argparse.ArgumentParser(description="Convert VisionReward Parquet dataset files to JSONL format with optional image extraction and QA list generation.")
+    parser.add_argument("--data_dir", type=str, default="data", help="Directory containing Parquet files.")
+    parser.add_argument("--output_jsonl", type=str, default="annotation.jsonl", help="Path to the output JSONL file.")
+    parser.add_argument("--meta_qa_file", type=str, default="meta_qa_en.txt", help="Optional: Path to the meta_qa_en.txt file for QA list generation.")
+    parser.add_argument("--save_imgs", action="store_true", help="Optional: Whether to save images.")
+    parser.add_argument("--process_qa", action="store_true", help="Optional: Process and include QA pairs in the output.")
+    args = parser.parse_args()
+
+    output_imgs = "imgs" if args.save_imgs else None
+
+    process_parquet_files(
+        data_dir=args.data_dir,
+        output_jsonl=args.output_jsonl,
+        meta_qa_file=args.meta_qa_file,
+        output_imgs=output_imgs,
+        process_qa=args.process_qa,
+    )
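Once `extract.py` has written `annotation.jsonl`, each line is a standalone JSON object. A minimal consumer might look like the sketch below; the field names match those written by the script, and `count_active_judgments` is a hypothetical helper added here for illustration.

```python
import json

def read_annotations(jsonl_path):
    """Yield one annotation record per line of a JSONL file."""
    with open(jsonl_path, "r") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

def count_active_judgments(record):
    """Count the binary judgments selected for training (meta_mask == 1)."""
    return sum(record["meta_mask"])
```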