---
license: cc-by-4.0
task_categories:
- image-segmentation
- object-detection
task_ids:
- semantic-segmentation
- instance-segmentation
tags:
- automotive
- autonomous driving
- synthetic
- safe ai
- validation
- pedestrian detection
- 2d object-detection
- 3d object-detection
- semantic-segmentation
- instance-segmentation
pretty_name: VALERIE22
size_categories:
- 1K<n<10K
---
# VALERIE22 - A photorealistic, richly metadata annotated dataset of urban environments
<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/teaser_c.png">
## Dataset Description
- **Paper:** https://arxiv.org/abs/2308.09632
- **Point of Contact:** [email protected]
### Dataset Summary
The VALERIE22 dataset was generated with the VALERIE procedural tools pipeline (see image below), providing a photorealistic sensor simulation rendered from automatically synthesized scenes. The dataset provides a uniquely rich set of metadata that allows the extraction of specific scene and semantic features, such as pixel-accurate occlusion rates, positions in the scene, and distance and angle to the camera. This enables a wide range of tests on the data, and we hope it stimulates research on understanding the performance of DNNs.
<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/VALERIE_overview1.png">
Each sequence of the dataset contains two rendered images per scene. One is rendered with the default Blender tonemapping (the `png` folder), whereas the second is rendered with our photorealistic sensor simulation (the `png_distorted` folder; see hagn2022optimized). The image below shows the difference between the two methods.
<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/SensorSimulation.png">
Following are some example images showing the unique characteristics of the different sequences.
|Sequence0052|Sequence0054|Sequence0057|Sequence0058|
|:---:|:---:|:---:|:---:|
|<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/seq52_1.jpg" width="500">|<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/seq54_1.jpg" width="500">|<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/seq57_1.jpg" width="500">|<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/seq58_1.png" width="500">|
|Sequence0059|Sequence0060|Sequence0062|
|:---:|:---:|:---:|
|<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/seq59_1.jpg" width="500">|<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/seq60_1.jpg" width="500">|<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/seq62_1.jpg" width="500">|
### Supported Tasks
- pedestrian detection
- 2d object-detection
- 3d object-detection
- semantic-segmentation
- instance-segmentation
- ai-validation
## Dataset Structure
```
VALERIE22
├───intel_results_sequence_0050
│   ├───ground-truth
│   │   ├───2d-bounding-box_json
│   │   │   └───car-camera000-0000-{UUID}-0000.json
│   │   ├───3d-bounding-box_json
│   │   │   └───car-camera000-0000-{UUID}-0000.json
│   │   ├───class-id_png
│   │   │   └───car-camera000-0000-{UUID}-0000.png
│   │   ├───general-globally-per-frame-analysis_json
│   │   │   ├───car-camera000-0000-{UUID}-0000.json
│   │   │   └───car-camera000-0000-{UUID}-0000.csv
│   │   ├───semantic-group-segmentation_png
│   │   │   └───car-camera000-0000-{UUID}-0000.png
│   │   └───semantic-instance-segmentation_png
│   │       ├───car-camera000-0000-{UUID}-0000.png
│   │       └───car-camera000-0000-{UUID}-0000
│   │           └───{Entity-ID}
│   └───sensor
│       └───camera
│           └───left
│               ├───png
│               │   └───car-camera000-0000-{UUID}-0000.png
│               └───png_distorted
│                   └───car-camera000-0000-{UUID}-0000.png
├───intel_results_sequence_0052
├───intel_results_sequence_0054
├───intel_results_sequence_0057
├───intel_results_sequence_0058
├───intel_results_sequence_0059
├───intel_results_sequence_0060
└───intel_results_sequence_0062
```
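Since an image and its annotations share the same `car-camera000-0000-{UUID}-0000` file stem, the ground truth for a frame can be located directly from the frame's file name. Below is a minimal sketch that pairs each rendered frame with its 2D bounding-box JSON, again assuming a locally downloaded sequence; the JSON field layout is not documented in this card, so the example only loads each file and inspects its top-level structure:
```
import json
from pathlib import Path

# Placeholder: local path to one downloaded sequence
SEQUENCE_DIR = Path("VALERIE22/intel_results_sequence_0052")

image_dir = SEQUENCE_DIR / "sensor" / "camera" / "left" / "png_distorted"
bbox_dir = SEQUENCE_DIR / "ground-truth" / "2d-bounding-box_json"

for image_path in sorted(image_dir.glob("*.png")):
    annotation_path = bbox_dir / (image_path.stem + ".json")
    if not annotation_path.exists():
        continue
    with open(annotation_path) as f:
        boxes = json.load(f)
    # Print the frame name and a peek at the annotation structure
    preview = list(boxes)[:5] if isinstance(boxes, dict) else len(boxes)
    print(image_path.name, type(boxes).__name__, preview)
```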
### Data Splits
13476 images for training:
```
from datasets import load_dataset

dataset = load_dataset("Intel/VALERIE22", split="train")
```
8406 images for validation and test:
```
from datasets import load_dataset

dataset = load_dataset("Intel/VALERIE22", split="validation")
dataset = load_dataset("Intel/VALERIE22", split="test")
```
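A short usage sketch that loads all three splits and checks their sizes against the numbers above; the available columns can be inspected via `dataset.features`:
```
from datasets import load_dataset

splits = {name: load_dataset("Intel/VALERIE22", split=name)
          for name in ("train", "validation", "test")}
for name, ds in splits.items():
    print(name, len(ds))

# Inspect the feature schema of the training split
print(splits["train"].features)
```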
### Licensing Information
CC BY 4.0
## Grant Information
Generated within the project KI Absicherung with funding from the German Federal Ministry for Economic Affairs and Energy under grant number 19A19005M.
### Citation Information
Relevant publications:
```
@misc{grau2023valerie22,
title={VALERIE22 -- A photorealistic, richly metadata annotated dataset of urban environments},
author={Oliver Grau and Korbinian Hagn},
year={2023},
eprint={2308.09632},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
@inproceedings{hagn2022increasing,
title={Increasing pedestrian detection performance through weighting of detection impairing factors},
author={Hagn, Korbinian and Grau, Oliver},
booktitle={Proceedings of the 6th ACM Computer Science in Cars Symposium},
pages={1--10},
year={2022}
}
@inproceedings{hagn2022validation,
title={Validation of Pedestrian Detectors by Classification of Visual Detection Impairing Factors},
author={Hagn, Korbinian and Grau, Oliver},
booktitle={European Conference on Computer Vision},
pages={476--491},
year={2022},
organization={Springer}
}
@incollection{grau2022variational,
title={A variational deep synthesis approach for perception validation},
author={Grau, Oliver and Hagn, Korbinian and Syed Sha, Qutub},
booktitle={Deep Neural Networks and Data for Automated Driving: Robustness, Uncertainty Quantification, and Insights Towards Safety},
pages={359--381},
year={2022},
publisher={Springer International Publishing Cham}
}
@incollection{hagn2022optimized,
title={Optimized data synthesis for DNN training and validation by sensor artifact simulation},
author={Hagn, Korbinian and Grau, Oliver},
booktitle={Deep Neural Networks and Data for Automated Driving: Robustness, Uncertainty Quantification, and Insights Towards Safety},
pages={127--147},
year={2022},
publisher={Springer International Publishing Cham}
}
@inproceedings{syed2020dnn,
title={DNN analysis through synthetic data variation},
author={Syed Sha, Qutub and Grau, Oliver and Hagn, Korbinian},
booktitle={Proceedings of the 4th ACM Computer Science in Cars Symposium},
pages={1--10},
year={2020}
}
``` |