# SCANnotateDataset

CAD model and pose annotations for objects in the ScanNet dataset. Annotations are automatically generated
using [scannotate](https://github.com/stefan-ainetter/SCANnotate) and [HOC-Search](https://arxiv.org/abs/2309.06107).
The annotations were checked in several verification passes, with manual re-annotation of outliers to ensure that
the final annotations are of high quality.

<p align="center">
<img src="https://github.com/stefan-ainetter/SCANnotateDataset/raw/master/figures/example_annotation.png" width="100%"/>
</p>

## Details about Annotations

For the public [ScanNet dataset](http://www.scan-net.org/), we provide:

* `18617` CAD model annotations for objects in the ScanNet dataset (30% more annotated objects compared to [Scan2CAD](https://github.com/skanti/Scan2CAD))
* Accurate 9D pose for each CAD model
* 3D semantic object instance segmentation corresponding to the annotated objects
* Automatically generated symmetry tags for the ShapeNet CAD models of all categories present in ScanNet
* Extracted view parameters (selected RGB-D images and camera poses) for each object, which can be used for CAD model retrieval via render-and-compare

## CAD Model and Pose Annotations

Our annotations for ScanNet are provided as `.pkl` files, which contain additional information about the
annotated objects, e.g. the view parameters for render-and-compare and the corresponding 3D instance
segmentation of the point cloud data.
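
These files can be loaded with Python's `pickle` module. A minimal sketch for inspecting one of them (the file path below is hypothetical; adjust it to the layout of the extracted annotations):

```python
import pickle

# Hypothetical path; adapt it to the actual layout of the extracted annotations.
with open('/data/ScanNet/annotations/scene0495_00/scene0495_00.pkl', 'rb') as f:
    annotation = pickle.load(f)

# Inspect the structure before relying on specific fields.
print(type(annotation))
```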

For convenience, we additionally provide the annotations as a `.json` file using the Scan2CAD data format.

**Note** that in order to use any of the provided annotations correctly, you have to preprocess the ShapeNet
CAD models (center and scale-normalize all CAD models) as explained below. This generates clean CAD models
which are compatible with our annotations.

### Preliminaries: Download ShapeNet and ScanNet examples

* Download the ScanNet example scene [here](https://files.icg.tugraz.at/f/5b1b756a78bb457aafb5/?dl=1). Extract the data
and copy it to `/data/ScanNet/scans`. Note that by downloading this example data
you agree to the [ScanNet Terms of Use](https://kaldir.vc.in.tum.de/scannet/ScanNet_TOS.pdf).
To download the full ScanNet dataset, follow the instructions on the [ScanNet GitHub page](https://github.com/ScanNet/ScanNet).

* Download the [ShapeNetV2](https://shapenet.org/) dataset by signing up
on the website. Extract `ShapeNetCore.v2.zip` to `/data/ShapeNet`.

* Download our annotations for the full ScanNet dataset
[here](https://files.icg.tugraz.at/f/249aa5c3418f4c1897ee/?dl=1). Extract the data and copy it to
`/data/ScanNet/annotations`.

#### Preprocessing ShapeNet CAD Models

To center and scale-normalize the downloaded ShapeNet CAD models, run:
```bash
bash run_shapenet_prepro.sh gpu=0
```
The `gpu` argument specifies which GPU should be used for processing; by default, the code is executed on the CPU.

After the above steps, the `/data` folder should contain the following directories:
```text
- data
    - ScanNet
        - annotations
            - scene0495_00
            - ...
        - scans
            - scene0495_00
    - ShapeNet
        - ShapeNet_preprocessed
        - ShapeNetCore.v2
```

#### Installation Requirements and Setup

* Clone this repository. Install PyTorch3D by following the instructions in the
[official installation guide](https://github.com/facebookresearch/pytorch3d/blob/main/INSTALL.md).

After installing PyTorch3D, run the following command to install the remaining dependencies:
```bash
pip install scikit-image matplotlib imageio plotly opencv-python open3d trimesh==3.10.2
```

### Annotations in Scan2CAD data format

Annotations in the Scan2CAD format are available [here](https://files.icg.tugraz.at/f/aaaf656e64014745af15/?dl=1).
The file `full_annotions_scannotate.json` contains `1513` entries; the fields of one entry are described below:
```javascript
[{
  id_scan : "scannet scene id",
  trs : { // <-- transformation from scan space to world space
    translation : [tx, ty, tz], // <-- translation vector
    rotation : [qw, qx, qy, qz], // <-- rotation quaternion
    scale : [sx, sy, sz], // <-- scale vector
  },
  aligned_models : [{ // <-- list of aligned models for this scene
    sym : "(__SYM_NONE, __SYM_ROTATE_UP_2, __SYM_ROTATE_UP_4 or __SYM_ROTATE_UP_INF)", // <-- symmetry property; only one applies
    catid_cad : "shapenet category id",
    id_cad : "shapenet model id",
    category_name : "", // e.g. chair
    trs : { // <-- transformation from CAD space to world space
      translation : [tx, ty, tz], // <-- translation vector
      rotation : [qw, qx, qy, qz], // <-- rotation quaternion
      scale : [sx, sy, sz] // <-- scale vector
    },
    keypoints_scan : {}, // no keypoints in our annotations
    keypoints_cad : {}, // no keypoints in our annotations
    scannet_category_label : "", // e.g. chair; this label is taken from the original ScanNet 3D object instance segmentation
    object_id : "", // unique id for each annotated object in the scene
    is_in_scan2cad : // <-- True if CAD annotation is available in Scan2CAD, else False
  }]
},
{ ... },
{ ... },
]
```
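
As a sketch of how these entries can be used (assuming the usual Scan2CAD convention of composing translation, rotation, and scale into one transformation; `trs_to_matrix` below is an illustrative helper, not part of the dataset tooling):

```python
import json
import numpy as np
from scipy.spatial.transform import Rotation

def trs_to_matrix(trs):
    """Build a 4x4 transformation matrix from a 'trs' entry
    (translation, rotation as [qw, qx, qy, qz], per-axis scale)."""
    M = np.eye(4)
    qw, qx, qy, qz = trs['rotation']
    R = Rotation.from_quat([qx, qy, qz, qw]).as_matrix()  # scipy expects [x, y, z, w]
    M[:3, :3] = R * np.asarray(trs['scale'])  # equivalent to R @ diag(scale)
    M[:3, 3] = trs['translation']
    return M

with open('full_annotions_scannotate.json') as f:
    entries = json.load(f)

model = entries[0]['aligned_models'][0]
cad_to_world = trs_to_matrix(model['trs'])  # maps CAD-space points to world space
```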

### Visualization of Annotations

Use the following command to visualize the annotations:
```bash
bash visualize_annotations.sh
```

## ShapeNet Object Symmetry Annotations

Automatically generated symmetry tags for all CAD models of the considered categories are available for download
[here](https://files.icg.tugraz.at/f/58469ba8edbd419abb6d/?dl=1). The symmetry tags are saved in the following format:
```javascript
[{
  cad_symmetry_dict : { // <-- symmetry tags for CAD models
    synset_id : { // shapenet category id
      category_name : "", // e.g. chair
      synset_id : "",
      object_sym_dict : { // <-- dictionary mapping CAD model ids to their symmetry tags
        'id_cad' : 'symmetry_tag',
      },
    },
    {...}, // <-- one entry per synset_id
    {...},
  }
}]
```
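
A minimal lookup sketch, assuming the downloaded file deserializes into the structure shown above (the file name and the model id below are placeholders):

```python
import pickle

# Placeholder file name; use the actual name from the downloaded archive.
with open('shapenet_symmetry_tags.pkl', 'rb') as f:
    data = pickle.load(f)

sym_dict = data[0]['cad_symmetry_dict']
# '03001627' is the ShapeNet chair synset; '<id_cad>' stands in for a model id.
tag = sym_dict['03001627']['object_sym_dict']['<id_cad>']
print(tag)  # one of __SYM_NONE, __SYM_ROTATE_UP_2, __SYM_ROTATE_UP_4, __SYM_ROTATE_UP_INF
```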

To predict the symmetry tag for a given CAD model, we first render depth maps from 6 different views of the
preprocessed CAD model. We then rotate the object around the vertical axis by a specific angle (e.g. 180° to check for
`__SYM_ROTATE_UP_2`) and render the depth maps of the 6 views again. If the difference between the two sets of depth
renderings is below a certain threshold, we assume that the object is symmetric under the performed rotation.
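
The sketch below illustrates this check (it is not the actual pipeline code): it assumes a hypothetical `render_depth_maps(mesh, yaw_deg)` helper that renders the 6 fixed views of the mesh rotated by `yaw_deg` degrees around the vertical axis, and the threshold value is illustrative:

```python
import numpy as np

# Candidate rotations (degrees) and the tag they confirm, ordered from the
# strongest symmetry to the weakest.
CANDIDATES = [
    (list(range(30, 360, 30)), '__SYM_ROTATE_UP_INF'),  # approximates continuous symmetry
    ([90, 180, 270], '__SYM_ROTATE_UP_4'),
    ([180], '__SYM_ROTATE_UP_2'),
]

def predict_symmetry(mesh, render_depth_maps, threshold=0.01):
    """Predict a symmetry tag via depth render-and-compare."""
    reference = render_depth_maps(mesh, 0.0)
    for angles, tag in CANDIDATES:
        # The tag applies if every rotated rendering matches the reference.
        if all(np.abs(render_depth_maps(mesh, a) - reference).mean() < threshold
               for a in angles):
            return tag
    return '__SYM_NONE'
```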

<p align="center">
<img src="https://github.com/stefan-ainetter/SCANnotateDataset/raw/master/figures/example_symmetry_annotation.png" width="80%"/>
</p>

## Citation

To create these annotations, we used the CAD model retrieval pipeline from
[scannotate](https://github.com/stefan-ainetter/SCANnotate), but replaced the exhaustive
CAD retrieval stage with [HOC-Search](https://arxiv.org/abs/2309.06107).
If you use any of the provided code or data, please cite the following works:

Scannotate:
```bibtex
@inproceedings{ainetter2023automatically,
  title={Automatically Annotating Indoor Images with CAD Models via RGB-D Scans},
  author={Ainetter, Stefan and Stekovic, Sinisa and Fraundorfer, Friedrich and Lepetit, Vincent},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={3156--3164},
  year={2023}
}
```
HOC-Search:
```bibtex
@misc{ainetter2023hocsearch,
  title={HOC-Search: Efficient CAD Model and Pose Retrieval from RGB-D Scans},
  author={Stefan Ainetter and Sinisa Stekovic and Friedrich Fraundorfer and Vincent Lepetit},
  year={2023},
  eprint={2309.06107},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```