PVDECONV: POINT-VOXEL DECONVOLUTION FOR AUTOENCODING CAD
CONSTRUCTION IN 3D
Kseniya Cherenkova*†, Djamila Aouada*, Gleb Gusev†
*SnT, University of Luxembourg    †Artec3D
ABSTRACT
We propose a Point-Voxel DeConvolution (PVDeConv) mod-
ule for 3D data autoencoder. To demonstrate its efficiency we
learn to synthesize high-resolution point clouds of 10k points
that densely describe the underlying geometry of Computer
Aided Design (CAD) models. Scanning artifacts, such as pro-
trusions, missing parts, smoothed edges and holes, inevitably
appear in real 3D scans of fabricated CAD objects. Learning
the original CAD model construction from a 3D scan requires
a ground truth to be available together with the corresponding
3D scan of an object. To bridge this gap, we introduce a new
dedicated dataset, the CC3D, containing 50k+ pairs of CAD
models and their corresponding 3D meshes. This dataset is
used to learn a convolutional autoencoder for point clouds
sampled from the pairs of 3D scans and CAD models. The chal-
lenges of this new dataset are demonstrated in comparison
with other generative point cloud sampling models trained on
ShapeNet. The CC3D autoencoder is efficient with respect to
memory consumption and training time as compared to state-
of-the-art models for 3D data generation.
Index Terms — CC3D, point cloud autoencoder, CAD
model generation, Scan2CAD.
1. INTRODUCTION
Recently, deep learning (DL) for 3D data analysis has seen
a boost in successful and competitive solutions for segmen-
tation, detection and classification [1], and real-life applica-
tions, such as self-driving, robotics, medicine, and augmented
reality. In industrial manufacturing, 3D scanning of fabricated
parts is an essential step of product quality control, when the
3D scans of real objects are compared to the original Com-
puter Aided Design (CAD) models. While most consumer
solutions for 3D scanning are good enough for capturing
the general shape of an object, artifacts can be introduced
in the parts of the object that are physically inaccessible for
3D scanning, resulting in the loss of sharp features and fine
details.
This paper focuses on recovering scanning artifacts in an
autoencoding data-driven manner. In addition to presenting a
new point cloud autoencoder, we introduce a new 3D dataset,
referred to as CC3D , which stands for CAD Construction
Fig. 1 . Examples of CC3D data: From left to right, CAD
models, corresponding 3D scans, 10k input point clouds and
results of the proposed autoencoder.
in 3D . We further provide an analysis focused on real 3D
scanned data, keeping in mind real-world constraints, i.e.,
variability, complexity, artifacts, memory and speed require-
ments. The first two columns in Fig. 1 give some examples
from CC3D data; the CAD model and its 3D scanned version
in triangular mesh format. While the most recent existing
solutions [2, 3, 4, 5] for 3D data autoencoders mostly focus
on low-resolution configurations (approximately 2500 points),
we find it more beneficial for real data to experiment at
higher resolution, as this is what brings the important 3D
object details into the big-data learning perspective.
Several publicly available datasets related to CAD mod-
elling, such as ModelNet [6], ShapeNet [7], and ABC [8],
have been released in recent years. A summary of the features
they offer can be found in Table 1. These datasets have mainly
boosted research on deep learning on 3D point clouds.
Similarly, our CC3D dataset should support research ef-
forts in addressing real-world challenges. Indeed, this dataset
provides various 3D scanned objects, with their ground-truth
CAD models. The models collected in the CC3D dataset are
not restricted to any object category and/or complexity. 3D
scans offer challenging cases of missing data, smoothed geometry and fusion artefacts in the form of varying protrusions
and swept holes. Moreover, the resolution of 3D scans is typ-
ically high with more than 100k faces in the mesh.
In summary, the contributions of this paper include: (1)
A 3D dataset, CC3D, a collection of 50k+ aligned pairs of
meshes, a CAD model and its virtually 3D scanned coun-
terpart with corresponding scanning artifacts; (2) A CC3D
autoencoder architecture on 10k point clouds learned from
CC3D data; (3) A Point-Voxel DeConvolution (PVDeConv)
block for the decoder part of our model, combining point fea-
tures on coarse and fine levels of the data.
The remainder of the paper is organized as follows: Sec-
tion 2 reviews relevant state-of-the-art works in 3D data au-
toencoding. In Section 3 we give a brief overview of the
core components our work is built upon. Section 4 describes
the main contributions of this paper in more detail. In Sec-
tion 5 the results and comparison with related methods are
presented. Section 6 gives the conclusions.
2. RELATED WORK
The choice of DL architecture and 3D data representation is
usually defined by existing practices and available datasets
for learning [9]. Voxel-based representations have pioneered
3D data analysis, applying 3D Convolutional Neural Networks
(CNNs) directly on a regular voxel grid [10]. Despite improved
models in terms of memory consumption, e.g., [11], their
inability to resolve fine object details remains the main
limiting factor in practical use.
Other works introduce convolutions directly on graph
structures, e.g., [12]. They attempt to generalize DL models
to non-Euclidean domains such as graphs and manifolds [13],
and offer the analogs of pooling/unpooling operations as
well [14]. However, they are not applicable for learning on
real unconstrained data: they either require meshes to be
registered to a common template, handle meshes of only up to
several thousand faces inefficiently, or are specific to
segmentation or classification tasks only.
Recent advances in developing efficient architectures for
3D data analysis are mainly related to point cloud based meth-
ods [15, 16]. Decoders [17, 2, 18, 3, 19] have made point
clouds a highly promising representation for 3D object gen-
eration and completion using neural networks. Successful
works on generative adversarial networks (GANs), e.g., [20],
show the applicability of different GAN models operating on
raw point clouds.
In this paper, we comply with the common autoencoder
approach, i.e., we use a point cloud encoder to embed the
point cloud input, and design a decoder to generate a complete
point cloud from the embedding of the encoder.

3. BACKGROUND AND MOTIVATION
We herein present the fundamental building blocks that com-
prise the core of this paper, namely, the point cloud
representation, the metric on it, and the DL backbone.
Together, these elements make
the CC3D autoencoder perform efficiently on high-resolution
3D data.
A point cloud $S$ can be represented as $S = \{(p_k, f_k)\}$,
where each $p_k$ is the 3D coordinates of the $k$-th input point,
$f_k$ is the feature corresponding to it, and the size of $f_k$
defines the dimensionality of the point feature space. Note
that while it is straightforward to include auxiliary informa-
tion (such as point normals) in our architecture, in this paper
we exclusively employ the $xyz$ coordinates of $p_k$ as the input
data.
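For concreteness, a minimal sketch of this input representation, where the per-point feature $f_k$ is simply the $xyz$ coordinate $p_k$ (an illustration only; the array names are ours, not part of any released code):

```python
import numpy as np

# A point cloud S = {(p_k, f_k)}: the per-point feature f_k is simply the
# xyz coordinate p_k, as used for the input point clouds in this paper.
num_points = 10_000
coords = np.random.rand(num_points, 3).astype(np.float32)  # p_k, already in (0, 1)
features = coords.copy()                                    # f_k = (x, y, z)
print(coords.shape, features.shape)                         # (10000, 3) (10000, 3)
```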
We build on Point-Voxel Convolution (PVConv), a memory-
efficient architecture for learning on 3D point clouds
presented in [21]. To the best of our knowledge, ours is the
first autoencoder built with PVCNN as the encoder.
Briefly, PVConv combines the fine-grained feature transfor-
mation on points with the coarse-grained neighboring feature
aggregation in the voxel space of the point cloud. Three basic
operations are performed in the coarse branch, namely, voxel-
ization, followed by voxel-based 3D convolution and devoxel-
ization. The point-based branch aggregates the features of
each individual point with a multilayer perceptron (MLP),
providing high-resolution details. The features from both branches are
aggregated into a hidden feature representation.
The formulation of convolution in both the voxel-based and
point-based cases is the following:
$$y_k = \sum_{x_i \in N(x_k)} K(x_k, x_i)\, F(x_i), \qquad (1)$$
where, for each center $x_k$ and its neighborhood $N(x_k)$, the
neighboring features $F(x_i)$ are convolved with the kernel
$K(x_k, x_i)$. We choose PVCNN for its efficiency in training
on high-resolution 3D data, which makes it a good candidate
for working with real-life data. As stated in [21], PVConv
combines the advantages of point-based methods, reducing
memory consumption, and of voxel-based methods, improving
data locality and regularity.
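To make the two-branch structure concrete, the following is a rough PyTorch sketch of a point-voxel block under simplified assumptions (nearest-cell voxelization/devoxelization and additive fusion); it illustrates the idea only and is not the reference PVCNN code:

```python
import torch
import torch.nn as nn

def voxelize(coords, feats, resolution):
    """Average point features into a dense voxel grid (nearest-cell assignment).

    coords: (N, 3) in [0, 1), feats: (N, C). Returns a (C, R, R, R) grid and
    the flat cell index of each point.
    """
    R = resolution
    idx3 = torch.clamp((coords * R).long(), 0, R - 1)          # (N, 3) cell coordinates
    flat = idx3[:, 0] * R * R + idx3[:, 1] * R + idx3[:, 2]    # (N,) flattened cell index
    C = feats.shape[1]
    grid = torch.zeros(C, R * R * R)
    count = torch.zeros(R * R * R)
    grid.index_add_(1, flat, feats.t().contiguous())           # sum features per cell
    count.index_add_(0, flat, torch.ones(flat.shape[0]))
    grid = grid / count.clamp(min=1)                           # mean per occupied cell
    return grid.view(C, R, R, R), flat

def devoxelize(grid, flat_idx):
    """Gather each point's feature back from its voxel cell (nearest neighbor)."""
    C = grid.shape[0]
    return grid.view(C, -1)[:, flat_idx].t()                   # (N, C)

class PVConvSketch(nn.Module):
    """Sketch of a point-voxel convolution block (illustration, not the reference code)."""
    def __init__(self, in_ch, out_ch, resolution):
        super().__init__()
        self.resolution = resolution
        self.voxel_branch = nn.Sequential(                     # coarse branch on the voxel grid
            nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU())
        self.point_branch = nn.Sequential(                     # fine branch: shared MLP on points
            nn.Linear(in_ch, out_ch), nn.ReLU())

    def forward(self, coords, feats):
        grid, flat = voxelize(coords, feats, self.resolution)
        coarse = self.voxel_branch(grid.unsqueeze(0)).squeeze(0)   # (out_ch, R, R, R)
        coarse_per_point = devoxelize(coarse, flat)                # (N, out_ch)
        fine_per_point = self.point_branch(feats)                  # (N, out_ch)
        return coarse_per_point + fine_per_point                   # fused hidden features

coords = torch.rand(10_000, 3)
out = PVConvSketch(3, 64, resolution=32)(coords, coords.clone())
print(out.shape)  # torch.Size([10000, 64])
```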
For the loss function, Chamfer distance [22] is used to
measure the quality of the autoencoder. It is a differentiable
metric, invariant to permutation of points in both ground-truth
and target point clouds, $S_G$ and $S$, respectively. It is
defined as follows:
$$d_{CD}(S, S_G) = \sum_{x \in S} \min_{y \in S_G} \lVert x - y \rVert^2 + \sum_{y \in S_G} \min_{x \in S} \lVert x - y \rVert^2. \qquad (2)$$
As follows from its definition, no correspondence or equal
number of points in $S$ and $S_G$ is required for the computation
of $d_{CD}$, making it possible to work with different resolutions
for the encoder and decoder.

4. PROPOSED AUTOENCODING OF 3D SCANS TO CAD MODELS
This paper studies the problem of 3D point cloud autoencod-
ing in a deep learning setup, and in particular, the choice
of the architecture of a 3D point cloud decoder for efficient
reconstruction of point clouds sampled from corresponding
pairs of 3D scans and CAD models.
4.1. CC3D dataset
The CC3D dataset of 3D CAD models was collected from a
free online service for sharing CAD designs [23]. In total,
the collected dataset contains 50k+ models in STEP format,
unrestricted to any category, with varying complexity from
simple to highly detailed designs (see examples in Fig. 1).
These CAD models were converted to meshes, and each mesh
was virtually scanned using a proprietary 3D scanning pipeline
developed by Artec3D [24]. The typical size of the result-
ing scans is on the order of 100k points and faces, while the
meshes converted from CAD models are usually more than
an order of magnitude lighter.
In order to illustrate the uniqueness of our dataset, Ta-
ble 1 summarizes the available CAD-like datasets and se-
mantic information they provide. Unlike ShapeNet [7] and
ModelNet [6], the CC3D dataset is a collection of 3D ob-
jects unrestricted to any category, with the complexity vary-
ing from very basic to highly detailed models. One of the
most recent datasets, the ABC dataset [8] would have been a
valuable collection due to its size for our task if it had con-
tained 3D scanned models alongside the ground-truth CAD
objects. The availability of CAD-3D scan pairings, the high
resolution of the meshes, and the variability of the models
make the CC3D dataset stand out among the alternatives. The CC3D
dataset will be shared with the research community.
4.2. CC3D Autoencoder
Our encoder is a modified version of PVCNN, where we cut
the final classification/segmentation layer. The proposed
PVDeConv structure is depicted in Fig. 2. The fine point-based
branch is implemented as a shared transposed MLP, which
maintains the same number of points throughout the
autoencoder, while the coarse branch allows the features to be
aggregated at different voxel grid resolutions, thus modelling
the neighborhood information at different scales.

Dataset           #Models   CAD   Curves   Patches   Semantics   Categories   3D scan
CC3D (ours)       50k+      ✓     ✗        ✗         ✗           ✗            ✓
ABC [8]           1M+       ✓     ✓        ✓         ✗           ✗            ✗
ShapeNet [7]      3M+       ✗     ✗        ✗         ✓           ✓            ✗
ShapeNetCore [7]  51k+      ✗     ✗        ✗         ✓           ✓            ✗
ShapeNetSem [7]   12k       ✗     ✗        ✗         ✓           ✓            ✗
ModelNet [6]      151k+     ✗     ✗        ✗         ✓           ✓            ✗

Table 1. Summary of datasets with CAD-like data. Note that
only ABC and CC3D offer CAD models in b-rep (boundary
representation) format in addition to triangular meshes.

Fig. 2. Overview of the CC3D autoencoder architecture and the
PVDeConv module. The features from the coarse voxel-based
and fine point-based branches are fused to be unwrapped to
the output point cloud.
The PVDeConv block consists of 3D volumetric deconvolu-
tions to aggregate the features, with dropout, batch normal-
ization and a nonlinear activation function after each 3D
deconvolution. Features from both branches are fused at the
final level and passed through an MLP to produce the output points.
The transposed 3D convolution operator, used in PVDe-
Conv, multiplies each input value element-wise by a learnable
kernel, and sums over the outputs from all input feature chan-
nels. This operation can be seen as the gradient of 3D convo-
lution, although it is not an actual deconvolution operation.
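An illustrative PyTorch sketch of such a block is given below; it is our reconstruction from the description above (the upsampling factor, dropout rate and exact fusion are assumptions, not the authors' implementation):

```python
import torch
import torch.nn as nn

class PVDeConvSketch(nn.Module):
    """Sketch of a PVDeConv-style block (our reconstruction, not the authors' code).

    Coarse branch: ConvTranspose3d -> BatchNorm3d -> ReLU -> Dropout on a voxel grid.
    Fine branch: a shared MLP on per-point features. The two streams are fused and
    mapped to the output point features by a final MLP.
    """
    def __init__(self, in_ch, out_ch, dropout=0.1):
        super().__init__()
        self.coarse = nn.Sequential(
            nn.ConvTranspose3d(in_ch, out_ch, kernel_size=2, stride=2),  # upsample grid 2x
            nn.BatchNorm3d(out_ch),
            nn.ReLU(),
            nn.Dropout(dropout),
        )
        self.fine = nn.Sequential(nn.Linear(in_ch, out_ch), nn.ReLU())   # shared point MLP
        self.fuse = nn.Linear(out_ch, out_ch)                            # final MLP after fusion

    def forward(self, voxel_feats, point_feats, point_cell_idx):
        # voxel_feats: (B, in_ch, R, R, R); point_feats: (B, N, in_ch)
        # point_cell_idx: (B, N) flat cell index of each point in the upsampled (2R)^3 grid
        up = self.coarse(voxel_feats)                                    # (B, out_ch, 2R, 2R, 2R)
        B, C = up.shape[0], up.shape[1]
        coarse_pts = torch.gather(
            up.view(B, C, -1), 2,
            point_cell_idx.unsqueeze(1).expand(-1, C, -1))               # (B, C, N)
        fused = coarse_pts.transpose(1, 2) + self.fine(point_feats)      # (B, N, out_ch)
        return self.fuse(fused)

block = PVDeConvSketch(128, 64)
vox = torch.randn(2, 128, 16, 16, 16)
pts = torch.randn(2, 2500, 128)
idx = torch.randint(0, 32 ** 3, (2, 2500))
print(block(vox, pts, idx).shape)  # torch.Size([2, 2500, 64])
```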
5. EXPERIMENTS
We evaluate the proposed autoencoder by training first on our
CC3D dataset, and then on the ShapeNetCore [7] dataset.
5.1. Training on CC3D
Dataset. The CC3D dataset is randomly split into three non-
intersecting folds: 80% for training, 10% for testing and 10%
for validation. Ground-truth point clouds are generated by
uniformly sampling N = 10k points on the CAD model sur-
faces, while the input point clouds are sampled in the same
manner from the corresponding 3D scans of the models. The
data is normalized to (0, 1).
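A minimal sketch of this preprocessing step, assuming area-weighted surface sampling with the trimesh library and isotropic rescaling into (0, 1) (the exact sampling and normalization procedure used by the authors is not specified, and the file names are placeholders):

```python
import numpy as np
import trimesh  # one possible sampling tool; not necessarily the authors' pipeline

def sample_normalized_cloud(mesh_path, n_points=10_000):
    """Uniformly sample n_points on a mesh surface and rescale coordinates into (0, 1)."""
    mesh = trimesh.load(mesh_path, force='mesh')
    points, _ = trimesh.sample.sample_surface(mesh, n_points)  # area-weighted uniform sampling
    points = np.asarray(points, dtype=np.float32)
    lo, hi = points.min(axis=0), points.max(axis=0)
    return (points - lo) / (hi - lo).max()                     # isotropic scaling (assumption)

# One pair of the dataset: input from the virtual scan, target from the CAD mesh.
# scan_cloud = sample_normalized_cloud("model_scan.obj")
# cad_cloud  = sample_normalized_cloud("model_cad.obj")
```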
Implementation Details. The encoder follows the struc-
ture in [21]; the coarse blocks are ((64, 1, 32), (64, 2, 16),
(128, 1, 16), 1024), where the triplets describe voxel-based
convolutional PVConv blocks in terms of the number of
channels, number of blocks, and voxel resolution. The last
number is the resulting embedding size of the coarse part,
which, combined with the shared MLP cloud blocks = (256, 128),
gives a feature embedding size of 1472. The decoder coarse
blocks are ((128, 1, 16), (64, 2, 16), (64, 1, 32), 128), where
the triplets are PVDeConv blocks concatenated with decoder
point-based fine blocks of size (256, 128).

Fig. 3. Results of our autoencoder on CC3D data with 10k
points for input and output. In each pair of results, the left is
the input point cloud of 10k points and the right is the
autoencoder reconstruction of 10k points.
Training setup. The autoencoder is trained with the Chamfer
loss for 50 epochs on two Quadro P6000 GPUs with batch size
80 in data parallel mode. The overall training takes approximately
15 hours. The best model is chosen based on the validation
set.
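For reference, a naive dense PyTorch implementation of the Chamfer loss of Eq. (2) is sketched below; whether the two terms are summed or averaged over points is an implementation detail assumed here, and for 10k-point clouds a chunked or CUDA-kernel version is typically used in practice:

```python
import torch

def chamfer_distance(pred, target):
    """Symmetric Chamfer distance of Eq. (2), averaged over points in each direction.

    pred: (B, N, 3), target: (B, M, 3). No point correspondence or equal N, M required.
    The dense (B, N, M) pairwise matrix is memory-heavy for N = M = 10k.
    """
    diff = pred.unsqueeze(2) - target.unsqueeze(1)          # (B, N, M, 3)
    d2 = (diff ** 2).sum(-1)                                # squared pairwise distances
    return d2.min(dim=2).values.mean(dim=1) + d2.min(dim=1).values.mean(dim=1)

# Minimal training-step sketch (autoencoder, optimizer and loader are assumed to exist):
# for scan_pts, cad_pts in loader:
#     optimizer.zero_grad()
#     recon = autoencoder(scan_pts)                 # (B, 10000, 3)
#     loss = chamfer_distance(recon, cad_pts).mean()
#     loss.backward()
#     optimizer.step()
```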
Evaluation. The qualitative results of our autoencoder on
the CC3D data are presented in Fig. 3. We notice that the fine
details are captured in these challenging cases.
Method           Chamfer distance (×10^{-3})
AtlasNet [2]     1.769
FoldingNet [17]  1.648
PCN [19]         1.472
TopNet [3]       0.972
Ours             0.804

Table 2. CC3D autoencoder results on the ShapeNetCore
dataset: comparison against previous works (N = 2.5k).
5.2. Training on ShapeNetCore
To demonstrate the competitive performance of our CC3D
autoencoder, we train it on the ShapeNetCore dataset follow-
ing the train/test/val split of [3], with the number of sampled
points N = 2500 for a fair comparison. Since we do not have
scanned models for the ShapeNet data, we add 3% Gaussian
noise to each point’s location. The rest of the training setup
is replicated from the CC3D configuration. The final met-
ric is the mean Chamfer distance averaged per model across
all classes. The numbers for other methods are reported
from [3]. The results of the evaluation of our method against
state-of-the-art methods are shown in Table 2. We note that
our result surpasses the previous works by a significant mar-
gin. Qualitative examples on ShapeNetCore data are given in
Fig. 5. The distribution of distances given in Fig. 4 implies
that the CC3D dataset presents advanced challenges for our
autoencoder, where it performs at 1.26×10^{-3} average Chamfer
distance, while it reaches 0.804×10^{-3} on ShapeNetCore.

Fig. 4. Chamfer distance distribution for our autoencoder. On
the test set of CC3D for point clouds of size N = 10k, the mean
Chamfer distance is 1.26×10^{-3} with a standard deviation of
0.794×10^{-3}. On the ShapeNetCore test set with N = 2.5k, it is
0.804×10^{-3} with a standard deviation of 0.766×10^{-3}.

Fig. 5. Results of our autoencoder on ShapeNetCore data.
The top row is the input 2.5k point clouds; the bottom is the
reconstruction of our autoencoder.
6. CONCLUSIONS
In this work, we proposed a Point-Voxel Deconvolution
(PVDeConv) block for fast and efficient deconvolution
on 3D point clouds. It was used in combination with a new
dataset, CC3D, for autoencoding 3D scans to their corre-
sponding synthetic CAD models. The CC3D dataset offers
pairs of CAD models and 3D scans, totaling 50k+ objects.
Our CC3D autoencoder on point clouds is memory and time
efficient. Furthermore, it demonstrates superior results com-
pared to existing methods on ShapeNet data. As future work,
different types of losses, such as the quadric loss [5], will be
investigated to improve the sharpness of edges. Testing the variants
of CC3D autoencoder with different configurations of stacked
PVConv and PVDeConv layers will also be considered. Fi-
nally, we believe that the CC3D dataset itself could assist in
real 3D scanned data analysis with deep learning methods.

7. REFERENCES
[1] Yulan Guo, Hanyun Wang, Qingyong Hu, Hao Liu,
Li Liu, and Mohammed Bennamoun, “Deep learning for
3d point clouds: A survey,” ArXiv , vol. abs/1912.12033,
2019.
[2] Thibault Groueix, Matthew Fisher, Vladimir G. Kim,
Bryan C. Russell, and Mathieu Aubry, “Atlasnet: A
papier-mâché approach to learning 3d surface genera-
tion,” CoRR , vol. abs/1802.05384, 2018.
[3] Lyne P Tchapmi, Vineet Kosaraju, S. Hamid
Rezatofighi, Ian Reid, and Silvio Savarese, “Top-
net: Structural point cloud decoder,” in The IEEE
Conference on Computer Vision and Pattern Recogni-
tion (CVPR) , 2019.
[4] Isaak Lim, Moritz Ibing, and Leif Kobbelt, “A convolu-
tional decoder for point clouds using adaptive instance
normalization,” CoRR , vol. abs/1906.11478, 2019.
[5] Nitin Agarwal, Sung-Eui Yoon, and M. Gopi, “Learning
embedding of 3d models with quadric loss,” CoRR , vol.
abs/1907.10250, 2019.
[6] Zhirong Wu, S. Song, A. Khosla, Fisher Yu, Linguang
Zhang, Xiaoou Tang, and J. Xiao, “3d shapenets:
A deep representation for volumetric shapes,” in
2015 IEEE Conference on Computer Vision and Pattern
Recognition (CVPR) , June 2015, pp. 1912–1920.
[7] Angel X. Chang, Thomas Funkhouser, Leonidas
Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Sil-
vio Savarese, Manolis Savva, Shuran Song, Hao Su,
Jianxiong Xiao, Li Yi, and Fisher Yu, “Shapenet: An
information-rich 3d model repository,” 2015.
[8] Sebastian Koch, Albert Matveev, Zhongshi Jiang, Fran-
cis Williams, Alexey Artemov, Evgeny Burnaev, Marc
Alexa, Denis Zorin, and Daniele Panozzo, “Abc: A
big cad model dataset for geometric deep learning,” in
The IEEE Conference on Computer Vision and Pattern
Recognition (CVPR) , June 2019.
[9] Eman Ahmed, Alexandre Saint, Abd El Rahman
Shabayek, Kseniya Cherenkova, Rig Das, Gleb Gusev,
Djamila Aouada, and Björn E. Ottersten, “Deep learn-
ing advances on different 3d data representations: A sur-
vey,” ArXiv , vol. abs/1808.01462, 2018.
[10] D. Maturana and S. Scherer, “Voxnet: A 3d convolu-
tional neural network for real-time object recognition,”
in 2015 IEEE/RSJ International Conference on Intelli-
gent Robots and Systems (IROS), Sep. 2015, pp. 922–928.
[11] Gernot Riegler, Ali Osman Ulusoy, and Andreas Geiger,
“Octnet: Learning deep 3d representations at high res-
olutions,” in Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition , 2017.
[12] Anurag Ranjan, Timo Bolkart, Soubhik Sanyal, and
Michael J. Black, “Generating 3D faces using convo-
lutional mesh autoencoders,” in European Conference
on Computer Vision (ECCV) , 2018, pp. 725–741.
[13] M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and
P. Vandergheynst, “Geometric deep learning: Going be-
yond euclidean data,” IEEE Signal Processing Maga-
zine, vol. 34, no. 4, pp. 18–42, July 2017.
[14] Rana Hanocka, Amir Hertz, Noa Fish, Raja Giryes,
Shachar Fleishman, and Daniel Cohen-Or, “Meshcnn:
A network with an edge,” ACM Transactions on Graph-
ics (TOG) , vol. 38, no. 4, pp. 90, 2019.
[15] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J.
Guibas, “Pointnet++: Deep hierarchical feature learn-
ing on point sets in a metric space,” CoRR , vol.
abs/1706.02413, 2017.
[16] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E. Sarma,
Michael M. Bronstein, and Justin M. Solomon, “Dy-
namic graph cnn for learning on point clouds,” ACM
Trans. Graph. , vol. 38, no. 5, Oct. 2019.
[17] Yaoqing Yang, Chen Feng, Yiru Shen, and Dong Tian,
“Foldingnet: Interpretable unsupervised learning on 3d
point clouds,” ArXiv , vol. abs/1712.07262, 2017.
[18] Yongheng Zhao, Tolga Birdal, Haowen Deng, and Fed-
erico Tombari, “3d point-capsule networks,” CoRR , vol.
abs/1812.10775, 2018.
[19] Wentao Yuan, Tejas Khot, David Held, Christoph Mertz,
and Martial Hebert, “Pcn: Point completion network,”
in 3D Vision (3DV), 2018 International Conference on,
2018.
[20] Chun-Liang Li, Manzil Zaheer, Yang Zhang, Barnabás
Póczos, and Ruslan Salakhutdinov, “Point cloud GAN,”
CoRR , vol. abs/1810.05795, 2018.
[21] Zhijian Liu, Haotian Tang, Yujun Lin, and Song Han,
“Point-voxel cnn for efficient 3d deep learning,” in Ad-
vances in Neural Information Processing Systems , 2019.
[22] Haoqiang Fan, Hao Su, and Leonidas Guibas, “A point
set generation network for 3d object reconstruction from
a single image,” in The IEEE Conference on Computer
Vision and Pattern Recognition (CVPR), July 2017, pp. 2463–2471.
[23] “3dcontentcentral,” https://www.3dcontentcentral.com, Accessed: 2020-02-02.
[24] “Artec3d,” https://www.artec3d.com/, Accessed: 2020-02-02.