VGF-Net: Visual-Geometric Fusion Learning
for Simultaneous Drone Navigation and Height Mapping
Yilin Liu, Shenzhen University
Ke Xie, Shenzhen University
Hui Huang*, Shenzhen University
Abstract
Drone navigation requires a comprehensive understanding of both visual and geometric information in the 3D world. In this paper, we present a Visual-Geometric Fusion Network (VGF-Net), a deep network for the fusion analysis of visual/geometric data and the construction of 2.5D height maps for simultaneous drone navigation in novel environments. Given an initial rough height map and a sequence of RGB images, our VGF-Net extracts the visual information of the scene, along with a sparse set of 3D keypoints that capture the geometric relationship between objects in the scene. Driven by the data, VGF-Net adaptively fuses visual and geometric information, forming a unified Visual-Geometric Representation. This representation is fed to a new Directional Attention Model (DAM), which helps enhance the visual-geometric object relationship and propagates the informative data to dynamically refine the height map and the corresponding keypoints. An entire end-to-end information fusion and mapping system is formed, demonstrating remarkable robustness and high accuracy for autonomous drone navigation across complex indoor and large-scale outdoor scenes.
1. Introduction
In recent years, we have witnessed the development of
autonomous robotic systems that have been broadly used
in many scenarios (e.g., autonomous driving, manufacturing and surveillance). The drone is a robotic platform well known for its flying capability. Navigation is critical to drone flight, as it facilitates effective exploration and recognition of unknown environments. Yet drone navigation remains a challenging task, especially when planning a path to the target/destination that is as short as possible while avoiding potential collisions with objects in the unexplored space. Conventional navigation relies heavily on human expertise: an operator intuitively designs the drone's flight trajectory based on the spatial layout within the visible range. The resulting
*Corresponding author: Hui Huang ([email protected])
Figure 1: We show a drone navigation trajectory (yellow curve) in a 3D scene, connecting the starting and target points (red dots). During navigation, our VGF-Net dynamically updates the 2.5D height map (see the bottom-left corner) in newly visited places (see pictures in red rectangles), which is used to update the navigation trajectory in a timely manner.
navigation system lacks global knowledge of the scene, leading to unsatisfactory or even failed path planning.
To better leverage the global information of the 3D environment, research on drone navigation has focused on collecting and memorizing environmental information during the navigation process. Typically, the exist-
ing works [11, 2, 3] employ the mapping techniques to
construct 2D/3D maps with respect to the vacant/occupied
space. The mapping result contains rich geometric relationships between objects, which help navigation. There have also been navigation approaches based on visual information [6, 10, 2], saving the computational overhead of constructing maps. Nonetheless, these works purely condition the
accuracy of navigation on either geometric or visual infor-
mation.
In this paper, we utilize a 2.5D height map for autonomous drone navigation. A growing number of computer applications use height maps to represent the boundaries of objects (e.g., buildings or furniture). Nonetheless, the quality of a given height map is not guaranteed, as the mapping process likely involves incomplete or out-of-date information. Here, we advocate the importance of fusing geometric and visual information for a more robust construction of the height map. Recent research [24, 5] on 3D object/scene understanding has also demonstrated that the geometric relationship between objects and the visual appearance of scenes are closely correlated. We thus propose a Visual-Geometric Fusion Network (VGF-Net) to dynamically update the height map during drone navigation by utilizing newly captured images (see Figure 1).
More specifically, as illustrated in Figure 2, the network
takes an initial rough height map together with a sequence
of RGB images as input. We use convolutional layers to
compute the visual and geometric information to renew the
height map. Next, we apply the simultaneous localization
and mapping (SLAM) [20] module to extract a sparse set
of 3D keypoints from the image sequence. These key-
points are used along with the renewed height map to con-
struct a novel Visual-Geometric Representation , which is
passed to a Directional Attention Model . This attention
model exchanges visual and geometric information among
objects in the scene, providing quite useful object relation-
ship for simultaneous refinement of the height map and
the corresponding keypoints, leading to the successful path
planning [15] at each navigation moment. Compared to
dense point clouds that require time-consuming depth es-
timation [4] and costly processing, the sparse keypoints we
use are fast to compute yet effective in terms of capturing
useful geometric information without much redundancy. As
the drone flies over more places, our network acquires and fuses more visual and geometric information, largely increasing the precision of the height map and, consequently, the reliability of autonomous navigation.
We intensively train and evaluate our method on a bench-
mark of seven large-scale urban scenes and six complex
indoor scenes for height map construction and drone nav-
igation. The experimental results and comparative statisticsclearly demonstrate the effectiveness and the robustness of
our proposed VGF-Net.
2. Related Work
There has been an array of research on navigation systems that allow robots to intelligently explore the real world. Below, we mainly survey drone navigation and environment mapping, as they are highly relevant to our work: both kinds of navigation systems are driven by critical environment data.
2.1. Drone Navigation
The modern drone systems are generally equipped with
various sensors (e.g., RGB-D camera, radar and GPS),
which help the hardware devices to achieve accurate percep-
tion of the real world. Typically, the data captured by sen-
sors is used for mapping (i.e., the construction of map), pro-
viding comprehensive information for planning the moving
path of the drone. During the navigation process, traditional methods [11, 22] compute the trajectory of the drone based on prescribed maps. However, constructing a precise map is generally expensive and time-consuming. Thus, recent works [6, 10, 2] simplify the map construction to facilitate cheaper, more commercially viable navigation.
The advances in deep learning have significantly improved the robustness of visual navigation, leading to the emergence of many navigation systems that do not rely on given maps. Kim et al. [14] and Padhy et al. [21] use a
classification neural network to predict the direction (e.g.,
right, left or straight) of moving drone. Furthermore, Lo-
quercio et al. [17] and Mirowski et al. [19] use neural net-
works to compute the flying angle and the risk of collision, which provide more detailed information for controlling the drone's flight. Note that the above methods learn drone actions from human annotations. The latest
works employ deep reinforcement learning [23, 29, 26] to
optimize the network, enabling more flexible solutions for
autonomous drone navigation in novel environments.
Our approach utilizes a rough 2.5D height map to in-
crease the success rate of navigation in different complex
scenes, which may have various spatial layouts of objects.
Compared to the existing methods that conduct the mapping
before navigation, we allow for real-time intelligent update
of the height map during navigation, largely alleviating neg-
ative impacts of problematic mapping results.
2.2. Mapping Technique
The mapping technique is fundamental in the drone nav-
igation. The techniques of 2D mapping have been widely
used in the navigation task. Henriques et al. [11] and Savi-
nov et al. [22] use 2D layout map to store useful informa-
tion, which is learned by neural networks from the image
Figure 2: Overview of VGF-Net. At the $t$-th moment, the network uses convolutional layers to learn visual and geometric representations from the RGB image $I_t$ and the 2.5D height map $M_t$ (produced at the $(t-1)$-th moment). The representations are combined to compute the residual update map $R^c_t$, which is added to the 2.5D height map to form a renewed height map $M^c_t$. Based on the new height map and the 3D keypoints $\{p_{t,1}, \ldots, p_{t,N}\}$ (produced by SLAM), we construct the VG representation for each keypoint (yellow dot), which is used by DAM to select useful information to refine object boundaries and 3D keypoints at the next moment. Note that the refined height map $M^r_{t+1}$ is used for path planning, which is omitted for simplicity of illustration.
data of 3D scenes. Chen et al. [6] use the 2D topologi-
cal map, which can be constructed using the coarse spatial
layout of objects, to navigate the robot in an indoor scene.
Different from the methods that consider the 2D map of an
entire scene, Gupta et al. [10] unify the mapping and 2D
path planning to rapidly adjust the navigation with respect
to the surrounding local environment. Bansal et al. [2] uti-
lize sparse waypoints to represent the map, which can be
used to generate a smooth pathway to the target object or
destination.
Compared to 2D mapping, 3D mapping provides much
richer spatial information for the navigation system. Wang
et al. [27] use visual odometry to capture the geometric re-
lationship between 3D points, which is important to recon-
struct the 3D scene. Engel et al. [8, 7] integrate the tracking
of keypoints into the mapping process, harnessing tempo-
ral information to produce a more consistent mapping of
the global environment. Furthermore, Huang et al. [12, 13]
use a probabilistic Conditional Random Field model and a
noise-aware motion affinity matrix to effectively track both
moving and static objects. Wang et al. [28] use planes as a geometric constraint to reconstruct the whole scene. Be-
sides 3D points, depth information is also important to 3D
mapping. During the mapping process, Tateno et al. [25]
and Ma et al. [18] use neural networks to estimate the depth
map of a single image, for a faster construction of the 3D
map. However, the fidelity of depth estimation is bounded
by the scale of training data. To improve on this, Kuznietsov et al. [16], Godard et al. [9] and Bian et al. [3] train the depth estimation network in a semi-supervised/unsupervised manner, where the consistency between images is learned.
Nowadays, a vast number of real-world 3D models and applications have emerged, such as Google Earth, so abundant height-map data is available for training drone navigation systems. Nonetheless, the accuracy and timeliness of such data cannot be guaranteed, making it hard to use directly in practice. We deeply exploit the visual-
geometric information fusion representation to effectively
and dynamically update the given height map during navi-
gation, yielding a significant increase of the success rate of
the autonomous drone navigation in various novel scenes.
3. Overview
The core idea behind our approach is to fuse the visual
and geometric information for the construction of height
map. This is done by our Visual-Geometric Fusion Network (VGF-Net), which computes a visual-geometric representation with respect to the visual and geometric consistency between the 3D keypoints and the object boundaries character-
ized in the height map. VGF-Net uses the fused representa-
tion to refine the keypoints and height map at each moment
during drone navigation. Below, we outline the architecture
of VGF-Net.
As illustrated in Figure 2, at the $t$-th moment ($t \geq 0$), the network takes the RGB image $I_t$ and the associated height map $M_t$ as input. The image $I_t$ is fed to convolutional layers to compute the visual representation $V_t$. The height map $M_t$ is also input to convolutional layers to obtain the geometric representation $G_t$. The visual and geometric representations are fused to compute the residual update map $R^c_t$ that updates the height map to $M^c_t$, providing more consistent information for the subsequent steps.
Next, we use the SLAM [20] module to compute a sparse set of 3D keypoints $\{p_{t,1}, \ldots, p_{t,N}\}$, based on the images $\{I_1, \ldots, I_t\}$. We project these keypoints to the renewed height map $M^c_t$. For the keypoint $p_{t,i}$, we compute a set of distances $\{d_{t,i,1}, \ldots, d_{t,i,K}\}$, where $d_{t,i,k}$ denotes the distance from the keypoint $p_{t,i}$ to the nearest object boundary along the $k$-th direction (see Figure 3(a)). Intuitively, a keypoint, which is extracted around objects in the 3D scene, is also near the boundaries of the corresponding objects in the height map. This relationship between the keypoint $p_{t,i}$ and the object can be represented by the visual and geometric information in the scene. Specifically, this is done by fusing the visual representation $V_t$, the geometric representation $G^c_t$ (learned from the renewed height map $M^c_t$) and the distances $\{d_{t,i,1}, \ldots, d_{t,i,K}\}$ to form a novel Visual-Geometric (VG) representation $U_{t,i}$ for the keypoint $p_{t,i}$. For all keypoints, we compute a set of VG representations $\{U_{t,1}, \ldots, U_{t,N}\}$.
Finally, we employ a Directional Attention Model (DAM), which takes the VG representations $\{U_{t,1}, \ldots, U_{t,N}\}$ as input, to learn a residual update map $R^r_t$ that refines the height map $M^c_t$. The DAM produces a new height map $M^r_{t+1}$ that respects the importance of each keypoint to the object boundaries in different directions (see Figure 3(b)). Meanwhile, we use DAM to compute a set of spatial offsets $\{\Delta p_{t+1,1}, \ldots, \Delta p_{t+1,N}\}$ to update the keypoints, whose locations are imperfectly estimated by the SLAM. We use the height map $M^r_{t+1}$ for dynamic path planning [15] at the $(t+1)$-th moment, and meanwhile input the image $I_{t+1}$ and the height map $M^r_{t+1}$ to VGF-Net at this moment for the next update. As the drone flies, the network acquires more accurate information and works more robustly for simultaneous drone navigation and height mapping.
4. Method
We now introduce our VGF-Net in more detail. The net-
work extracts visual and geometric information from the
RGB images, the associated 2.5D height map and 3D key-
points. In what follows, we formally define the informa-
tion fusion that produces the visual-geometric representa-
tion, which is then used for the refinement of the height
map and keypoints.
4.1. Residual Update Strategy
The VGF-Net refines the height map and keypoints iter-
atively, as the drone flies to new places and captures new
images. We divide this refinement process into separate
moments. At the $t$-th moment, we feed the RGB image $I_t \in \mathbb{R}^{H_I \times W_I \times 3}$ and the height map $M_t \in \mathbb{R}^{H_M \times W_M}$ into the VGF-Net, computing the global visual representation $V_t \in \mathbb{R}^{H_M \times W_M \times C}$ and the geometric representation $G_t \in \mathbb{R}^{H_M \times W_M \times C}$ as:
$$V_t = F^v(I_t), \quad G_t = F^g(M_t), \tag{1}$$
where $F^v$ and $F^g$ denote two sets of convolutional layers. Note that the value at each location of $M_t$ represents the height of an object, and we set the height of the ground to 0. We concatenate the representations $V_t$ and $G_t$ to compute a residual update map $R^c_t \in \mathbb{R}^{H_M \times W_M}$, which is used to update the height map $M_t$ as:
$$M^c_t = M_t + R^c_t, \tag{2}$$
where
$$R^c_t = F^c(V_t, G_t). \tag{3}$$
Here, $M^c_t \in \mathbb{R}^{H_M \times W_M}$ is the renewed height map, and $F^c$ denotes a set of convolutional layers. Compared to directly computing a new height map, the residual update strategy (as formulated by Eq. (2)) adaptively reuses the information of $M_t$. More importantly, we learn the residual update map $R^c_t$ from the new content captured at the $t$-th moment. This facilitates a more focused update of the height values in regions that were unexplored before the $t$-th moment. The height map $M^c_t$ is fed to an extra set of convolutional layers to produce the representation $G^c_t$, which will be used for the construction of the visual-geometric representation.
4.2. Visual-Geometric Representation
We conduct the visual-geometric information fusion
to further refine the height map. To capture the geo-
metric relationship between objects, we use a standard
SLAM [20] module to extract a sparse set of 3D keypoints
$\{p_{t,1}, \ldots, p_{t,N}\}$ from the sequence of images $\{I_1, \ldots, I_t\}$. Given the keypoint $p_{t,i} \in \mathbb{R}^{1 \times 3}$ in the camera coordinate system, we project it to the 2.5D space as:
$$p'_{t,i} = p_{t,i}\, S\, R + T. \tag{4}$$
Here, $S \in \mathbb{R}^{3 \times 3}$ is determined by a pre-defined scale factor, which can be calculated at the initialization of the SLAM system or by GPS adjustment. $T \in \mathbb{R}^{1 \times 3}$ and $R \in \mathbb{R}^{3 \times 3}$ translate the origin of the 3D point set from the camera coordinate system to the height map coordinate system. In the height map coordinate system, the drone is located at $(\frac{W}{2}, 0)$, where $W$ denotes the width of the height map.
Note that the first two dimensions of $p'_{t,i} \in \mathbb{R}^{1 \times 3}$ indicate the location on the height map, and the third dimension indicates the corresponding height value. The set of keypoints $\{p'_{t,1}, \ldots, p'_{t,N}\}$ is used for constructing the visual-geometric representations.
Next, for each keypoint $p'_{t,i}$, we compute its distances to the nearest objects in $K$ different directions.
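As a concrete illustration of Eq. (4), the sketch below projects SLAM keypoints (row vectors in camera coordinates) onto the height-map grid. The particular values of S, R and T here are placeholders: in the paper, S comes from SLAM initialization or GPS adjustment, and R, T map the camera origin to the height-map frame with the drone located at (W/2, 0).

```python
import numpy as np

def project_keypoints(p_cam, S, R, T):
    """Eq. (4): p'_{t,i} = p_{t,i} S R + T, for row-vector keypoints p_cam of shape (N, 3).
    The first two output dimensions index the height map; the third is the height value."""
    return p_cam @ S @ R + T

# Toy example with placeholder transforms (not calibrated values):
W = 20                                  # height-map width; the drone sits at (W/2, 0)
S = np.eye(3) * 0.5                     # scale from SLAM initialization / GPS (assumed)
R = np.eye(3)                           # camera-to-map rotation (assumed)
T = np.array([W / 2.0, 0.0, 0.0])       # translation putting the drone at (W/2, 0)
p_cam = np.array([[1.0, 4.0, 2.0],
                  [-2.0, 6.0, 1.5]])    # two keypoints in camera coordinates
print(project_keypoints(p_cam, S, R, T))
```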
Figure 3: Illustration of fusing visual and geometric infor-
mation for updating the 2.5D height map. (a) We construct
the VG representation for each 3D keypoint (yellow dot)
projected to the 2.5D height map. The information of VG
representation is propagated to surrounding object bound-
aries, along different directions (indicated by different col-
ors). The distance between the keypoint and object bound-
ary (black arrow) determines the weight for adjusting the in-
formation propagation. A dashed arrow means that there is
no object along the corresponding direction. (b) Given the
existing object boundary, we use DAM to select the most
relevant keypoint along each direction. We use the selected
keypoints to provide fused visual and geometric informa-
tion, which is used for refining the object boundary.
Here, we refer to objects as the regions that have larger height values than the ground (whose height value is 0) in the height map $M^c_t$. As illustrated in Figure 3(a), we compute the Euclidean distance $d_{t,i,k}$ along the $k$-th direction, from $p'_{t,i}$ to the first location where the height value is larger than 0. We compute a set of distances $\{d_{t,i,1}, \ldots, d_{t,i,K}\}$ for the $K$ directions, then use $V_t$ (see Eq. (1)), $G^c_t$ and this distance set to form the VG representation $U_{t,i} \in \mathbb{R}^{K}$ as:
$$U_{t,i,k} = F^v_k(W_{t,i,k} V_t) + F^g_k(W_{t,i,k} G^c_{t,i}), \tag{5}$$
where
$$W_{t,i,k} = \sum_{k'=1}^{K} \exp(-|d_{t,i,k} - d_{t,i,k'}|). \tag{6}$$
Here, $G^c_{t,i} \in \mathbb{R}^{C}$ denotes the feature vector located at $p'_{t,i}$ in the map $G^c_t$. In Eq. (5), $U_{t,i,k}$ is represented as a weighted map with a resolution equal to that of the geometric representation ($20 \times 20$ by default), where $W_{t,i,k}$ acts as a weight of importance determined by the distance from the keypoint $p'_{t,i}$ to the nearest object boundary along the $k$-th direction. As formulated in Eq. (5) and Eq. (6), a longer distance decays the importance. Besides, we use independent sets of fully connected layers (i.e., $F^v_k$ and $F^g_k$ in Eq. (5)) to learn important information from $V_t$ and $G^c_{t,i}$. This allows content that is far from $p'_{t,i}$ to still have an impact on $U_{t,i,k}$. We construct the VG representation for each keypoint in $\{p'_{t,1}, \ldots, p'_{t,N}\}$, while each VG representation captures the visual and geometric information around the corresponding keypoint. Based on the VG representations, we propagate the information of the keypoints to each location on the height map, where the corresponding height value is refined. We also learn temporal information from the VG representations to refine the spatial locations of the keypoints at the $(t+1)$-th moment, as detailed below.
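The sketch below illustrates the two geometric ingredients of the VG representation: the distance d_{t,i,k} from a projected keypoint to the first occupied cell (height > 0) along the k-th direction, and the importance weight of Eq. (6). It assumes the reconstructed form W_{t,i,k} = sum_{k'} exp(-|d_{t,i,k} - d_{t,i,k'}|); the ray-marching step size, range, and the handling of directions that hit no object are illustrative assumptions, and the learned layers F^v_k, F^g_k of Eq. (5) are omitted.

```python
import numpy as np

def directional_distances(height_map, point, K=16, max_range=60.0, step=0.25):
    """For a projected keypoint (x, y) on the height map, march along K directions and
    return the Euclidean distance to the first cell with height > 0 (the nearest object
    boundary); np.inf if no object is hit within max_range (dashed arrow in Fig. 3a)."""
    H, W = height_map.shape
    x0, y0 = point
    dists = np.full(K, np.inf)
    for k in range(K):
        theta = 2.0 * np.pi * k / K
        dx, dy = np.cos(theta), np.sin(theta)
        r = step
        while r <= max_range:
            x, y = int(round(x0 + r * dx)), int(round(y0 + r * dy))
            if not (0 <= x < W and 0 <= y < H):
                break
            if height_map[y, x] > 0:      # first occupied cell along direction k
                dists[k] = r
                break
            r += step
    return dists

def vg_weights(dists):
    """Eq. (6) (reconstructed): W_{t,i,k} = sum_{k'} exp(-|d_{t,i,k} - d_{t,i,k'}|)."""
    d = np.where(np.isinf(dists), 1e6, dists)   # treat "no object" as very far away
    return np.exp(-np.abs(d[:, None] - d[None, :])).sum(axis=1)

height_map = np.zeros((20, 20))
height_map[5:8, 12:16] = 25.0                   # a 25 m building block
d = directional_distances(height_map, point=(10.0, 10.0))
print(vg_weights(d))
```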
4.3. Directional Attention Model
We use DAM to propagate the visual and geometric information from each keypoint to each location on the height map, along different directions. More formally, for a location $p^h_j \in \mathbb{R}^{1 \times 3}$ on the height map $M^c_t$, we conduct the information propagation that yields a new representation $Q_{t,j} \in \mathbb{R}^{C \times K}$ as:
$$Q_{t,j} = \sum_{i=1}^{N} G^c_{t,j}\, U^\top_{t,i}. \tag{7}$$
Along the second dimension of the representation $Q_{t,j}$, we perform max pooling to yield $Q'_{t,j} \in \mathbb{R}^{C}$ as:
$$Q'_{t,j,c} = \max(Q_{t,j,c,1}, \ldots, Q_{t,j,c,K}). \tag{8}$$
As illustrated in Eq. (7), $Q_{t,j,c,k}$ summarizes the influence of all keypoints along the $k$-th direction. We perform max pooling on the set $\{Q_{t,j,c,1}, \ldots, Q_{t,j,c,K}\}$ (see Eq. (8)), attending to the most informative direction to form the representation $Q'_{t,j,c}$ (see Figure 3(b)). To further refine the height map, we use the representation $Q'_t \in \mathbb{R}^{H_M \times W_M \times C}$ to compute another residual update map $R^r_t \in \mathbb{R}^{H_M \times W_M}$, which is added to the height map $M^c_t$ to form a new height map $M^r_{t+1} \in \mathbb{R}^{H_M \times W_M}$ as:
$$M^r_{t+1} = M^c_t + R^r_t, \tag{9}$$
where
$$R^r_t = F^r(V_t, Q'_t). \tag{10}$$
Again, $F^r$ denotes a set of convolutional layers. We make use of the new height map $M^r_{t+1}$ for the path planning at the $(t+1)$-th moment.
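A small tensorized sketch of Eqs. (7)-(10): the outer product of each map location's geometric feature with every keypoint's VG vector, summed over keypoints, then max-pooled over the K directions, followed by a residual refinement of the height map. The channel sizes and the small CNN standing in for F^r are assumptions, not the paper's exact layer configuration.

```python
import torch
import torch.nn as nn

def dam_pool(G_c, U):
    """Eqs. (7)-(8). G_c: (H*W, C) geometric feature per map location j.
    U: (N, K) VG representation per keypoint i.
    Q_{t,j} = sum_i G^c_{t,j} U_{t,i}^T in R^{C x K}  (Eq. 7),
    Q'_{t,j,c} = max_k Q_{t,j,c,k}                    (Eq. 8)."""
    Q = torch.einsum('jc,ik->jck', G_c, U)   # einsum sums over the keypoint index i
    return Q.max(dim=2).values               # attend to the strongest direction

# Toy refinement step (Eqs. 9-10) with an assumed small CNN standing in for F^r.
H = W = 20
C, N, K = 32, 50, 16
G_c = torch.randn(H * W, C)                  # geometric features of M^c_t
U = torch.randn(N, K)                        # VG representations
V_t = torch.randn(1, C, H, W)                # visual representation

Q_prime = dam_pool(G_c, U).t().reshape(1, C, H, W)
f_r = nn.Sequential(nn.Conv2d(2 * C, C, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(C, 1, 3, padding=1))
R_r = f_r(torch.cat([V_t, Q_prime], dim=1))  # Eq. (10): residual update map R^r_t
M_c = torch.randn(1, 1, H, W)
M_r_next = M_c + R_r                         # Eq. (9): refined height map M^r_{t+1}
```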
We refine not only the 2.5D height map but also the 3D keypoints at the $(t+1)$-th moment. Assume that we use SLAM to produce a new set of keypoints $\{p_{t+1,1}, \ldots, p_{t+1,N}\}$. We remark that the keypoint sets at the $t$-th and $(t+1)$-th moments are not necessarily the same. To refine the new keypoint $p_{t+1,j} \in \mathbb{R}^{1 \times 3}$, we use DAM to compute the representation $p'_{t+1,j} \in \mathbb{R}^{3 \times K}$ as:
$$p'_{t+1,j} = \sum_{i=1}^{N} p_{t,i}\, U^\top_{t,i}. \tag{11}$$
Figure 4: Overview of our 3D urban navigation dataset, including 7 city scenes with different characteristics.
In this way, DAM distills the information of the keypoints at the $t$-th moment, which is propagated to the next moment. Again, we use max pooling to form the spatial offset $\Delta p_{t+1,j} \in \mathbb{R}^{1 \times 3}$ for updating the keypoint $p_{t+1,j}$ as:
$$\Delta p_{t+1,j,c} = \max(p'_{t+1,j,c,1}, \ldots, p'_{t+1,j,c,K}). \tag{12}$$
We take the average of the updated keypoints $p_{t+1,j} + \Delta p_{t+1,j}$ and the estimated keypoints $p_{t+1,j}$, in place of the original ones, to construct the VG representation at the $(t+1)$-th moment.
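The same attention machinery yields the keypoint offsets. Below is a sketch of Eqs. (11)-(12) and of the averaging of updated and estimated keypoints described above; it involves no learned parameters beyond U itself, and the tensor shapes follow the paper's notation.

```python
import torch

def keypoint_offsets(p_prev, U):
    """Eqs. (11)-(12). p_prev: (N, 3) keypoints at moment t; U: (N, K) VG representations.
    p'_{t+1,j} = sum_i p_{t,i} U_{t,i}^T  in R^{3 x K}   (Eq. 11),
    Delta p_{t+1,j,c} = max_k p'_{t+1,j,c,k}             (Eq. 12).
    Note: as written, Eq. (11) does not depend on j, so every new keypoint receives the
    same offset; we follow the formula as given in the paper."""
    p_dir = torch.einsum('ic,ik->ck', p_prev, U)   # (3, K), summed over keypoints i
    return p_dir.max(dim=1).values                  # (3,) spatial offset

p_t = torch.randn(50, 3)        # keypoints at moment t
p_t1 = torch.randn(50, 3)       # keypoints newly estimated by SLAM at moment t+1
U = torch.randn(50, 16)         # VG representations at moment t

delta = keypoint_offsets(p_t, U)
# Average of the updated and the estimated keypoints, used for the next VG representation:
p_t1_refined = 0.5 * ((p_t1 + delta) + p_t1)
```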
4.4. Training Details
We use the $L_1$ loss function for training the VGF-Net:
$$\mathcal{L}(M^{gt}_t, M^r_t) = \sum_{t=1}^{T} \sum_{j=1}^{HW} \big|M^{gt}_{t,j} - M^r_{t,j}\big|, \tag{13}$$
where $M^{gt}_t \in \mathbb{R}^{H \times W}$ is the ground-truth height map. We select 8 pairs of RGB images and height maps ($T = 8$) to construct each mini-batch for the standard SGD solver. The height and width of each RGB image are $224 \times 224$, and those of the height map are $20 \times 20$. The training set contains nearly 24000 images randomly sampled from 3 scenes, while we test the model on 24000 samples from the other 3 scenes. Details about the dataset can be found in Sec. 5. We train the network for 30 epochs, and use the final snapshot of the network parameters for testing. The learning rate is set to 0.001 for the first 15 epochs, and decayed to 0.0001 for a more stable optimization.
By default, the backbone of $F^v$ and $F^g$ is a ResNet-18, while $F^c$ and $F^r$ each consist of two stacked $3 \times 3$ convolutional layers with max-pooling and batch normalization.
Note that it is a key contribution of our work to learn the spatial offsets of 3D keypoints without explicitly using any ground-truth
data. This is done by modeling the computation of spatial
offsets as a differentiable function with respect to the VG
representation. In this way, we enable the end-to-end learn-
ing of spatial offsets, where the related network parameters
can be optimized by the back-propagated gradients. It sig-
nificantly reduces the effort for data annotation, while al-
lows the network training to be flexibly driven by data.
When constructing the VG representation, we set the
number of directions K = 16 for each keypoint, and the number of keypoints N = 50 at each moment. We remark
that these hyper-parameters are chosen based on the valida-
tion results.
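For completeness, below is a sketch of the training objective of Eq. (13) over a mini-batch of T = 8 moments. The optimizer comment mirrors the settings stated above; the random tensors stand in for the network outputs, so this is only an illustration of the loss, not the authors' training code.

```python
import torch

def height_map_l1_loss(pred_maps, gt_maps):
    """Eq. (13): sum over the T moments and all H*W locations of |M^gt_{t,j} - M^r_{t,j}|.
    pred_maps, gt_maps: tensors of shape (T, H, W)."""
    return (gt_maps - pred_maps).abs().sum()

# Assumed setup following Sec. 4.4 (T = 8, 20x20 height maps, SGD with lr 0.001 -> 0.0001):
T, H, W = 8, 20, 20
pred = torch.randn(T, H, W, requires_grad=True)   # stand-in for the network outputs M^r_t
gt = torch.randn(T, H, W)
loss = height_map_l1_loss(pred, gt)
loss.backward()

# In a real run, an SGD optimizer over the network parameters would be stepped here, e.g.:
# optimizer = torch.optim.SGD(model.parameters(), lr=0.001)   # decayed to 0.0001 after epoch 15
```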
5. Results and Discussion
5.1. Description of Experimental Dataset
To promote related research on drone navigation, we have collected a new 3D urban navigation dataset. This dataset contains 7 models of different city scenes (see Figure 4). Note that New York, Chicago, San Francisco and Las Vegas are downloaded Google Earth models, which are similar to the real-world scenes in appearance, but most objects inside them are buildings only. We also have Shenzhen, Suzhou and Shanghai, which are manually built from maps by professional modelers and contain rich 3D objects (e.g., buildings, trees, street lights and road signs) and other stuff (e.g., ground, sky and sea). There are various spatial configurations of objects, building styles and weather conditions in these 3D scenes. Thus, we provide challenging data for evaluating navigation systems.
Table 1: Statistics of our 3D urban navigation dataset. Note that in addition to buildings, there may also be many other objects we must consider, such as trees, flower beds, and street lights, which greatly increase the challenge of the height mapping and autonomous navigation tasks.
scene | area (km²) | objects (#) | model size (MB) | texture images (#) | texture size (MB)
New York | 7.4 | 744 | 86.4 | 762 | 122
Chicago | 24 | 1629 | 146 | 2277 | 227
San Francisco | 55 | 2801 | 225 | 2865 | 322
Las Vegas | 20 | 1408 | 108 | 1756 | 190
Shenzhen | 3 | 1126 | 50.3 | 199 | 72.5
Suzhou | 7 | 168 | 191 | 395 | 23.7
Shanghai | 37 | 6850 | 308 | 2285 | 220
Table 2: Comparisons with different strategies of information fusion, in terms of the accuracy of height mapping (average L1 error). We also show the accuracies (%) of predicting height values with respect to different ranges of error (< 3 m, < 5 m and < 10 m). All strategies are evaluated on the testing (i.e., unknown and novel) scenes of San Francisco, Shenzhen and Chicago.

method | average L1 error (m): San Francisco / Shenzhen / Chicago | accuracy, error in [0, 3] m (%): San Francisco / Shenzhen / Chicago
w/o fusion | 4.57 / 4.57 / 4.49 | 68.95 / 68.02 / 70.05
w/ fusion | 2.37 / 2.93 / 3.41 | 85.09 / 83.63 / 78.44
w/ fusion and memory | 2.81 / 3.44 / 4.02 | 79.86 / 79.20 / 72.86
w/ fusion, memory and exchange | 2.35 / 3.04 / 3.80 | 80.54 / 82.36 / 74.73
full strategy | 1.98 / 2.72 / 3.10 | 85.71 / 86.13 / 80.46

method | accuracy, error in [0, 5] m (%): San Francisco / Shenzhen / Chicago | accuracy, error in [0, 10] m (%): San Francisco / Shenzhen / Chicago
w/o fusion | 75.02 / 74.08 / 76.86 | 83.96 / 83.96 / 85.71
w/ fusion | 89.20 / 87.39 / 84.12 | 93.87 / 92.25 / 91.18
w/ fusion and memory | 86.35 / 84.56 / 80.36 | 93.00 / 91.31 / 89.51
w/ fusion, memory and exchange | 86.13 / 86.43 / 81.41 | 93.33 / 91.85 / 89.94
full strategy | 89.22 / 88.90 / 85.30 | 94.10 / 92.56 / 91.67
The models are input to the renderer to produce sequences of RGB images. All RGB images and the associated 2.5D height maps are used to form a training set (i.e., New York, Las Vegas and Suzhou) and a testing set (i.e., San Francisco, Shenzhen and Chicago). We provide more detailed statistics of the dataset in Table 1.
To train our VGF-Net, which takes as input a rough, imperfect height map and outputs an accurate height map, we use 5 types of manipulations (i.e., translation, height increase/decrease, size dilation/contraction, creation and deletion) to disturb the object boundaries in the ground-truth height map.
Figure 5: Illustration of disturbance manipulations. Actu-
ally, these manipulations can be combined to yield the dis-
turbance results (e.g., translation and dilation). The bottom
row of this figure shows the difference between height maps
before/after disturbance. The residual map is learned by our VGF-Net to recover the disturbed height map to its undisturbed counterpart.
Each disturbance increases or decreases height values by 10 m at certain map locations. See Figure 5 for an illustration of our manipulations.
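A minimal sketch of one of the disturbance manipulations used to synthesize imperfect input height maps: a ±10 m height change on a randomly chosen region, as described above. The region placement and size are illustrative assumptions; translation, dilation/contraction, creation and deletion would be implemented analogously.

```python
import numpy as np

def disturb_height(gt_map, rng, delta=10.0):
    """Disturb a ground-truth height map by increasing or decreasing the height of one
    rectangular region by 10 m (one of the 5 manipulation types in Sec. 5.1).
    Region placement and size here are illustrative assumptions."""
    disturbed = gt_map.copy()
    h, w = gt_map.shape
    # Pick a random rectangular region (stand-in for an object boundary region).
    y, x = rng.integers(0, h - 4), rng.integers(0, w - 4)
    sign = rng.choice([-1.0, 1.0])                    # height increase or decrease
    disturbed[y:y + 4, x:x + 4] = np.maximum(0.0, disturbed[y:y + 4, x:x + 4] + sign * delta)
    return disturbed

rng = np.random.default_rng(0)
gt = np.zeros((20, 20)); gt[5:9, 5:9] = 30.0          # a 30 m building
noisy = disturb_height(gt, rng)
residual_target = gt - noisy                          # what VGF-Net should learn to recover
```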
5.2. Different Strategies of Information Fusion
The residual update, VG representation and DAM are
critical components of VGF-Net, defining the strategy of
information fusion. Below, we conduct an internal study by
removing these components, and examine the effect on the
accuracy of height mapping (see Table 2).
First, we report the performance using visual informa-
tion only for height mapping, disabling any visual and geo-
metric fusion. Here, the visual information is learned from
RGB images (see the entries “w/o fusion” in Table 2). But
visual information is insufficient for reconstructing height
maps, which requires the modeling of geometric relation-
ship between objects, yielding lower performances com-
pared to other methods using geometric information.
Next, we examine the effectiveness of the residual update strategy. At each moment, the residual update allows VGF-Net
to reuse the mapping result produced earlier. This strategy,
where the useful visual and geometric contents can be ef-
fectively distilled and memorized at all moments, improves
the reliability of height mapping. Thus, by removing the
residual update (see the entries “w/ fusion” in Table 2) from
VGF-Net (see the entries “full strategy”), we degrade the
performance of height mapping.
We further study the effect of VG representation on the
performance. The VG representation can be regarded as an
information linkage. It contains fused visual and geometric
information, which is exchanged among objects. Withoutthe VG representation, we use independent sets of convo-
lutional layers to extract the visual and geometric represen-
tations from the image and height map, respectively. The
representations are simply concatenated for computing the
residual update map (see the entries “w/ fusion and mem-
ory" in Table 2). This disconnects the
communication between objects and leads to performance
drops on almost all scenes, compared to our full strategy of
information fusion.
We find that the performance of using memory of height
values lags behind the second method without using mem-
ory (see the entries "w/ fusion" in Table 2). We attribute this to the fact that information fusion with memory easily accumulates errors in the height map over time. Thus, it is critical to compute the VG representation based on the memorized information, enabling information exchange between objects
(see the entries “w/ fusion, memory and exchange”). Such
exchange process provides richer object relationship to ef-
fectively address the error accumulation problem, signifi-
cantly assisting height mapping at each moment.
Finally, we investigate the importance of DAM (see the
entries “w/ fusion, memory and exchange” in Table 2). We
remove only DAM from the full model, by directly us-
ing VG representations to compute the residual update map
and spatial offsets for refining the height map and key-
points. Compared to this fusion strategy, our full strategy
with DAM provides a more effective way to adjust the im-
pact of each keypoint along different directions. Therefore,
our method achieves the best results on all testing scenes.
5.3. Sensitivity to the Quality of Height Map
As demonstrated in the above experiment, iterative information fusion is important for achieving a more global understanding of the 3D scene and thus improving the height map estimation. During the iterative procedure, problematic height values may be memorized and negatively impact the production of the height map at future moments. In
this experiment, we investigate the sensitivity of different
approaches to the quality of height maps, by controlling
the percentage of height values that are dissimilar to the
ground-truth height maps. Again, we produce dissimilar
height maps by using disturbance manipulations to change
the object boundaries.
At each moment, the disturbed height map is input to
the trained model to compute the new height map, which is
compared to the ground-truth height map for calculating the
average L1error. In Figure 6, we compare the average L1
errors produced by 4 different information fusion strategies
(i.e., see the entries "w/ fusion", "w/ fusion and memory", "w/ fusion, memory and exchange" and "full strategy" in Table 2), which learn geometric information from height maps. As we can see, heavier disturbances generally lead to the degradation of all strategies.
Figure 6: We disturb the 2.5D height maps to examine the robustness of different information fusion approaches. We evaluate the approaches on the testing sets of San Francisco, Shenzhen and Chicago. Results are reported as average L1 errors (m) against the dissimilarity to the ground truth (20% to 80%).
Figure 7: The five indoor training scenes selected from the S3DIS dataset [1].
Figure 8: Successful navigation trajectories produced by VGF-Net in a complicated indoor testing scene from the S3DIS dataset [1].
The strategy “w/ fusion and memory” performs the worst
among all approaches, showing very high sensitivity to the
quality of height maps. This result further evidences our
finding in Sec. 5.2, where we have shown the unreliabil-
ity of the method with memory of height information but
without information exchange. Compared to other meth-
ods, our full strategy yields better results. In particular, given a very high percentage (80%) of incorrect height values, our full strategy outperforms the other methods by remarkable margins. These results clearly demonstrate the robustness of our strategy.
Table 3: We compare VGF-Net with/without using depth to other methods. All methods are evaluated on the outdoor sets (i.e., San Francisco, Shenzhen and Chicago) and the indoor set (i.e., S3DIS). Results are reported in terms of the success rates of navigation.
outdoor test | w/ depth: ground-truth depth | w/ depth: estimated depth [3] | w/o depth: VGF-Net
San Francisco | 100% | 27% | 85%
Shenzhen | 100% | 34% | 83%
Chicago | 100% | 19% | 82%

indoor test | w/ depth: LSTM [10] | w/ depth: CMP [10] | w/ depth: VGF-Net | w/o depth: LSTM [10] | w/o depth: CMP [10] | w/o depth: VGF-Net
S3DIS | 71.8% | 78.3% | 92% | 53% | 62.5% | 76%
5.4. Comparison on the Navigation Task
The quality of 2.5D height maps, which are estimated
by the height mapping, largely determines the accuracy of
drone navigation. In this experiment, we compare our VGF-
Net to different mapping approaches. All methods are di-
vided into two groups. In the first group, the approaches
apply depth information for height mapping. Note that the
depth information can be obtained by a scanner [10], or estimated by a deep network from the RGB images [3]. The
second group consists of approaches that only use RGB im-
ages to reconstruct the height map. In addition to an ini-
tial height map that can be easily obtained from various re-
sources, our VGF-Net only requires image inputs, but can
also accept depth information if available without changing
any part of the architecture. We set the flight height of the drone to 10-30 m, evaluating the success rate of 3D navigation on our outdoor dataset. Flying too high (e.g., 100 m) always leads to successful navigation, making the evaluation meaningless. On the indoor dataset [1] (see also Figure 7 and Figure 8), we report the success rate of 2D drone navi-
gation, by fixing the height of flight to 0.5 m. All results can
be found in Table 3.
Obviously, using accurate depth information can yield a
perfect success rate of navigation (see the entry “ground-
truth depth”). Here, the depth data is directly computed
from the synthesized 3D urban scenes, without involving
any noise. However, due to hardware limitations, it is difficult for a scanner to capture accurate depth data of outdoor scenes. A simple alternative is to
use deep network to estimate the depth based on the RGB
image (see the entry “estimated depth”). Depth estimation
often produces erroneous depth values for the height map-
ping, even with the most advanced method [3], thus severely
misleading the navigation process.
Figure 9: Examples of height mapping. All the height maps are selected from the outdoor dataset. We compare the noisy input height maps (first column), the predicted height maps (second column) and the ground-truth height maps (last column).
Similar to depth information, the sparse 3D keypoints used in our approach also provide valuable geometric information about objects. More
importantly, our VGF-Net uses visual cues to assist the
learning of geometric representations. Therefore, our ap-
proach without using depth produces better results than that
of using depth estimated by state-of-the-art techniques. We
have shown an example of trajectory for 3D drone naviga-
tion in Figure 1. We also show examples of height mapping
in Figure 9, where the height map with redundant boundary
(see the first two rows of Figure 9) or missing boundary (see
the last two rows of Figure 9) is input to the VGF-Net. Even
given the input height maps with much noise, our network
still precisely recovers the height information.
Depth data of indoor scenes (see Figure 7) can be obtained more easily. With the available depth information, we
can trivially input the RGB image along with the associated
depth to the VGF-Net, producing the height map. We com-
pare VGF-Net to the recent approach [10] (see the entries
“LSTM ” and “CMP”) that produces state-of-the-art indoor
navigation accuracies. Our method achieves a better result
under the same condition of training and testing. Without
depth, our approach still leads to the best result among all
image based methods. It demonstrates the generality and
ability of our approach, in terms of stably learning useful
information from different data sources. In Figure 8, we
show more navigation trajectories planned by our approach
in an indoor testing scene.
6. Conclusions and Future Work
The latest progress on drone navigation is largely driven by actively sensing and selecting useful visual and geometric information from the surrounding 3D scenes. In this pa-
per, we have presented VGF-Net, where we fuse visual and
geometric information for simultaneous drone navigation
and height mapping. Our network distills the fused infor-
mation, which is learned from the RGB image sequences
and an initial rough height map, constructing a novel VG
representation to better capture object/scene relation infor-
mation. Based on the VG representation, we propose DAM
to establish information exchange among objects and select
essential object relationship in a data-driven fashion. By us-
ing residual update strategy, DAM progressively refines the
object boundaries in the 2.5D height map and the extracted
3D keypoints, showing its generality to various complicated outdoor/indoor scenes. The mapping module runs in nearly 0.2 sec on a mobile GPU, which could be further optimized
by compression and pruning in an embedded system.
VGF-Net eventually outputs the residual update map and
spatial offsets, which are used for explicitly updating the
geometric information of objects (i.e., the 2.5D height map
and 3D keypoints). It should be noted that we currently use
convolutional layers to learn an implicit representation from the fused information and to update the visual representation. The visual content of the sequence of RGB images shows complex patterns, which together form the global ob-
ject/scene relationship. However, these patterns may be ne-
glected by the implicit representation during the learning
process. Thus, in the near future, we would like to investi-
gate a more controllable way to update the visual represen-
tation. Additionally, complex occlusion relations in the real
scenarios often lead to inaccurate height mappings in the oc-
cluded areas. In the future, we would like to further utilize
the uncertainty map of the environment, together with the
multi-view information to improve both the accuracy and
the efficiency of the mapping process. Moreover, since the
geometric modeling (triangulation of sparse keypoints) is
commonly involved in the optimization pipeline of SLAM,
effectively coupling 3D keypoint detection and height mapping would be quite interesting to explore.
Acknowledgment
We would like to thank the anonymous reviewers for
their constructive comments. This work was supported
in parts by NSFC Key Project (U2001206), Guangdong
Outstanding Talent Program (2019JC05X328), Guangdong
Science and Technology Program (2020A0505100064,
2018A030310441, 2015A030312015), DEGP Key Project
(2018KZDXM058), Shenzhen Science and Technology
Program (RCJC20200714114435012), and Guangdong
Laboratory of Artificial Intelligence and Digital Economy
(Shenzhen University).
References
[1] I. Armeni, O. Sener, A. R. Zamir, H. Jiang, I. Brilakis,
M. Fischer, and S. Savarese. 3D semantic parsing of large-
scale indoor spaces. In Proc. IEEE Conf. on Computer Vision
& Pattern Recognition , pages 1534–1543, 2016. 9, 10
[2] S. Bansal, V. Tolani, S. Gupta, J. Malik, and C. Tomlin. Com-
bining optimal control and learning for visual navigation in
novel environments. In Proc. Conf. on Robot Learning , vol-
ume 100, pages 420–429, 2020. 2, 3
[3] J. Bian, Z. Li, N. Wang, H. Zhan, C. Shen, M.-M. Cheng, and
I. Reid. Unsupervised scale-consistent depth and ego-motion
learning from monocular video. In Proc. of Advances in Neu-
ral Information Processing Systems , pages 35–45, 2019. 2,
3, 10
[4] G. Chaurasia, S. Duchene, O. Sorkine-Hornung, and
G. Drettakis. Depth synthesis and local warps for plau-
sible image-based navigation. ACM Trans. on Graphics ,
32(3):30:1–30:12, 2013. 2
[5] D. Chen, B. Zhou, V. Koltun, and P. Krähenbühl. Learning
by cheating. In Proc. Conf. on Robot Learning , volume 100,
pages 66–75, 2019. 2
[6] K. Chen, J. P. de Vicente, G. Sepulveda, F. Xia, A. Soto,
M. Vázquez, and S. Savarese. A behavioral approach to vi-
sual navigation with graph localization networks. In Proc. of
Robotics: Science and Systems, pages 1–10, 2019. 2, 3
[7] J. Engel, V. Koltun, and D. Cremers. Direct sparse odome-
try. IEEE Trans. Pattern Analysis & Machine Intelligence ,
40(3):611–625, 2017. 3
[8] J. Engel, T. Schöps, and D. Cremers. LSD-SLAM: Large-scale
direct monocular slam. In Proc. Euro. Conf. on Computer
Vision , pages 834–849, 2014. 3
[9] C. Godard, O. Mac Aodha, and G. J. Brostow. Unsupervised
monocular depth estimation with left-right consistency. In
Proc. IEEE Conf. on Computer Vision & Pattern Recogni-
tion, pages 270–279, 2017. 3
[10] S. Gupta, J. Davidson, S. Levine, R. Sukthankar, and J. Ma-
lik. Cognitive mapping and planning for visual navigation.
InProc. IEEE Conf. on Computer Vision & Pattern Recog-
nition , pages 2616–2625, 2017. 2, 3, 10, 11
[11] J. F. Henriques and A. Vedaldi. Mapnet: An allocentric spa-
tial memory for mapping environments. In Proc. IEEE Conf.
on Computer Vision & Pattern Recognition , pages 8476–
8484, 2018. 2
[12] J. Huang, S. Yang, T.-J. Mu, and S.-M. Hu. Clustervo: Clus-
tering moving instances and estimating visual odometry for
self and surroundings. In Proc. IEEE Conf. on Computer
Vision & Pattern Recognition , pages 2165–2174, 2020. 3
[13] J. Huang, S. Yang, Z. Zhao, Y.-K. Lai, and S.-M. Hu. Clus-
terslam: A slam backend for simultaneous rigid body cluster-
ing and motion estimation. In Proc. Int. Conf. on Computer
Vision , pages 5874–5883, 2019. 3
[14] D. K. Kim and T. Chen. Deep neural network for real-time
autonomous indoor navigation. arXiv preprint:1511.04668 ,
2015. 2
[15] S. Koenig and M. Likhachev. D* lite. In Proc. of Association
for the Advancement of Artificial Intelligence , pages 476–
483, 2002. 2, 4
[16] Y. Kuznietsov, J. Stückler, and B. Leibe. Semi-supervised
deep learning for monocular depth map prediction. In Proc.
IEEE Conf. on Computer Vision & Pattern Recognition ,
pages 6647–6655, 2017. 3
[17] A. Loquercio, A. I. Maqueda, C. R. Del-Blanco, and
D. Scaramuzza. Dronet: Learning to fly by driving. IEEE
Robotics and Automation Letters , 3(2):1088–1095, 2018. 2
[18] F. Ma and S. Karaman. Sparse-to-dense: Depth prediction
from sparse depth samples and a single image. In Proc. IEEE
Int. Conf. on Robotics & Automation , pages 1–8, 2018. 3
[19] P. Mirowski, M. Grimes, M. Malinowski, K. M. Hermann,
K. Anderson, D. Teplyashin, K. Simonyan, A. Zisserman,
R. Hadsell, et al. Learning to navigate in cities without a
map. In Proc. of Advances in Neural Information Processing
Systems , pages 2419–2430, 2018. 2
[20] R. Mur-Artal and J. D. Tardós. ORB-SLAM2: An open-source
slam system for monocular, stereo, and rgb-d cameras. IEEE
Trans. on Robotics , 33(5):1255–1262, 2017. 2, 4
[21] R. P. Padhy, S. Verma, S. Ahmad, S. K. Choudhury, and P. K.
Sa. Deep neural network for autonomous UAV navigation in
indoor corridor environments. Procedia Computer Science ,
133:643–650, 2018. 2
[22] N. Savinov, A. Dosovitskiy, and V. Koltun. Semi-parametric
topological memory for navigation. In Proc. Int. Conf. on
Learning Representations, pages 1–16, 2018. 2
[23] L. Tai, G. Paolo, and M. Liu. Virtual-to-real deep rein-
forcement learning: Continuous control of mobile robots for
mapless navigation. In Proc. IEEE Int. Conf. on Intelligent
Robots & Systems , pages 31–36, 2017. 2
[24] M. Tatarchenko, S. R. Richter, R. Ranftl, Z. Li, V. Koltun,
and T. Brox. What do single-view 3D reconstruction net-
works learn? In Proc. IEEE Conf. on Computer Vision &
Pattern Recognition , pages 3405–3414, 2019. 2
[25] K. Tateno, F. Tombari, I. Laina, and N. Navab. Cnn-slam:
Real-time dense monocular slam with learned depth predic-
tion. In Proc. IEEE Conf. on Computer Vision & Pattern
Recognition , pages 6243–6252, 2017. 3
[26] C. Wang, J. Wang, Y. Shen, and X. Zhang. Autonomous nav-
igation of UAVs in large-scale complex environments: A deep
reinforcement learning approach. IEEE Trans. on Vehicular
Technology , 68(3):2124–2136, 2019. 2
[27] R. Wang, M. Schworer, and D. Cremers. Stereo dso: Large-
scale direct sparse visual odometry with stereo cameras.
InProc. Int. Conf. on Computer Vision , pages 3903–3911,
2017. 3
[28] W. Wang, W. Gao, and Z. Hu. Effectively modeling piece-
wise planar urban scenes based on structure priors and cnn.
Science China Information Sciences , 62:1869–1919, 2019. 3
[29] Y. Zhu, R. Mottaghi, E. Kolve, J. J. Lim, A. Gupta, L. Fei-
Fei, and A. Farhadi. Target-driven visual navigation in in-
door scenes using deep reinforcement learning. In Proc.
IEEE Int. Conf. on Robotics & Automation , pages 3357–
3364, 2017. 2