See the publication at: https://www.researchgate.net/publication/357188680
Fast Solver for J2-Perturbed Lambert Problem Using Deep Neural Network
Article in Journal of Guidance, Control, and Dynamics · December 2021
DOI: 10.2514/1.G006091
All content following this page was uploaded by Shuang Li on 23 December 2021.
Fast solver for J2-perturbed Lambert problem using deep
neural network
Bin Yang1 and Shuang Li2*
Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
Jinglang Feng3 and Massimiliano Vasile4
University of Strathclyde, Glasgow, Scotland G1 1XJ, United Kingdom
This paper presents a novel and fast solver for the J2-perturbed Lambert problem. The solver consists of an intelligent initial guess generator combined with a differential correction procedure. The intelligent initial guess generator is a deep neural network that is trained to correct the initial velocity vector coming from the solution of the unperturbed Lambert problem. The differential correction module takes the initial guess and uses a forward shooting procedure to further update the initial velocity and exactly meet the terminal conditions. Eight sample forms are analyzed and compared to find the optimum form to train the neural network on the J2-perturbed Lambert problem. The accuracy and performance of this novel approach will be demonstrated on a representative test case: the solution of a multi-revolution J2-perturbed Lambert problem in the Jupiter system. We will compare the performance of the proposed approach against a classical standard shooting method and a homotopy-based perturbed Lambert algorithm. It will be shown that, for a comparable level of accuracy, the proposed method is significantly faster than the other two.

1 Ph.D. candidate, Advanced Space Technology Laboratory, No. 29 Yudao Str., Nanjing 211106, China.
2 Professor, Advanced Space Technology Laboratory, Email: lishua [email protected], Corresponding Author.
3 Assistant Professor, Department of Mechanical and Aerospace Engineering, University of Strathclyde, 75 Montrose Street, Glasgow, UK.
4 Professor, Department of Mechanical and Aerospace Engineering, University of Strathclyde, 75 Montrose Street, Glasgow, UK.
I. Introduction
The effect of orbital perturbations, such as those coming from a non-spherical, inhomogeneous gravity field,
leads a spacecraft to depart from the trajectory prescribed by t h e s o l u t i o n o f t h e L a m b e r t p r o b l e m i n a s i m p l e
two-body model [1], [2]. Since the perturbation due to the J2 z onal harmonics has the most significant effect around
all planets in the solar system, a body of research exists that addressed the problem of solving the perturbed Lambert
problem accounting for the J2 effect [3], [4]. This body of res earch can be classified into two categories: indirect
methods and shooting methods [5]. Indirect methods transform th e perturbed Lambert problem into the solution of a
system of parametric nonlinear algebraic equations. For instanc e, Engles and Junkins [1] proposed an indirect
method that uses the Kustaanheimo-Stiefel (KS) transformation t o derive a system of two nonlinear algebraic
equations. Der [6] presented a superior Lambert algorithm by us ing the modified iterative method of Laguerre that
h a s g o o d c o m p u t a t i o n a l p e r f o r m a n c e i f g i v e n a g o o d i n i t i a l g u e s s. Armellin et al. [7] proposed two algorithms,
based on Differential Algebra, for the multi-revolution perturb ed Lambert problems (MRPLP). One uses homotopy
over the value of the perturbati on and the solution of the unpe rturbed, or Keplerian, Lambert problem as initial guess.
The other uses a high-order Taylor polynomial expansion to map the dependency of the terminal position on the
initial velocity, and solves a system of three nonlinear equati ons. A refinement step is th en added to obtain a solution
with the required accuracy. A common problem of indirect method s is the need for a good initial guess to solve the
system of nonlinear algebraic equations. A bad initial guess in creases the time to solve the algebraic system or can
lead to a failure of the solution procedure, especially when th e transfer time is long. 3
Shooting methods transcribe the perturbed Lambert problem into the search for the initial velocity vector that provides the desired terminal conditions at a given time. Kraige et al. [8] investigated the efficiency of different shooting approaches and found that a straightforward differential correction algorithm combined with the rectangular Encke's motion predictor is more efficient than the analytical KS approach. Junkins and Schaub [9] transformed the problem into a two-point boundary value problem and applied the Newton iteration method to solve it. The main problem with shooting methods is that, with the increase of the transfer time, the terminal conditions become more sensitive to the variations of the initial velocity, and the derivatives of the final states with respect to the initial velocity are more affected by the propagation of numerical errors. In order to mitigate this problem, Arora et al. [10] proposed to compute the derivatives of the initial and final velocity vectors with respect to the initial and final position vectors, and the time of flight, with the state transition matrix. Woollands et al. [11] applied the KS transformation and the modified Chebyshev–Picard iteration to obtain the perturbed solution starting from the solution of the Keplerian Lambert problem, which is to solve the initial velocity vector corresponding to the transfer between two given points with a given time of free flight in a two-body gravitational field [12]. For the multi-revolution perturbed Lambert problem with long flight time, Woollands et al. [13] also utilized the modified Chebyshev–Picard iteration and the method of particular solutions based on local linearity to improve the computational efficiency, but the solution relies on the solution of the Keplerian Lambert problem as the initial guess. Alhulayil et al. [14] proposed a high-order perturbation expansion method that accelerates convergence, compared to conventional first-order Newton's methods, but requires a good initial guess to guarantee convergence. Yang et al. [15] developed a targeting technique using homotopy to reduce the sensitivity of the terminal position errors to the variation of the initial velocity. However, techniques that improve robustness of convergence by reducing the sensitivity of the terminal conditions to the initial velocity vector often incur a higher computational cost.
The major problem of both classes of methods can be identified in the need for a judicious initial guess, often better than the simple solution of the Keplerian Lambert problem. To this end, this paper proposes a novel method combining the generation of a first guess with machine learning and a shooting method based on finite differences. We propose to train a deep neural network (DNN) to generate initial guesses for the solution of the J2-perturbed Lambert problem. There has been a growing interest in the application of machine learning (ML) to space trajectory design [16], [17]. In Ref. [18] one can find a recent survey of the application of ML to spacecraft guidance, dynamics and control. A deep neural network is an ML model that has at least one hidden layer and can be trained using a back-propagation algorithm [18]. Sánchez-Sánchez and Izzo [19] used DNNs to achieve online real-time optimal control for precise landing. Li et al. [16] used a DNN to estimate the parameters of low-thrust and multi-impulse trajectories in multi-target missions. Zhu and Luo [20] proposed a rapid assessment approach for low-thrust transfer trajectories using a classification multilayer perceptron and a regression multilayer perceptron. Song and Gong [21] utilized a DNN to approximate the flight time of transfer trajectories with a solar sail. Cheng et al. [22] adopted a multi-scale deep neural network to achieve real-time on-board trajectory optimization with guaranteed convergence for optimal transfers. However, to the best of our knowledge, ML has not yet been applied to improve the solution of the perturbed Lambert problem.
The DNN-based solver proposed in this paper was applied to the design of trajectories in the Jovian system. The strong perturbation induced by the J2 harmonics of the gravity field of Jupiter creates significant differences between the J2-perturbed and Keplerian Lambert solutions, even for a small number of revolutions. Hence Jupiter was chosen to put the proposed DNN-based solver to the test. The performance of the combination of the DNN first guess generation and shooting will be compared against two solvers: one implementing the homotopy method of Yang et al. [15], the other implementing a direct application of Newton's method starting from a first guess generated with the solution of the Keplerian Lambert problem. The homotopy method in Ref. [15] was chosen for its simplicity of implementation and robustness also in the case of long transfer times.
The rest of this paper is organized as follows. In Sec. II, the J2-perturbed Lambert problem and the shooting method are presented. Sec. III investigates eight sample forms and their learning features for the DNN. With a comparative analysis of the different sample forms and standardization technologies, the optimal sample form for the J2-perturbed Lambert problem is found. The algorithm using the deep neural network and the finite-difference-based shooting method is proposed and implemented to solve the J2-perturbed Lambert problem in Sec. IV. Considering Jupiter's J2 perturbation, Sec. V compares the numerical simulation results of the proposed algorithm, the traditional shooting method and the method with the homotopy technique. Finally, the conclusions are drawn in Sec. VI.
II. J2-perturbed Lambert Problem
This section presents the dynamical model we used to study the J2-perturbed Lambert problem and the shooting
method we implemented to solve it.
A. Dynamical modeling with J2 perturbation
The J2 non-spherical term of the gravity field of planets and moons in the solar system induces a significant variation of the orbital parameters of an object orbiting those celestial bodies. Thus, the accurate solution of the Lambert problem [12] needs to account for the J2 perturbation, especially in the case of a multi-revolution transfer. The dynamic equations of an object subject to the effect of J2 can be written, in Cartesian coordinates, in the following form:
$$
\begin{aligned}
\dot{x} &= v_x \\
\dot{y} &= v_y \\
\dot{z} &= v_z \\
\dot{v}_x &= -\frac{\mu x}{r^3}\left[1 + \frac{3}{2} J_2 \left(\frac{R}{r}\right)^2 \left(1 - \frac{5 z^2}{r^2}\right)\right] \\
\dot{v}_y &= -\frac{\mu y}{r^3}\left[1 + \frac{3}{2} J_2 \left(\frac{R}{r}\right)^2 \left(1 - \frac{5 z^2}{r^2}\right)\right] \\
\dot{v}_z &= -\frac{\mu z}{r^3}\left[1 + \frac{3}{2} J_2 \left(\frac{R}{r}\right)^2 \left(3 - \frac{5 z^2}{r^2}\right)\right]
\end{aligned} \quad (1)
$$
where $\mu$, $R$, and $J_2$ represent the gravitational constant, mean equatorial radius and oblateness of the celestial body, respectively; $(x, y, z, v_x, v_y, v_z)$ are the Cartesian coordinates of the state of the spacecraft; and $r = \sqrt{x^2 + y^2 + z^2}$ is the distance from the spacecraft to the center of the celestial body.
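The dynamics of Eq. (1) can be sketched as a right-hand-side function suitable for a numerical integrator. The Jupiter constants below (gravitational constant, equatorial radius, J2) are illustrative values inserted here for the sketch, not taken from the paper:

```python
import numpy as np

# Illustrative Jupiter constants (assumptions, km and s units).
MU = 1.26686534e8   # gravitational constant, km^3/s^2
R_EQ = 71492.0      # mean equatorial radius, km
J2 = 0.014736       # oblateness coefficient

def j2_dynamics(t, s):
    """Right-hand side of Eq. (1); s = [x, y, z, vx, vy, vz]."""
    x, y, z, vx, vy, vz = s
    r = np.sqrt(x * x + y * y + z * z)
    k = 1.5 * J2 * (R_EQ / r) ** 2   # (3/2) J2 (R/r)^2
    q = 5.0 * (z / r) ** 2           # 5 z^2 / r^2
    ax = -MU * x / r ** 3 * (1.0 + k * (1.0 - q))
    ay = -MU * y / r ** 3 * (1.0 + k * (1.0 - q))
    az = -MU * z / r ** 3 * (1.0 + k * (3.0 - q))
    return np.array([vx, vy, vz, ax, ay, az])
```

The `(t, s)` signature is chosen so the function can be passed directly to a standard ODE integrator.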
B. Shooting Method for the J2-perturbed Lambert Problem
The classical Lambert problem (or Keplerian Lambert problem in the following) considers only unperturbed two-body dynamics [12]. However, perturbations can induce a significant deviation of the actual trajectory from the solution of the Keplerian Lambert problem. One way to take perturbations into account is to propagate the dynamics in Eq. (1) and use a standard shooting method for the solution of two-point boundary value problems.
Fig. 1 depicts the problem introduced by orbit perturbations. The solution of the Keplerian Lambert problem, dashed line, provides an initial velocity $v_0$. Because of the dynamics in Eq. (1), the velocity $v_0$ corresponds to a difference $\Delta r_{f0} = r_f - r_{f0}$ between the desired terminal position $r_f$ and the propagated one $r_{f0}$, when the dynamics is integrated forward in time, for a period $tof$, from the initial conditions $[r_0, v_0]$. In order to eliminate this error, one can use a shooting method to calculate a velocity correction $\Delta v$ that corrects $v_0$. Fig. 1 shows an example with two subsequent varied velocity vectors $\Delta v_i$ and the corresponding terminal conditions.

Fig. 1 Illustration of the shooting method based on Newton's iteration algorithm for the J2-perturbed Lambert problem
As mentioned in the introduction, shooting methods have been extensively applied to solve the perturbed Lambert problem. Different algorithms have been proposed in the literature to improve both computational efficiency and convergence, e.g. the Picard iteration [11] and the Newton iteration [23]. In this section, the standard shooting method based on Newton's algorithm is presented [23]. Given the terminal position $r_{fi} = [x_i, y_i, z_i]^T$ and the initial velocity $v_i = [v_{xi}, v_{yi}, v_{zi}]^T$ at the $i$-th iteration, the shooting method requires the Jacobian matrix:

$$
H_i = \begin{bmatrix}
\dfrac{\partial x_i}{\partial v_{xi}} & \dfrac{\partial x_i}{\partial v_{yi}} & \dfrac{\partial x_i}{\partial v_{zi}} \\
\dfrac{\partial y_i}{\partial v_{xi}} & \dfrac{\partial y_i}{\partial v_{yi}} & \dfrac{\partial y_i}{\partial v_{zi}} \\
\dfrac{\partial z_i}{\partial v_{xi}} & \dfrac{\partial z_i}{\partial v_{yi}} & \dfrac{\partial z_i}{\partial v_{zi}}
\end{bmatrix} \quad (2)
$$
to compute the correction term:

$$
\Delta v_i = H_i^{-1}\,(r_f - r_{fi}) \quad (3)
$$

where $H_i^{-1}$ is the inverse of the Jacobian matrix $H_i$, and $r_f$ is the desired terminal position, as shown in Fig. 1. The corrected initial velocity then becomes $v_{i+1} = v_i + \Delta v_i$.
Here the partial derivatives in the Jacobian matrix are approximated with forward differences. Finite differences are computed by introducing a variation $\Delta v = 10^{-6}$ in the three components of the initial velocity and computing the corresponding variation of the three components of the terminal conditions $\Delta r_{ix}$, $\Delta r_{iy}$, and $\Delta r_{iz}$. Consequently, the Jacobian matrix can be written as follows:

$$
H_i = \begin{bmatrix}
\dfrac{\Delta r_i}{\Delta v_x} & \dfrac{\Delta r_i}{\Delta v_y} & \dfrac{\Delta r_i}{\Delta v_z}
\end{bmatrix} \quad (4)
$$
Because of the need to compute the Jacobian matrix in Eq. (2), finite-difference-based shooting methods need to perform at least three integrations for each iteration. Furthermore, if the accuracy of the calculation of the Jacobian matrix in Eq. (2) is limited, this algorithm could fail to converge to the specified accuracy or diverge, which is a common situation if the time of flight is long (e.g., tens of revolutions). Homotopy techniques are an effective way to improve the convergence of standard shooting methods for MRPLP but still require an initial guess to initiate the homotopy process and can require the solution of multiple two-point boundary value problems over a number of iterations. Here a DNN is employed to globally map the change in the initial velocity to the variation of the terminal position for a variety of initial state vectors and transfer times. This mapping allows one to generate a first guess for the initial velocity change $\Delta v_i$ by simply passing the required initial state, transfer time and terminal condition as input to the DNN.
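The Newton iteration of Eqs. (2)-(4) can be sketched as follows. The fixed-step RK4 propagator, the Jupiter-like constants and the tolerances are illustrative assumptions for the sketch, not the authors' implementation:

```python
import numpy as np

# Illustrative Jupiter-like constants (km, s units); assumptions.
MU, R_EQ, J2 = 1.26686534e8, 71492.0, 0.014736

def rhs(s):
    """J2-perturbed dynamics of Eq. (1); s = [x, y, z, vx, vy, vz]."""
    r = np.linalg.norm(s[:3])
    k = 1.5 * J2 * (R_EQ / r) ** 2
    q = 5.0 * (s[2] / r) ** 2
    scale = np.array([1 + k * (1 - q), 1 + k * (1 - q), 1 + k * (3 - q)])
    return np.hstack((s[3:], -MU / r ** 3 * s[:3] * scale))

def propagate(r0, v0, tof, steps=1500):
    """Fixed-step RK4 integration over tof; returns terminal (r, v)."""
    s, h = np.hstack((r0, v0)), tof / steps
    for _ in range(steps):
        k1 = rhs(s)
        k2 = rhs(s + 0.5 * h * k1)
        k3 = rhs(s + 0.5 * h * k2)
        k4 = rhs(s + h * k3)
        s = s + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return s[:3], s[3:]

def shoot(r0, v_guess, rf, tof, dv=1e-6, tol=1e-6, max_iter=20):
    """Newton iteration on the initial velocity (Eq. (3)), with the
    Jacobian approximated by forward differences (Eq. (4))."""
    v = v_guess.astype(float).copy()
    for _ in range(max_iter):
        r_end, _ = propagate(r0, v, tof)
        err = rf - r_end                    # terminal position error
        if np.linalg.norm(err) < tol:
            break
        H = np.zeros((3, 3))
        for j in range(3):                  # one extra propagation per column
            vp = v.copy()
            vp[j] += dv
            rp, _ = propagate(r0, vp, tof)
            H[:, j] = (rp - r_end) / dv
        v = v + np.linalg.solve(H, err)     # correction of Eq. (3)
    return v
```

Each iteration costs four propagations: one nominal run plus one per velocity component for the finite-difference Jacobian, matching the "at least three integrations" count above once the nominal run is reused.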
In the following, we will present how we trained the DNN to generate good first guesses to initiate a standard shooting method. We will show that an appropriately trained DNN can generate initial guesses that provide improved convergence of the shooting method even for multi-revolution trajectories. It will be shown that the use of this initial guess improves the robustness of convergence of a standard shooting method and makes it significantly faster than the homotopy method in [15].
III. Sample Learning Feature Analysis
A DNN consists of multiple layers of neurons with a specific architecture, which is an analytical mapping from inputs to outputs once its parameters are given. The typical structure of a DNN and its neuron computation is illustrated in Fig. 2. The output of each neuron is generated from the input vector x, the weights of each component w, the offset value b, and the activation function y = f(x). The inputs are provided according to the specific problem or the outputs of the neurons of the previous layer. The weight and offset values are obtained through the sample training. The activation function is fixed once the network is built. The training process includes two steps: the forward propagation of the input from the input layer to the output layer; and then the back propagation of the output error from the output layer to the input layer. During this process, the weight and the offset between adjacent layers are adjusted, or trained, to reduce the error of the outputs. Each neuron computes

$$
s = \sum_i w_i x_i + b, \qquad y = f(s)
$$

Fig. 2 The diagram of the DNN structure and neuron computation
The ability of a DNN to return a good initial guess depends highly on the representation and quality of the samples used to train the network. High-quality samples can not only improve the output accuracy of the network, but also reduce the training cost. Therefore, in the following, we present the procedure used to generate samples with the appropriate features.
A. Definition of Sample Form and Features
In this work two groups of sample forms have been considered: one has the initial velocity $v_0$ solving the J2-perturbed Lambert problem as output, the other has the velocity correction $\Delta v_0$ to an initial guess of $v_0$ as output.

For the first group of sample forms, the input to the neural network includes the known initial and terminal positions $r_0, r_f$ and the time of flight $tof$. The output is only the initial velocity $v_0$, as the terminal velocity can be obtained through orbital propagation once the initial velocity is solved. This type of sample form is defined as

$$
S_v = \{\, (r_0, r_f, tof),\ v_0 \,\} \quad (5)
$$

where the subscripts 0 and f denote the start and end of the transfer trajectory, respectively. Thus, when trained with the sample form in Eq. (5), the DNN is used to build a functional relationship between $(r_0, r_f, tof)$ and $v_0$.
The second group of sample forms was further divided into two subgroups: one that uses the initial state of the spacecraft $r_0$, the time of flight $tof$ and the terminal error $\Delta r_f$ as input, and the other that uses the initial state $r_0$, the time of flight $tof$, the terminal position error $\Delta r_f$ and the initial velocity vector from the Keplerian solution $v_d$ as inputs. These two sample forms are defined as follows:

$$
S_{dv1} = \{\, (r_0, tof, \Delta r_f),\ \Delta v_0 \,\}, \qquad
S_{dv2} = \{\, (r_0, v_d, tof, \Delta r_f),\ \Delta v_0 \,\} \quad (6)
$$

In Eq. (6) the output $\Delta v_0$ is always the initial velocity correction $\Delta v_0 = v_0 - v_d$, in which $v_0$ is the initial velocity that solves the J2-perturbed Lambert problem. Thus, when trained with sample forms $S_{dv1}$ and $S_{dv2}$, the DNN realizes a mapping between $\Delta v_0$ and $(r_0, tof, \Delta r_f)$ or $(r_0, v_d, tof, \Delta r_f)$, respectively. The difference between $S_{dv1}$ and $S_{dv2}$ is whether the input includes the initial velocity $v_d$ that is necessary for solving the Jacobian matrix. Therefore, it is theoretically easier to obtain the desired mapping with the input including the initial velocity, i.e. $S_{dv2}$. However, this increases the dimensionality of the sample and might increase the difficulty of training.
For each group of sample forms there are three main ways of parameterizing the state of the spacecraft: Cartesian coordinates, spherical coordinates and the mean orbital elements. Cartesian coordinates provide a general and straightforward way to describe the motion of a spacecraft, but the state variables change significantly over time even for circular orbits with no orbital perturbations. Spherical coordinates can provide a more contained and simpler variation of the state variables but are singular at the poles. Double-averaged mean orbital elements present no variation of semimajor axis, eccentricity and inclination due to J2 and a constant variation of the argument of the perigee and right ascension of the ascending node [24]. Which parameterization to choose for the training of the DNN will be established in the remainder of this section. The structures of Eqs. (5) and (6) expressed in terms of these three coordinate systems are as follows:

$$
\begin{aligned}
S_{v\text{-}Car} &= \{\, (x_0, y_0, z_0, x_f, y_f, z_f, tof),\ (v_{x0}, v_{y0}, v_{z0}) \,\} \\
S_{v\text{-}Sph} &= \{\, (r_0, \theta_0, \varphi_0, r_f, \theta_f, \varphi_f, tof),\ (v_0, \theta_{v0}, \varphi_{v0}) \,\} \\
S_{v\text{-}OEm} &= \{\, (oe_0, oe_f, tof),\ v_0 \,\} \\
S_{dv1\text{-}Car} &= \{\, (x_0, y_0, z_0, \Delta x_f, \Delta y_f, \Delta z_f, tof),\ (\Delta v_{x0}, \Delta v_{y0}, \Delta v_{z0}) \,\} \\
S_{dv1\text{-}Sph} &= \{\, (r_0, \theta_0, \varphi_0, \Delta r_f, \Delta\theta_f, \Delta\varphi_f, tof),\ (\Delta v_0, \Delta\theta_{v0}, \Delta\varphi_{v0}) \,\} \\
S_{dv2\text{-}Car} &= \{\, (x_0, y_0, z_0, v_{xd}, v_{yd}, v_{zd}, \Delta x_f, \Delta y_f, \Delta z_f, tof),\ (\Delta v_{x0}, \Delta v_{y0}, \Delta v_{z0}) \,\} \\
S_{dv2\text{-}Sph} &= \{\, (r_0, \theta_0, \varphi_0, v_d, \theta_{vd}, \varphi_{vd}, \Delta r_f, \Delta\theta_f, \Delta\varphi_f, tof),\ (\Delta v_0, \Delta\theta_{v0}, \Delta\varphi_{v0}) \,\} \\
S_{dv2\text{-}OEm} &= \{\, (oe_d, \Delta r_f, \Delta\theta_f, \Delta\varphi_f, tof),\ (\Delta v_0, \Delta\theta_{v0}, \Delta\varphi_{v0}) \,\}
\end{aligned} \quad (7)
$$
where the subscripts Car, Sph and OEm denote the Cartesian coordinates, the spherical coordinates and the mean orbital elements, respectively; $x$, $y$, and $z$ are the Cartesian components of the position vector; $r$, $\theta$, and $\varphi$ are the distance, azimuth, and elevation angle of the position vector in the spherical coordinate system; and $oe = [a, e, i, \Omega, \omega, M]^T$ represents the mean orbital elements.
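A possible implementation of the Cartesian-to-spherical mapping used by the *-Sph sample forms (magnitude, azimuth, elevation) is sketched below. The paper does not state its exact angle conventions, so the ones here are assumptions:

```python
import numpy as np

def cart_to_sph(vec):
    """Map a Cartesian vector to (magnitude, azimuth, elevation).
    Azimuth is measured in the x-y plane from +x; elevation from the
    x-y plane toward +z (assumed conventions)."""
    x, y, z = vec
    r = np.sqrt(x * x + y * y + z * z)
    theta = np.arctan2(y, x)        # azimuth in (-pi, pi]
    phi = np.arcsin(z / r)          # elevation in [-pi/2, pi/2]
    return np.array([r, theta, phi])

def sph_to_cart(sph):
    """Inverse mapping of cart_to_sph."""
    r, theta, phi = sph
    return r * np.array([np.cos(phi) * np.cos(theta),
                         np.cos(phi) * np.sin(theta),
                         np.sin(phi)])
```

The same mapping applies to velocity vectors, giving the speed and the two direction angles used in the $S_{v\text{-}Sph}$ and $S_{dv2\text{-}Sph}$ forms.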
B. Performance Analysis of Different Sample Forms
In this section the performance of the eight sample forms defined in Eq. (7) is assessed in order to identify the best one to train the DNN. We always generate a value for the initial conditions starting from an initial set of orbital elements. Values of the orbital parameters for each sample are randomly generated with the rand function in MATLAB using a uniform distribution over the intervals defined in Table 1. Note that semimajor axis and eccentricity are derived from the radii of the perijove and apojove. Considering the strong radiation environment of Jupiter and the distribution of the Galilean moons, we limit the radius of the pericentre $r_p$ of the initial orbit of each sample to the interval $[5 R_J, 30 R_J]$, where $R_J = 71492$ km is the Jovian mean radius. The value of the inclination is set to range in the interval [0, 1] radians. The time of flight does not exceed one orbital period $T$, which is approximately calculated using the following formula

$$
T = 2\pi \sqrt{\frac{a^3}{\mu_J}} \quad (8)
$$

where $a$ is the semimajor axis, $a = (r_a + r_p)/2$.
Table 1 Parameters' ranges of the sample

Parameter                     Range
Apojove radius ra (×RJ)       [rp, 30]
Perijove radius rp (×RJ)      [5, 30]
Inclination (rad)             [0, 1]
RAAN (rad)                    [0, 2π)
Argument of perigee (rad)     [0, 2π)
Mean anomaly (rad)            [0, 2π)
tof (T)                       (0, 1)
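The sampling of Table 1 together with Eq. (8) can be sketched as follows (using numpy's uniform generator in place of MATLAB's rand). The value of the Jovian gravitational constant MU_J is an assumption, as the paper does not state it:

```python
import numpy as np

RJ = 71492.0          # Jovian mean radius, km (from the text)
MU_J = 1.26686534e8   # assumed Jovian gravitational constant, km^3/s^2

def sample_parameters(rng):
    """Draw one parameter set from the uniform ranges of Table 1,
    then derive a, e and a time of flight below one period T (Eq. (8))."""
    rp = rng.uniform(5.0, 30.0) * RJ            # perijove radius
    ra = rng.uniform(rp / RJ, 30.0) * RJ        # apojove radius >= rp
    a = 0.5 * (ra + rp)                         # semimajor axis
    e = (ra - rp) / (ra + rp)                   # eccentricity
    inc = rng.uniform(0.0, 1.0)                 # inclination, rad
    raan = rng.uniform(0.0, 2 * np.pi)
    argp = rng.uniform(0.0, 2 * np.pi)
    ma = rng.uniform(0.0, 2 * np.pi)
    T = 2 * np.pi * np.sqrt(a ** 3 / MU_J)      # orbital period, Eq. (8)
    tof = rng.uniform(0.0, 1.0) * T             # time of flight < T
    return dict(a=a, e=e, i=inc, raan=raan, argp=argp, M=ma, tof=tof)
```

Because $a = (r_a + r_p)/2$ with both radii inside $[5 R_J, 30 R_J]$, the semimajor axis stays in the same interval and the eccentricity is always in $[0, 1)$.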
The following procedure is proposed to efficiently generate a large number of samples without solving the J2-perturbed Lambert problem:

Step 1: The initial state $[r_0, v_0]$ and time of flight $tof$ are randomly generated.
Step 2: The terminal state $[r_f, v_f]$ is obtained by propagating the initial state $[r_0, v_0]$ under the J2 perturbation dynamics model, for the propagation period $tof$.
Step 3: The Keplerian solution $v_d$ is solved from the classical Lambert problem with the initial and terminal positions $r_0$, $r_f$ and flight time $tof$.
Step 4: The end state $[r_{fd}, v_{fd}]$ is obtained by propagating the initial Keplerian state $[r_0, v_d]$ under the J2 perturbation dynamics model, for the propagation period $tof$.
Step 5: The initial velocity correction $\Delta v_0$ and the end state error $\Delta r_f$ are computed as $\Delta v_0 = v_0 - v_d$ and $\Delta r_f = r_f - r_{fd}$.
Using these five steps, we generated 100000 samples and then grouped them in the eight sample forms given in Eq. (7). Before training, a preliminary learning feature analysis is performed on the distribution of the sample data and the correlation between the inputs and the output. Specifically, the mean, standard deviation, and magnitude difference coefficient are used to describe the distribution of the data, and the Pearson correlation coefficient is chosen to evaluate the correlation of the data. Their mathematical definitions are given as follows

$$
\bar{X} = \frac{1}{n}\sum_{j=1}^{n} X_j, \qquad
\sigma = \sqrt{\frac{1}{n}\sum_{j=1}^{n} \left(X_j - \bar{X}\right)^2}, \qquad
\gamma = \log_{10}\frac{\max |X|}{\min |X|}, \quad \min |X| \neq 0 \quad (9)
$$

where $\bar{X}$ and $\sigma$ are the mean and standard deviation of the data, respectively, $n$ is the total number of data, and $\gamma$ denotes the magnitude difference coefficient that assesses the internal diversity of the data.
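The statistics of Eq. (9) can be sketched as below. The population form of the standard deviation is used here to match Eq. (9); whether the authors used the population or sample form is not stated:

```python
import numpy as np

def magnitude_difference(x):
    """Magnitude difference coefficient gamma of Eq. (9): log10 of the
    ratio between the largest and smallest nonzero absolute values."""
    ax = np.abs(np.asarray(x, dtype=float))
    ax = ax[ax > 0]                 # Eq. (9) requires min|X| != 0
    return np.log10(ax.max() / ax.min())

def describe(x):
    """Return (mean, population std, magnitude difference coefficient)."""
    x = np.asarray(x, dtype=float)
    return x.mean(), x.std(), magnitude_difference(x)
```

For data spanning two decades, e.g. [1, 10, 100], the coefficient evaluates to 2, which is the kind of spread Table 2 reports for the Cartesian variables (gamma > 5, i.e. more than five decades).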
The statistical characteristics of the variables in the samples are given in Table 2. For the variables described in Cartesian coordinates, the mean values are close to 0 but the standard deviations are generally large. Furthermore, their magnitude difference coefficients are all more than 5, which indicates a large difference in the absolute values of the variables. For the variables described in spherical coordinates, most of the standard deviations are smaller than those in Cartesian coordinates. In addition, the magnitude difference coefficients of the magnitude of the position and velocity vectors are less than 1. Variables with smaller standard deviations have better performance in the training process. Therefore, the samples with the variables represented in spherical coordinates are easier to learn than those described in Cartesian coordinates.
Table 2 The statistical distributions of the variables in the samples

Parameter | Mean | Standard deviation | Magnitude difference coefficient
r0-Car  | [-0.014; 0.087; 0.001] | [11.424; 11.424; 1.145] | [5.125; 5.949; 7.077]
r0-Sph  | [14.954; 0.007229; 0.000193] | [6.221; 1.815; 0.070] | [0.777; 4.926; 6.607]
rf-Car  | [0.001; -0.031; -0.005] | [12.438; 12.469; 1.246] | [5.094; 4.371; 6.442]
rf-Sph  | [16.503; -0.005966; -0.000386] | [6.275; 1.813; 0.070] | [0.777; 4.454; 6.321]
v0-Car  | [-0.088351; -0.032116; -0.006130] | [8.916; 8.887; 0.895] | [5.450; 5.185; 6.338]
v0-Sph  | [12.082577; -0.002034; -0.000529] | [3.647; 1.821; 0.071] | [0.773; 5.257; 5.851]
oe0     | [15.771; 0.257895; 0.087045; 3.148016; 3.137227; 3.138286] | [5.225; 0.177; 0.050; 1.813; 1.812; 1.816] | [0.774; 5.273; 4.145; 4.653; 5.422; 4.634]
oef     | [15.771; 0.257850; 0.087045; 3.147935; 3.137600; 3.151625] | [5.225; 0.177; 0.050; 1.813; 1.812; 1.528] | [0.774; 4.721; 4.145; 5.917; 5.228; 5.163]
vd-Car  | [-0.087568; -0.031006; -0.006340] | [8.915; 8.886; 0.895] | [5.654; 5.578; 6.673]
vd-Sph  | [12.081729; -0.001599; -0.000538] | [3.647; 1.821; 0.071] | [0.774; 5.651; 6.658]
Δrf-Car | [-3.162; -11.384; -0.075] | [1369.838; 1395.080; 187.322] | [10.495; 10.828; 11.371]
Δrf-Sph | [1154.249; -0.004; -0.001] | [1589.222; 1.817; 0.135] | [10.283; 4.831; 8.576]
oed     | [15.769; 0.258264; 0.087096; 3.147709; 3.136805; 3.139263] | [5.226179; 0.177; 0.051; 1.813; 1.812; 1.814] | [9.556; 4.800; 5.049; 5.240; 5.927; 4.639]
tof     | 4.023 | 3.220 | 5.481
Δv0-Car | [-0.000782; -0.001109; 0.000210] | [0.326; 0.283; 0.063] | [9.917; 10.432; 10.471]
Δv0-Sph | [0.013321; 0.003913; 0.002884] | [0.436; 1.818; 0.503] | [8.948; 4.672; 5.924]
It is also known that the learning process is easier if the correlation between the input and output of the sample is stronger. Here the Pearson correlation coefficient is used to describe this correlation and is defined as follows

$$
R_{XY} = \frac{1}{n}\sum_{j=1}^{n} \frac{\left(X_j - \bar{X}\right)\left(Y_j - \bar{Y}\right)}{\sigma_X\, \sigma_Y} \quad (10)
$$

where $n$ is the total number of sample data, $\bar{Y}$ and $\sigma_Y$ represent the mean and standard deviation of the data $Y$, and $\bar{X}$ and $\sigma_X$ denote the mean and standard deviation of the data $X$.
The matrices of the Pearson correlation coefficients of the proposed samples' inputs and outputs are given in Table 3. The elements of each Pearson correlation coefficient matrix are the correlation coefficients between the corresponding input and output variables. The signs of the elements indicate positive and negative correlations, respectively, and the absolute values represent the strength of the correlation: the greater the absolute value, the stronger the correlation.
Table 3 The matrices of the Pearson correlation coefficients of the input and output for different sample forms

[One correlation matrix per sample form: Sv-Car, Sv-Sph, Sv-OEm, Sdv1-Car, Sdv1-Sph, Sdv2-Car, Sdv2-Sph, Sdv2-OEm. The individual matrix entries are not recoverable from this text extraction; most entries are below 0.01 in absolute value, with the larger entries discussed in the text below.]
First, it is seen that most elements of the matrix are less than 0.01, indicating the correlations between the inputs and the outputs are generally weak. Second, for the first three sample forms of Table 3, the absolute values of all elements for some rows are less than 0.01. This means that some components of the output variable are in weak correlation with all input variables, and hence the mapping from these output components to the input variables is very difficult to capture. Therefore, samples with the initial velocity as output, i.e. Sv-Car, Sv-Sph, and Sv-OEm, are not deemed to be ideal for the training of the neural network. Third, by comparing the matrices listed in rows 4 to 7 of Table 3, the absolute values of the elements for the samples described in Cartesian coordinates are smaller than those for the samples described in spherical coordinates. Furthermore, for the samples in spherical coordinates, it is seen that the submatrix of each input variable in the Pearson correlation coefficients matrix is a diagonally dominant matrix, where the elements with large absolute values for each input variable are distributed in different rows and columns, and are independent. Therefore, the samples described in the spherical coordinates have better learning features and performance due to the strong correlations. Additionally, for Sdv2-Sph, which includes the Keplerian solution vd as one of the inputs, the correlation with the initial velocity correction Δv0 is [0.032, 0.004, -0.005; 0.005, 0.259, -0.001; 0.003, 0.004, 0.297], which is diagonally dominant with large diagonal values and demonstrates that the Keplerian solution is an important input. Finally, for the sample in the mean orbital elements in the last row of Table 3, the matrix only contains a few elements whose absolute values are greater than 0.01, and most of them are distributed in the first row. The mean variations of semimajor axis, eccentricity and inclination are not affected by the J2 perturbation but only by the variation of the initial velocity. Therefore, only the first row in the matrix displays larger values. In addition, the elements in the first six columns of the Pearson correlation matrix of Sdv2-OEm are generally smaller than others in Table 3, because the output of the sample is the initial velocity correction, which is calculated using the osculating orbital elements that contain both the long and the short term effects of the J2 perturbation. Thus the correlation using the mean orbital elements is moderate. This would suggest that the sample Sdv2-Sph is the best option for the training of the DNN among the eight tested sample forms. We will now quantify the training performance for each of the eight sample forms by comparing the training convergence of a given DNN. It has to be noted that the structure of the DNN plays a role as well. For example, a high dimensional sample with more variables needs a larger size DNN with more layers and neurons. However, we argue that, since the sample form selection mainly depends on the problem and the dynamics, a better sample form will have better training performance than other sample forms given the same DNN structure. For this reason, it is reasonable to compare sample forms even on DNN structures that are not optimal. The effect of the structure of DNN on the training performance will be discussed in section V.
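A screening of this kind can be reproduced with a few lines of NumPy. The arrays below are synthetic stand-ins for the sample inputs and outputs, not the paper's data; one output column is deliberately tied to one input column to mimic the diagonally dominant pattern described for the spherical samples.

```python
import numpy as np

def input_output_correlations(X, Y):
    """Pearson correlation coefficients between every input column of X
    (n_samples x n_inputs) and every output column of Y (n_samples x n_outputs).
    Returns a matrix of shape (n_outputs, n_inputs)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    cov = Yc.T @ Xc / len(X)                      # cross-covariances
    return cov / np.outer(Yc.std(axis=0), Xc.std(axis=0))

# Synthetic stand-in data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
Y = np.column_stack([0.9 * X[:, 0] + 0.1 * rng.normal(size=1000),
                     rng.normal(size=1000)])
R = input_output_correlations(X, Y)
# abs(R[0, 0]) is close to 1; the weakly coupled entries stay near 0,
# which is the kind of structure the screening above looks for.
```

Rows of near-zero entries in such a matrix flag output components that no input explains, which is exactly the criterion used to discard the Sv-* sample forms.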
Some data pretreatment is necessary to facilitate the training process and improve the prediction accuracy. Standardization, normalization and logarithms are used to pre-process data with large ranges or magnitude differences. Tests in this section were performed using a four-layer fully connected DNN with 50 neurons per
hidden layer. The activation functions of the hidden layers and the output layer are all Tanh. The Adaptive moment estimation algorithm (Adam) [25] was employed for the optimization. The maximum epoch (or number of times that the learning algorithm works through the entire training dataset) was set to 10000 and the initial learning rate was set to 0.001. The construction and training of the DNN are based on the Python implementation of TensorFlow. During the training process, the variations of the mean square error (MSE) between the output of the neural network and the true output for different sample forms are given in Fig. 3. The mathematical expression of the MSE is:

MSE = (1/n) Σ_{i=1}^{n} (ŷ_i − y_i)²    (11)

where n is the number of samples, and ŷ_i and y_i are the output predicted by the DNN and the true output respectively. Here MSE has no units because the data has been normalized before training.

Fig. 3 The training convergence history for different sample forms

From Fig. 3 one can see that the MSE of the neural network with the initial velocity correction as the output is significantly smaller than that with the initial velocity as the output. This is because the initial velocity in the sample has a larger range of values and therefore has a more scattered distribution. Also, the MSE of sample Sdv1-Sph is an
order of magnitude higher than that of sample Sdv2-Sph. Therefore, the accuracy of predicting the initial velocity correction is effectively improved by including the Keplerian velocity in the input of the sample. The blue line in Fig. 3 has obvious fluctuations due to the weak correlations between the output and the input of Sv-OEm, as shown in Table 3. Finally, the training results of the samples in spherical coordinates are better than those in Cartesian coordinates, which is consistent with the conclusions drawn in previous sections.

In summary, for the J2-perturbed Lambert problem, the samples described in spherical coordinates appear to be more suitable for the training of a DNN. In fact, among all eight sample forms, the sample form Sdv2-Sph yielded the best learning convergence, given the initial position, Keplerian velocity, the terminal position error of the Keplerian solution and time of flight as inputs and the initial velocity correction as output. Therefore, in the remainder of this paper, the Sdv2-Sph sample form is selected for the training of the DNN.
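The layout of one Sdv2-Sph training pair can be sketched as follows. The helper names (`cart2sph`, `make_sample`) and the numerical values are hypothetical, introduced only to show how the inputs and the output could be assembled in spherical coordinates.

```python
import numpy as np

def cart2sph(v):
    """Cartesian vector -> (magnitude, azimuth in [0, 2*pi), elevation)."""
    m = np.linalg.norm(v)
    az = np.arctan2(v[1], v[0]) % (2 * np.pi)
    el = np.arcsin(v[2] / m)
    return np.array([m, az, el])

def make_sample(r0, v_kep, dr_f, tof, dv0):
    """One Sdv2-Sph training pair (names hypothetical): inputs are the initial
    position, the Keplerian velocity, the terminal position error of the
    Keplerian solution and the time of flight; the output is the initial
    velocity correction. All vectors are expressed in spherical coordinates."""
    x = np.concatenate([cart2sph(r0), cart2sph(v_kep), cart2sph(dr_f), [tof]])
    y = cart2sph(dv0)
    return x, y

x, y = make_sample(np.array([7.0e5, 1.0e4, 2.0e3]),   # km, illustrative only
                   np.array([-3.0, 25.0, 1.5]),        # km/s
                   np.array([150.0, -80.0, 40.0]),     # km
                   3.6e4,                              # s
                   np.array([0.02, -0.01, 0.005]))     # km/s
# x has 10 entries (three spherical triplets plus the time of flight); y has 3.
```

The input vector is ten-dimensional and the output three-dimensional, matching the sample form selected above.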
IV. Solution of the J2-perturbed La mbert Problem Using DNN
The proposed solution algorithm (see the flow diagram in Fig. 4) is made of an Intelligent initial Guess Generator (IGG) and a Shooting Correction Module (SCM). The DNN is used in the IGG to estimate the correction of the Keplerian solution and provide an initial guess to the shooting module. The shooting method discussed in part
B of Section II is employed in the SCM to converge to the required accuracy.

Fig. 4 The flow chart of the proposed J2-perturbed Lambert problem solver

As shown in Fig. 4, first the Keplerian Lambert problem is solved with the desired initial and final position vectors. Then the initial conditions [r0, vd] are propagated forward in time under the effect of J2 to obtain the terminal position error Δrfd. With this error, the initial velocity correction is calculated using the trained DNN. The sample form and the generation method of the samples are described in Section III. Then the finite difference-based shooting method in Section II is applied to correct the initial velocity to make the terminal position meet the rendezvous constraint. The Jacobian matrix is calculated according to Eq.(4), where the partial derivative is approximated with the difference quotient to reduce the computational load. The method proposed here performs a total of 4i+2 numerical propagations to obtain the Jacobian matrix and the terminal state, where i is the number of iterations. Additionally, one solution of the Keplerian Lambert problem and one call to the DNN are necessary to obtain the initial velocity guess. Therefore, the calculation time of the proposed method mainly depends on the SCM. As it will be shown in the next section, the initial guess provided by the IGG is close enough to the final solution that the number of iterations required by the SCM to converge to the required accuracy is significantly reduced.
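The structure of the SCM loop can be sketched as below. This is a minimal sketch, not the paper's implementation: `propagate` is a toy nonlinear map standing in for the numerical propagation under J2, and the Jacobian is built column by column with difference quotients, so each iteration costs one nominal plus three perturbed propagations.

```python
import numpy as np

def propagate(v0):
    """Toy stand-in for the perturbed forward map r_f = F(v0) (illustrative:
    in the paper this is a numerical propagation of the J2 dynamics)."""
    A = np.array([[3.0, 0.2, 0.0], [0.1, 2.5, 0.3], [0.0, 0.2, 2.0]])
    return A @ v0 + 0.05 * np.sin(v0)

def shooting_correction(v0_guess, r_target, tol=1e-10, h=1e-7, max_iter=50):
    """Forward-shooting Newton loop with a finite-difference Jacobian:
    d(r_f)/d(v0) is approximated one column at a time by a difference quotient."""
    v0 = np.asarray(v0_guess, dtype=float).copy()
    for _ in range(max_iter):
        rf = propagate(v0)                      # nominal propagation
        err = rf - r_target
        if np.linalg.norm(err) < tol:
            return v0
        J = np.empty((3, 3))
        for k in range(3):                      # one perturbed propagation per column
            dv = np.zeros(3)
            dv[k] = h
            J[:, k] = (propagate(v0 + dv) - rf) / h
        v0 = v0 - np.linalg.solve(J, err)       # Newton update of the velocity
    return v0

# A good first guess (here: the true answer plus a small offset, playing the
# role of the DNN-corrected initial velocity) converges in a few iterations.
v_true = np.array([1.0, -0.5, 0.8])
r_target = propagate(v_true)
v_sol = shooting_correction(v_true + 0.01, r_target)
```

The better the first guess, the fewer Newton updates (and hence propagations) are needed, which is precisely the benefit the IGG is designed to deliver.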
V. Case Study of Jupiter Scenario
In this section, taking the Jovian system as an example, some numerical simulations are performed to demonstrate the effectiveness and efficiency of the proposed J2-perturbed Lambert solver. Firstly, different network structures and training parameters are tested to find the optimal ones for this application. Then, we simulate the typical use of the proposed solver with a Monte Carlo simulation whereby a series of transfer trajectories are computed starting from a random set of boundary conditions and transfer times. Note that although the tests in this section use the J2, μ and R of Jupiter, the proposed method can be generalized to other celestial bodies by training the corresponding DNNs with a different triplet of values J2, μ and R, but using the same sample form.
A. DNN Structure Selection and Training
With reference to the results in Section III, the samples used to train the DNN include the initial position, the initial velocity coming from the solution of the Keplerian Lambert problem, the terminal position error of the Keplerian solution, and the time of flight. The output is the initial velocity correction of the Keplerian solution and all vectors in a sample are expressed in spherical coordinates. In order to generalize the applicability of this method, the ranges of the parameters of the sample given in Table 1 have been appropriately expanded. The range of orbital inclinations is [0, π] in radians. The range of times of flight is now in the open interval (0, 10T), where T is calculated using Eq. (8) from the initial state (r0, v0). The ranges of other parameters are consistent with Table 1. In total, 200000 training samples are obtained using the rapid sample generation algorithm given in part B of Section III. Since
the structure and training parameters of the neural network also have a significant impact on the training results, in this section we analyzed different DNN structures and settings. Note that once the structure is optimized one would need to loop back and check the optimality of the sample form; however, in this paper we assume that the sample form remains reasonably good even once the DNN structure is changed.

We start by defining the activation functions. Tanh and ReLU are the common activation functions for deep learning, while Sigmoid functions are less used because the gradient tends to vanish [26]; thus, in the following, Tanh and ReLU will be used. The output ranges of Tanh and ReLU are [-1, 1] and [0, ∞] respectively, as shown in Fig. 5. The spherical coordinates (magnitude, azimuth, and elevation) of the output of the sample are in [0, ∞], [0, 2π] and [-0.5π, 0.5π]. Because the range of the elevation angle can be transformed from [-0.5π, 0.5π] to [0, π], the ranges of the three components of the spherical coordinates can all meet the requirements of ReLU. Therefore, ReLU is chosen as the activation function of the output layer.

Fig. 5 The typical activation functions for DNN

Also in this case the Adaptive moment estimation is used as optimizer. The maximum epoch is 50000 and the other training parameters are the same as in Section III. The training results of DNNs with different sizes are listed in Table 4.
Table 4 Training results of DNNs with different sizes

Hidden layers | Neurons per hidden layer | Activation function | MSE | Training time (s)
2 | 20 | ReLU | 9.423e-05 | 762
2 | 20 | Tanh | 3.286e-05 | 839
2 | 50 | ReLU | 1.435e-05 | 951
2 | 50 | Tanh | 1.226e-05 | 1084
2 | 100 | ReLU | 9.423e-06 | 1210
2 | 100 | Tanh | 9.163e-06 | 1425
3 | 20 | ReLU | 2.423e-06 | 1198
3 | 20 | Tanh | 2.154e-06 | 1267
3 | 50 | ReLU | 1.315e-06 | 1347
3 | 50 | Tanh | 1.258e-06 | 1523
3 | 100 | ReLU | 5.631e-06 | 1746
3 | 100 | Tanh | 1.226e-06 | 1935
4 | 20 | ReLU | 9.423e-06 | 1648
4 | 20 | Tanh | 3.286e-06 | 1864
4 | 50 | ReLU | 7.522e-07 | 1977
4 | 50 | Tanh | 4.816e-07 | 2186
4 | 100 | ReLU | 6.395e-05 | 2361
4 | 100 | Tanh | 2.861e-05 | 2643
The neural network with the minimum MSE has 4 hidden layers, each with 50 neurons. The activation function of its hidden layers is Tanh. Additionally, some conclusions can be made from Table 4. Firstly, the networks with ReLU as the activation function take less time for training. Secondly, the networks with Tanh as the activation function achieve smaller MSEs. Thirdly, the network with 4 hidden layers and 100 neurons in each hidden layer has overfitted during the training process.
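The selected architecture can be sketched as a plain forward pass: four hidden layers of 50 Tanh neurons and a ReLU output layer. This is an illustrative NumPy sketch with random placeholder weights, not the trained TensorFlow network.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp_forward(x, sizes=(10, 50, 50, 50, 50, 3)):
    """Forward pass of the selected architecture: 4 hidden layers of 50
    neurons with Tanh, and a ReLU output layer. Weights are random here,
    for illustration only."""
    Ws = [rng.normal(scale=0.3, size=(m, n))
          for m, n in zip(sizes[:-1], sizes[1:])]
    h = x
    for W in Ws[:-1]:
        h = np.tanh(h @ W)              # hidden activations lie in [-1, 1]
    return np.maximum(0.0, h @ Ws[-1])  # ReLU keeps outputs non-negative

y = mlp_forward(rng.normal(size=10))    # 10 inputs -> 3 outputs
```

The non-negative output is consistent with the choice made above of expressing the output in spherical coordinates with the elevation shifted into [0, π].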
The variation of the MSE of the neural network with 4 hidden layers and with 50 neurons for each layer is shown in Fig. 6. The MSE finally converges to 4.816e-07, which transforms into the mean absolute error (MAE) of the DNN's
output: [0.004241 km/s; 0.000232 rad; 0.000152 rad].

Fig. 6 MSE of the selected DNN during the training process

In order to verify the prediction accuracy of the trained DNN, 1000 new samples that are different from the trained samples were randomly regenerated with the algorithm in part B of Section III to examine the performance of the trained DNN. The initial velocity v0, which is the exact solution of the J2-perturbed Lambert problem, and the terminal position rf are used as reference values. The errors of the Keplerian solutions (vd, rfd) and the approximation of the trained DNN (vc, rfc) are calculated as follows

Δv0d = vd − v0,  Δrfd = rfd − rf
Δv0c = vc − v0,  Δrfc = rfc − rf    (12)

Fig. 7 and Fig. 8 show the comparison between the Keplerian solutions and the approximation of the trained DNN. [Δv0dx; Δv0dy; Δv0dz] and [Δrfdx; Δrfdy; Δrfdz] are the errors of the initial velocity and the terminal position of the Keplerian solutions, respectively. [Δv0cx; Δv0cy; Δv0cz] and [Δrfcx; Δrfcy; Δrfcz] are the errors of the initial velocity and the terminal position after the DNN's corrections, respectively. It can be seen that the mean of these errors (red points in Fig. 7 and Fig. 8) is much closer to 0 after the DNN's correction. The standard deviation of these errors has also reduced significantly after the correction, which is indicated by the length of the blue bars in Fig. 7 and Fig. 8. After the correction by the DNN, the initial velocity error is limited to 10 m/s, and the terminal position error does not exceed 100 km. This proves that the application of the DNN has significantly improved the accuracy of the initial value with respect to a simple Keplerian Lambert solution.

Fig. 7 The statistical results of the initial velocity errors of the Keplerian solution and the DNN's correction

Fig. 8 The statistical results of the terminal position errors of the Keplerian solution and the DNN's correction

B. Performance Analysis for MRPLP

In this section the proposed DNN-based method is compared against other two methods: a traditional shooting method using Newton's iteration algorithm (SN) and the homotopic perturbed Lambert algorithm (HL) in [15].
When applying the HL, the C++ version of the Vinit6 algorithm in literature [27] is employed to implement the HL method in Ref. [15] and to decrease the CPU computation time of HL. The HL runs in Matlab and the MEX function calls the Vinit6 algorithm, running in the Visual Studio 2015 C++ compiler, to analytically propagate the perturbed trajectory. The accuracy tolerance of the Vinit6 algorithm is set at 1×10-12. The homotopy parameter is defined as the deviation in the terminal position and the other details of implementation and settings are the same as those given in Ref. [15]. For the SN and the proposed method, the dynamical models only include the J2 perturbation. For the Vinit6 algorithm, the dynamical model includes the J2, J3 and partial J4 perturbations. However, the magnitudes of J3 and J4 of Jupiter are much smaller than that of J2, so their perturbation effects are very weak compared with that of J2. Therefore, the slight difference in the dynamical model has a very limited impact on the number of iterations and running time of the HL since the Vinit6 algorithm has high computational efficiency. Therefore, the comparison among the three methods is still valid.
The performance of the three methods is compared over 11 datasets, one per number of full revolutions from 0 to 10. Each dataset has 1000 samples, which are regenerated with the method described in Section III to validate the DNN. The maximum iterations and tolerances of the three methods are listed in Table 5.
Table 5 The maximum iterations and tolerance of three methods
Algorithm Tolerance (km) Maximum iterations
SN 0.001 2000
HL 0.001 10000
DNN-based method 0.001 2000
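The tolerance and iteration budgets above define what is counted as a valid convergence in the comparison that follows; a minimal bookkeeping sketch (function name and data values hypothetical):

```python
import numpy as np

def convergence_stats(results, tol_km=0.001, max_iter=2000):
    """A run is a valid convergence if it meets the tolerance within the
    iteration budget; `results` is a list of (final_error_km, iterations)
    pairs. Returns the convergence ratio (converged runs over total runs)
    and the average iteration count over the converged runs."""
    valid = [(e, i) for e, i in results if e <= tol_km and i <= max_iter]
    ratio = len(valid) / len(results)
    avg_iters = np.mean([i for _, i in valid]) if valid else float("nan")
    return ratio, avg_iters

# Four hypothetical runs: three converge, one stalls at the iteration cap.
ratio, avg_iters = convergence_stats(
    [(1e-4, 12), (5e-4, 30), (0.05, 2000), (2e-4, 8)])
```

These are exactly the two quantities plotted per revolution count in the comparison figures.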
If the algorithm converges to a solution that meets the specified tolerance within the set number of iterations, it is recorded as a valid convergence; otherwise, as a failed convergence. The result is displayed in Fig. 9 and Fig. 10, in terms of convergence ratio (number of converged solutions over number of samples) and average number of
iterations to converge.

Fig. 9 The convergence ratio of different algorithms for the Jupiter J2-perturbed Lambert problem

Fig. 10 Average number of iterations of different algorithms on the Jupiter J2-perturbed Lambert problem

According to Fig. 9, the HL and the proposed method could converge to the required accuracy in all cases, while the valid convergence ratio of the SN decreases as the number of revolutions increases. Then, according to Fig. 10, the number of iterations of HL appears to increase linearly in log-scale as the number of revolutions increases, while the numbers of iterations of SN and the proposed DNN-based method remain nearly constant. The proposed method requires the least number of iterations. The lack of convergence of the SN with the increase in the number of revolutions is due to the growing difference between the exact solution and the solution of the Keplerian Lambert problem. For the same reason the HL progressively requires more iterations to converge. The proposed method mitigates this problem by providing a good initial guess for every number of revolutions.

The average CPU computational time of the three methods is given in Fig. 11, in which the proposed method only accounts for the time of the SCM. For the zero-revolution case, the average CPU computation times of SN, the DNN-based method and HL are 0.051 seconds, 0.027 seconds and 0.329 seconds, respectively. It is seen that the CPU calculation time of the proposed method is the shortest. This advantage becomes more obvious as the number of revolutions increases because the accurate initial guess obtained using the IGG reduces the number of iterations of the SCM. In general, the computational time increases with the increase in the number of iterations and the longer propagation time. As shown in Fig. 11, the computational times of the SN and the proposed method appear to increase linearly with the number of revolutions, while the computational time of HL appears to increase more rapidly. The figure shows that the initial guess obtained with the DNN effectively reduces the number of iterations and provides, as a result, a slower increase of the computational time with the number of revolutions. The computational time of the proposed method is below 0.5 seconds for the number of revolutions tested in this paper. The average computational times per iteration of SN, HL, and the proposed method are respectively 0.0082 s, 0.0018 s, and 0.0078 s. The proposed method and SN use the same shooting algorithm, for which each iteration needs three additional integral operations to calculate the Jacobian matrix. Their computational time per iteration is higher than that of the HL. However, though the single iteration of HL takes less time, the HL requires many more iterations than the other two methods, as shown in Fig. 11.

Fig. 11 Average CPU computational time of different methods for the Jupiter J2-perturbed Lambert problem
C. Monte Carlo Analysis

In this section we simulate the repeated use of the DNN-based method by taking a random set of boundary conditions and transfer times and computing multiple J2-perturbed Lambert solutions. Since it is essential to generate samples and train the DNN before using the proposed method, the total computational time should include the time of sample generation, the training of the DNN and the SCM. To compare the total CPU time of the above three methods, four sets of Monte Carlo simulations with 1000, 5000, 10000, and 100000 sets of boundary conditions and transfer times are performed. For each set, the numbers of revolutions are equally distributed between 0 and 10. The DNN is trained only once, using 200000 samples and the parameter settings presented in the previous section, and is called one time per MC simulation to generate the first guess. The training of the DNN was implemented in Python while the solutions of the J2-perturbed Lambert problem using the proposed method, HL and SN run in Matlab. All computations are performed on a personal computer with an Intel Core-i7 4.2 GHz CPU and 128GB of RAM. The final results are given in Fig. 12. It can be seen that the efficiency of the proposed method improves with the increase in the number of Lambert solutions to be computed. In particular, when the number of simulations is equal to or larger than 5000, the proposed method outperforms the other two methods even when including the cost of the sample generation and the training of the DNN.

Fig. 12 Total CPU time of different methods for the Jupiter J2-perturbed Lambert problem
In addition, two stress cases, where the angle between the initial and terminal positions is 180 deg or 360 deg, have been tested with the proposed method. For each revolution, 100 MC tests are performed for each case. All tests converge successfully and their average CPU computational time is given in Fig. 13, which is similar to the trend in Fig. 11. For the zero-revolution case, the CPU computation times of the 180 degree and the 360 degree scenarios are 0.024 seconds and 0.029 seconds, respectively. The case of 360 deg costs a bit more time than the case of 180 deg due to its longer time of flight for each revolution.

Fig. 13 Average CPU computational time of two stress cases for the Jupiter J2-perturbed Lambert problem

VI. Conclusion

A fast and novel method using a DNN and the finite-difference-based shooting algorithm has been proposed to solve the J2-perturbed Lambert problem. A DNN composed of several layers is an extension of conventional artificial neural networks that has excellent performance in approximating nonlinear systems. The major contribution of the novel method is to use a DNN to generate a first guess of the correction of the initial velocity to solve a J2-perturbed Lambert problem. We demonstrated that the DNN is capable of correcting the initial velocity error of the Keplerian solution and providing good initial values for the subsequent differential correction method. When applied to the Jupiter J2-perturbed Lambert problem, the errors in the initial velocity and terminal position are
limited to 5 m/s and 100 km, respectively. In addition, when compared to a direct application of a shooting method using Newton's iterations and to a homotopy perturbed Lambert algorithm, the proposed method displayed a computational time that appears to increase linearly with a slope of 0.047 with the number of revolutions. In the application scenario presented in this paper the computational time is less than 0.5 seconds even for ten revolutions. It was also shown that, compared to a direct application of a shooting method, it provides convergence to the required accuracy in all the cases analyzed in this paper. Thus, we can conclude that the proposed DNN-based generation of a first guess is a promising method to increase robustness and reduce the computational cost of shooting methods for the solution of the J2-perturbed Lambert problem.
The method proposed in this paper can be used to solve the J2-perturbed Lambert problem for other celestial bodies, by training the corresponding DNN with the corresponding J2 and μ parameters. Thus a library of pre-trained DNNs could be easily used to have a more general application to missions around any celestial body. On the other hand, adding these dynamical parameters as part of the training set would allow a single more general DNN to be used with all celestial bodies. This latter option is the object of the current investigation.
Acknowledgments
The work described in this paper was supported by the National Natural Science Foundation of China (Grant No.
11672126), sponsored by Qing Lan Project, Science and Technolog y on Space Intelligent Control Laboratory (Grant
No. 6142208200203 and HTKJ2020KL502019), the Funding for Outsta nding Doctoral Dissertation in NUAA
(Grant No. BCXJ19-12), State Scholarship from China Scholarship Council (Grant No. 201906830066). The authors
fully appreciate their financial supports.
References
[1] Engels, R. C., and Junkins, J. L., "The gravity-perturbed Lambert problem: A KS variation of parameters approach," Celestial Mechanics, Vol. 24, No. 1, 1981, pp. 3-21.
doi: 10.1007/BF01228790
[2] He, B., and Shen, H., "Solution set calculation of the Sun-perturbed optimal two-impulse trans-lunar orbits using continuation theory," Astrodynamics, Vol. 4, No. 1, 2020, pp. 75-86.
doi: 10.1007/s42064-020-0069-6
[3] Izzo, D., "Revisiting Lambert's problem," Celestial Mechanics and Dynamical Astronomy, Vol. 121, No. 1, 2015, pp. 1-15.
doi: 10.1007/s10569-014-9587-y
[4] Bombardelli, C., Gonzalo, J. L., and Roa, J., "Approximate analytical solution of the multiple revolution Lambert's targeting problem," Journal of Guidance, Control, and Dynamics, Vol. 41, No. 3, 2018, pp. 792-801.
doi: 10.2514/1.G002887
[5] Russell, R. P., "On the solution to every Lambert problem," Celestial Mechanics and Dynamical Astronomy, Vol. 131, No. 11, 2019, pp. 1-33.
doi: 10.1007/s10569-019-9927-z
[6] Der, G. J., "The superior Lambert algorithm," Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference, Maui Economic Development Board, Maui, 2011, pp. 462-490.
[7] Armellin, R., Gondelach, D., and San Juan, J. F., "Multiple revolution perturbed Lambert problem solvers," Journal of Guidance, Control, and Dynamics, Vol. 41, No. 9, 2018, pp. 2019-2032.
doi: 10.2514/1.G003531
[8] Kraige, L. G., Junkins, J. L., and Ziems, L. D., "Regularized Integration of Gravity-Perturbed Trajectories: A Numerical Efficiency Study," Journal of Spacecraft and Rockets, Vol. 19, No. 4, 1982, pp. 291-293.
doi: 10.2514/3.62255
[9] Junkins, J. L., and Schaub, H., Analytical Mechanics of Space Systems, 2nd ed., AIAA, Reston, VA, 2009, pp. 557-561.
doi: 10.2514/4.867231
[10] Arora, N., Russell, R. P., Strange, N., and Ottesen, D., "Partial derivatives of the solution to the Lambert boundary value problem," Journal of Guidance, Control, and Dynamics, Vol. 38, No. 9, 2015, pp. 1563-1572.
doi: 10.2514/1.G001030
[11] Woollands, R. M., Bani Younes, A., and Junkins, J. L., "New solutions for the perturbed Lambert problem using regularization and Picard iteration," Journal of Guidance, Control, and Dynamics, Vol. 38, No. 9, 2015, pp. 1548-1562.
doi: 10.2514/1.G001028
[12] Godal, T., "Method for determining the initial velocity vector corresponding to a given time of free flight transfer between given points in a simple gravitational field," Astronautik, Vol. 2, 1961, pp. 183-186.
[13] Woollands, R. M., Read, J. L., Probe, A. B., and Junkins, J. L., "Multiple revolution solutions for the perturbed Lambert problem using the method of particular solutions and Picard iteration," The Journal of the Astronautical Sciences, Vol. 64, No. 4, 2017, pp. 361-378.
doi: 10.1007/s40295-017-0116-6
[14] Alhulayil, M., Younes, A. B., and Turner, J. D., "Higher order algorithm for solving Lambert's problem," The Journal of the Astronautical Sciences, Vol. 65, No. 4, 2018, pp. 400-422.
doi: 10.1007/s40295-018-0137-9
[15] Yang, Z., Luo, Y. Z., Zhang, J., and Tang, G. J., "Homotopic perturbed Lambert algorithm for long-duration rendezvous optimization," Journal of Guidance, Control, and Dynamics, Vol. 38, No. 11, 2015, pp. 2215-2223.
doi: 10.2514/1.G001198
[16] Li, H., Chen, S., Izzo, D., and Baoying, H., "Deep networks as approximators of optimal low-thrust and multi-impulse cost in multitarget missions," Acta Astronautica, Vol. 166, 2020, pp. 469-481.
doi: 10.1016/j.actaastro.2019.09.023
[17] Rubinsztejn, A., Sood, R., and Laipert, F. E., "Neural network optimal control in astrodynamics: Application to the missed thrust problem," Acta Astronautica, Vol. 176, 2020, pp. 192-203.
doi: 10.1016/j.actaastro.2020.05.027
[18] Izzo, D., Märtens, M., and Pan, B., "A survey on artificial intelligence trends in spacecraft guidance dynamics and control," Astrodynamics, 2018, pp. 1-13.
doi: 10.1007/s42064-018-0053-6
[19] Sánchez-Sánchez, C., and Izzo, D., "Real-time optimal control via Deep Neural Networks: study on landing problems," Journal of Guidance, Control, and Dynamics, Vol. 41, No. 5, 2018, pp. 1122-1135.
doi: 10.2514/1.G002357
[20] Zhu, Y., and Luo, Y. Z., "Fast Evaluation of Low-Thrust Transfers via Multilayer Perceptions," Journal of Guidance, Control, and Dynamics, Vol. 42, No. 12, 2019, pp. 2627-2637.
doi: 10.2514/1.G004080
[21] Song, Y., and Gong, S., "Solar-sail trajectory design for multiple near-Earth asteroid exploration based on deep neural networks," Aerospace Science and Technology, Vol. 91, 2019, pp. 28-40.
doi: 10.1016/j.ast.2019.04.056
[22] Cheng, L., Wang, Z., Jiang, F., and Zhou, C., "Real-time optimal control for spacecraft orbit transfer via multiscale deep neural networks," IEEE Transactions on Aerospace and Electronic Systems, Vol. 55, No. 5, 2018, pp. 2436-2450.
doi: 10.1109/TAES.2018.2889571
[23] Battin, R. H., An Introduction to the Mathematics and Methods of Astrodynamics, revised ed., AIAA, Reston, VA, 1999, Chap. 6.
doi: 10.2514/4.861543
[24] Ely, T. A., "Transforming mean and osculating elements using numerical methods," The Journal of the Astronautical Sciences, Vol. 62, No. 1, 2015, pp. 21-43.
doi: 10.1007/s40295-015-0036-2
[25] Kingma, D. P., and Ba, J., "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[26] Menon, A., Mehrotra, K., Mohan, C. K., et al., "Characterization of a class of sigmoid functions with applications to neural networks," Neural Networks, Vol. 9, No. 5, 1996, pp. 819-835.
doi: 10.1016/0893-6080(95)00107-7
[27] Vinti, J. P., Orbital and Celestial Mechanics, Vol. 177, Progress in Astronautics and Aeronautics, AIAA, Reston, VA, 1998, pp. 367-385.
doi: 10.2514/4.866487